The reduction of fake and misleading content on Facebook is driven largely by state-of-the-art text and visual recognition systems, including Machine Translation, Automatic Speech and Character Recognition, and Image and Text Categorization. However, major improvements in AI systems are still needed to further strengthen online safety. What is currently lacking is a well-defined set of tasks around online safety, together with appropriate benchmarks to quantify performance.
The goal of this RFP is to challenge the community to address these problems together, and to solicit new tasks.
To make progress on these major societal issues, Facebook is partnering with universities to support them in building open-source datasets on which the community can measure how well existing techniques reduce misleading behavior online.
Applications Are Currently Closed
Notifications will be sent by email to selected applicants by mid-July.
The goal of this RFP is to help the academic community address problems in the area of safer online conversations. This includes problems around misinformation, hate speech, and inauthentic online behavior, to name a few. The grant aims to fund projects that build research infrastructure, such as datasets or evaluation platforms, that can accelerate research more broadly. In particular, we are encouraging:
Funding can range from $10K to $50K, depending on the proposal, but should roughly match the cost of annotation, e.g., using a crowdsourcing annotation platform or paying expert annotators.
Notes:
For questions related to this RFP, please email academicrelations@fb.com.