Natural language processing (NLP) has seen significant advances in recent years, enabled by pre-training text representations on large amounts of unlabeled data. Unfortunately, these advances come at the cost of large computational requirements, both at training and at inference time. The resulting models are orders of magnitude slower than those typically deployed in real applications today. This computational cost is the biggest barrier to applying such models in practice and to their wider adoption in industry. Furthermore, with the rapid development of mobile devices and end-to-end encrypted communication, it is increasingly important to be able to run NLP models directly on those devices.
We are pleased to accept research proposals focused on making NLP models more efficient at both training and inference time. We hope this work will enable production applications of such models at scale and potentially make them runnable directly on mobile devices without sacrificing the quality of models run on servers.
Applicants from the academic community are invited to submit a 1-2 page proposal outlining their intended research, budget, and estimated timeline.
Awards of up to $80,000 will be made for projects lasting up to 12 months. Successful proposals will demonstrate innovative and compelling research with the potential to significantly advance the field. Up to ten projects will be awarded.
Applications Are Currently Closed
Notifications will be sent by email to selected applicants by July 28, 2019.
Research topics should be relevant to computationally efficient NLP. Topics can include, but are not limited to:
We’re especially interested in applications to the following:
For questions related to this RFP, please contact email@example.com.