While neural networks have achieved state-of-the-art results on various natural language processing (NLP) tasks, their robustness to changes in the input distribution and their ability to transfer to related tasks remain among the biggest open challenges. Modern NLP systems interact with text from heterogeneous sources with distinct distributions, while the underlying linguistic regularities may be shared across tasks. This presents several interrelated challenges: less robust models can produce low-quality outputs when exposed to natural noise, are susceptible to adversarial inputs, and, in extreme cases, can fail catastrophically. We invite the academic community to propose novel and robust methods to address these challenges.
Massachusetts Institute of Technology
Carnegie Mellon University
University of Massachusetts Amherst
Stanford University
Applications Are Currently Closed
Notifications will be sent by email to selected applicants by July 28, 2019.
Recipients will be invited to attend a workshop in Menlo Park, CA in August 2020.

Research topics should be relevant to understanding and improving the robustness of neural NLP systems, such as Machine Translation, Question Answering, and Representation Learning.