July 31, 2017

Advancing understanding at ACL 2017

By: Meta Research

Facebook researchers are in Vancouver, Canada this week to present their research at the Annual Meeting of the Association for Computational Linguistics (ACL).

At Facebook, understanding and using language is an ambitious and long-term AI research goal. A fruitful approach has been to focus on a few main research avenues, namely dialog, text representation, and machine translation. The work we are presenting at ACL supports our efforts in each of these directions.

Understanding dialog

Our work on dialog is outlined in this recent blog post, The Long Game Towards Understanding Dialog. A truly effective dialogue system will be an assistive technology, likely including a chatbot-style system able to interact with people through natural language communication.

Tackling open domain queries

The paper Reading Wikipedia to Answer Open-Domain Questions, by Danqi Chen of Stanford University and Facebook AI Research (FAIR) researchers Adam Fisch, Jason Weston, and Antoine Bordes, aims to answer questions such as:

How many provinces did the Ottoman Empire contain in the 17th century?
What U.S. state’s motto is “Live Free or Die”?
What part of the atom did Chadwick discover?

These questions come from Question-Answer (QA) training datasets that Facebook has used to build a system that tackles ‘open domain’ queries. In this case the researchers used Wikipedia as the unique knowledge source from which to identify the relevant text span in an article that answers the question. This task of machine reading at scale combines two sub-challenges: document retrieval (finding the relevant articles) and machine comprehension of text (identifying the answer spans within those articles).

The system’s response to the first question looks as follows:

Article: Ottoman Empire
Paragraph: ... At the beginning of the 17th century the empire contained 32 provinces and numerous vassal states. Some of these were later absorbed into the Ottoman Empire, while others were granted various types of autonomy during the course of centuries.

A key requirement for this research was that the system perform well across all the QA datasets simultaneously.

As with many such computational challenges, the collaboration blended several methods to build a complete system, in this case techniques from search, distant supervision, and multitask learning.

Our experiments on answering factoid questions, using Wikipedia as the unique knowledge source for documents, used DrQA, a system for reading comprehension applied to open-domain question answering. The accompanying GitHub repository includes code, data, and pre-trained models for processing and querying Wikipedia as described in the paper.
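
To make the two-stage design concrete, here is a minimal, self-contained sketch of the retrieve-then-read pattern. It is not the released DrQA code: TF-IDF retrieval narrows a toy set of Wikipedia-style paragraphs to a few candidates, and a stand-in reader (a crude word-overlap heuristic in place of the paper’s trained reading-comprehension model) picks the best one.

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy "knowledge source": a few Wikipedia-style paragraphs.
paragraphs = [
    "At the beginning of the 17th century the empire contained 32 provinces "
    "and numerous vassal states.",
    "Chadwick was awarded the Nobel Prize for his discovery of the neutron.",
    "New Hampshire's state motto is Live Free or Die.",
]

def retrieve(question, docs, k=2):
    # Document retrieval: rank paragraphs by TF-IDF similarity to the question.
    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    doc_vectors = vectorizer.fit_transform(docs)
    scores = (doc_vectors @ vectorizer.transform([question]).T).toarray().ravel()
    return [docs[i] for i in scores.argsort()[::-1][:k]]

def read(question, candidates):
    # Stand-in for the trained reader: pick the candidate with the most word
    # overlap. The real system predicts an exact answer span instead.
    q_words = set(question.lower().split())
    return max(candidates, key=lambda p: len(q_words & set(p.lower().split())))

question = "How many provinces did the Ottoman Empire contain in the 17th century?"
print(read(question, retrieve(question, paragraphs)))

In the released system, the reader is a trained neural network that scores and extracts an exact answer span from the retrieved paragraphs rather than returning a whole passage.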

NLP research efforts

Along with our current effort in dialogue agents, we are presenting several research breakthroughs in natural language processing. Our work developing powerful approaches and lightweight tools for text processing builds on last year’s release of FastText, as outlined in this open source announcement, and the subsequent release of pre-trained word vectors.

FastText is a library for text understanding that efficiently learns word embeddings and simple yet state-of-the-art classifiers. It has been widely adopted by the community. The paper on word representation being presented at ACL, Enriching Word Vectors with Subword Information by Facebook researchers Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov, builds on FastText.
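
The core idea of the paper is straightforward to sketch: each word is represented as a bag of character n-grams (plus the word itself), and the word’s vector is the sum of its n-gram vectors, so rare and unseen words still get sensible representations. The snippet below is an illustrative sketch only; the hashing scheme, bucket count, and dimensions are placeholders rather than fastText’s actual internals, and the n-gram vectors are random instead of learned.

import numpy as np

DIM, BUCKETS = 100, 20_000  # placeholder sizes; real fastText uses far more buckets
rng = np.random.default_rng(0)
ngram_table = rng.standard_normal((BUCKETS, DIM)) * 0.01  # stands in for learned vectors

def char_ngrams(word, n_min=3, n_max=6):
    # Character n-grams of the word wrapped in boundary symbols, plus the word itself.
    wrapped = f"<{word}>"
    grams = [wrapped[i:i + n]
             for n in range(n_min, n_max + 1)
             for i in range(len(wrapped) - n + 1)]
    return grams + [wrapped]

def word_vector(word):
    # The word's representation is the sum of its (hashed) n-gram vectors.
    return sum(ngram_table[hash(g) % BUCKETS] for g in char_ngrams(word))

print(char_ngrams("where", n_max=3))   # ['<wh', 'whe', 'her', 'ere', 're>', '<where>']
print(word_vector("where").shape)      # (100,)

Because vectors for out-of-vocabulary words can be composed from their n-grams, this approach handles morphologically rich languages and misspellings better than whole-word embeddings.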

Fairseq, our state-of-the-art software framework for neural sequence-to-sequence learning, will be presented in the paper A Convolutional Encoder Model for Neural Machine Translation by Jonas Gehring, Michael Auli, David Grangier, and Yann N. Dauphin, all from Facebook AI Research.
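
The idea behind a convolutional encoder is to replace the recurrent network over the source sentence with a stack of convolutions, which can process all positions in parallel. The PyTorch snippet below is a simplified sketch under our own assumptions, pairing convolutions with gated linear units and residual connections; it omits details of the published model and is not the fairseq implementation itself.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    # Simplified convolutional encoder: token + position embeddings, then a
    # stack of 1D convolutions with gated linear units and residual connections.
    def __init__(self, vocab_size, embed_dim=256, num_layers=4, kernel_size=3, max_len=1024):
        super().__init__()
        self.embed_tokens = nn.Embedding(vocab_size, embed_dim)
        self.embed_positions = nn.Embedding(max_len, embed_dim)
        # Each conv doubles the channels so the GLU can gate them back down.
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 2 * embed_dim, kernel_size, padding=kernel_size // 2)
            for _ in range(num_layers)
        )

    def forward(self, tokens):                       # tokens: (batch, seq_len)
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed_tokens(tokens) + self.embed_positions(positions)
        x = x.transpose(1, 2)                        # (batch, embed_dim, seq_len)
        for conv in self.convs:
            residual = x
            x = F.glu(conv(x), dim=1)                # gated linear unit
            x = x + residual                         # residual connection
        return x.transpose(1, 2)                     # (batch, seq_len, embed_dim)

encoder = ConvEncoder(vocab_size=1000)
out = encoder(torch.randint(0, 1000, (2, 7)))        # two toy "sentences" of 7 tokens
print(out.shape)                                     # torch.Size([2, 7, 256])

Because the convolutions have no sequential dependency along the source sentence, the encoder can compute every position at once, which is the main source of the speedups reported for convolutional sequence models.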

This is just a small reflection of a larger body of work associated with Facebook’s computational linguistics and dialogue systems. In addition to presenting papers, Facebook researchers are on hand at ACL to collaborate with the community as we collectively seek to advance the state-of-the-art in AI.

Facebook papers at ACL 2017:

A Convolutional Encoder Model for Neural Machine Translation
Jonas Gehring, Michael Auli, David Grangier, Yann N. Dauphin

Automatically Generating Rhythmic Verse with Neural Networks
Jack Hopkins, Douwe Kiela

Enriching Word Vectors with Subword Information
Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov

Reading Wikipedia to Answer Open-Domain Questions
Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes

Workshop participation:

CoNLL, the SIGNLL Conference on Computational Natural Language Learning, focuses on statistical, cognitive, and grammatical inference. Facebook researchers Xian Qian and Yang Liu will present the poster A non-DNN Feature Engineering Approach to Dependency Parsing.

Rep4NLP, the Workshop on Representation Learning for NLP, is sponsored by Facebook and DeepMind. It focuses on vector space models of meaning, compositionality, and the application of deep neural networks and spectral methods to NLP, and it provides a forum for discussing recent advances on these topics as well as future research directions in linguistically motivated vector-based models. Facebook researchers Holger Schwenk and Matthijs Douze will present the paper Learning Joint Multilingual Sentence Representation with Neural Machine Translation at the workshop.

RoboNLP, the workshop on Language Grounding for Robotics, brings together members of the NLP, robotics, and vision communities to focus on the much-needed task-oriented aspect of language grounding.

ParlAI open call for research proposals

Building intelligent chatbots is a key research challenge, and Facebook is committed to accelerating these efforts, first by creating and sharing relevant tooling, and second by encouraging research that explores and extends this infrastructure.

ParlAI, released this year, is a unified platform for training and evaluating AI models on a variety of openly available dialog datasets using open-sourced learning agents. It complements the recent release of CommAI, our communication-based environment for developing artificial general intelligence through increasingly complex tasks.
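
ParlAI’s core abstraction is simple: agents exchange messages through observe() and act() calls, and a world’s parley() method advances the dialog one turn, whether the agents are dataset “teachers,” trained models, or humans. The snippet below is a self-contained illustration of that loop in plain Python; the class names are ours for illustration, not ParlAI’s actual API.

class EchoAgent:
    # Toy agent that repeats the last message it observed.
    def __init__(self, name):
        self.name = name
        self.last_message = None

    def observe(self, message):
        self.last_message = message

    def act(self):
        text = self.last_message["text"] if self.last_message else "hello"
        return {"id": self.name, "text": text}

class DialogWorld:
    # Holds a set of agents; each turn, every agent acts and the others observe.
    def __init__(self, agents):
        self.agents = agents

    def parley(self):
        for agent in self.agents:
            action = agent.act()
            for other in self.agents:
                if other is not agent:
                    other.observe(action)
            print(f'{action["id"]}: {action["text"]}')

world = DialogWorld([EchoAgent("teacher"), EchoAgent("student")])
for _ in range(2):
    world.parley()

In ParlAI itself, the same loop is reused unchanged whether an agent is backed by a dataset, a trained model, or a human worker, which is what makes training and evaluation uniform across tasks.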

Facebook is pleased to invite university teams to respond to a call for research proposals on chatbots and dialogue systems that make use of ParlAI. Applicants will be expected to contribute to the pool of available agents, for example by conducting research into strongly performing models, and/or to add to the pool of available tasks that are useful for training and evaluating those agents.

Access the ParlAI code, and learn more about ParlAI research awards.