I am currently a Research Scientist in the Language and Translation Technologies group at Meta AI. Recently, I have worked on unsupervised pre-training of universal representations and on cross-lingual models for NLP. My current interests include question answering, language generation, and weakly supervised learning. Previously, I was an Applied Researcher on the Language Modeling team at Microsoft in Sunnyvale. During my PhD at UC Berkeley, I worked in information theory and networking, investigating the information-theoretic properties of heavy-tailed stochastic processes.
Natural language processing, language modeling, multilingual models, question answering, few-shot / zero-shot learning