CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training

North American Chapter of the Association for Computational Linguistics (NAACL)

Abstract

We propose a novel open-domain question-answering dataset based on the Common Crawl project. With an unprecedented roughly 130 million multilingual question-answer pairs (including about 60 million English data points), we use our large-scale, natural, diverse and high-quality corpus for in-domain pre-training of popular language models on the question-answering task. In our experiments, we find that our Common Crawl Question Answering dataset (CCQA) achieves promising results in zero-shot, low-resource and fine-tuned settings across multiple tasks, models and benchmarks.

Our dataset generation script and CCQA pre-trained checkpoints can be found here.