Are we Estimating or Guesstimating Translation Quality?

Association for Computational Linguistics (ACL)

Abstract

Recent advances in pre-trained multilingual language models have led to state-of-the-art results on the task of quality estimation (QE) for machine translation. A carefully engineered ensemble of such models dominated the QE shared task at WMT 2019. Our in-depth analysis, however, shows that the success of using pre-trained language models for QE is overestimated due to three issues we observed in current QE datasets: (i) the distributions of quality scores are imbalanced and skewed towards good quality scores; (ii) QE models can perform well on these datasets without even ingesting the source or translated sentences; (iii) the datasets contain statistical artifacts that correlate well with human-annotated QE labels. Our findings suggest that although QE models may capture the fluency of translated sentences and the complexity of source sentences, they cannot effectively model the adequacy of translations.
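Issue (i), the skew of the label distribution, can be probed directly by computing the skewness of the quality scores. The sketch below uses synthetic HTER-style scores (0.0 = no post-editing needed); the values and the `skewness` helper are illustrative, not taken from the paper's data:

```python
# Minimal sketch: measuring how skewed a QE label distribution is.
# Scores here are synthetic and mimic a dataset where most translations
# receive near-perfect (low-edit-rate) labels.
from statistics import mean, pstdev

def skewness(xs):
    """Fisher-Pearson coefficient of skewness (population moments)."""
    m, s = mean(xs), pstdev(xs)
    return sum(((x - m) / s) ** 3 for x in xs) / len(xs)

# Synthetic HTER-like scores: mass concentrated at low edit rates.
scores = [0.0] * 60 + [0.1] * 25 + [0.3] * 10 + [0.7] * 5

print(f"mean={mean(scores):.3f}  skew={skewness(scores):.3f}")
```

A strongly positive skew value here reflects the imbalance the paper describes: most examples are labeled as good translations, so a model can score well without modeling translation adequacy.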
