Prathyusha Jwalapuram

Research Scientist, Rakuten, Singapore

Category: Evaluation

[arXiv] Can Your Context-Aware MT System Pass the DiP Benchmark Tests?: Evaluation Benchmarks for Discourse Phenomena in Machine Translation

Prathyusha Jwalapuram, Barbara Rychalska, Shafiq Joty, Dominika Basaj

We introduce first-of-their-kind MT benchmark datasets that aim to track and encourage improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks and evaluate several baseline MT systems on the datasets.

[EMNLP 2019] Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite

Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, Preslav Nakov

We contribute an extensive, targeted dataset that can be used as a test suite for pronoun translation into English, covering multiple source languages and a range of pronoun errors drawn from real system translations. We further propose an evaluation measure to distinguish good pronoun translations from bad ones.