Prathyusha Jwalapuram

Research Scientist, Rakuten, Singapore

Tag: benchmark

[EACL 2021] Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks

Tasnim Mohiuddin, Prathyusha Jwalapuram, Xiang Lin, Shafiq Joty

We benchmark well-known traditional and neural coherence models on synthetic sentence-ordering tasks, and contrast this with their performance on three downstream applications: coherence evaluation for machine translation and summarization, and next-utterance prediction in retrieval-based dialog.

[arXiv] Can Your Context-Aware MT System Pass the DiP Benchmark Tests?: Evaluation Benchmarks for Discourse Phenomena in Machine Translation

Prathyusha Jwalapuram, Barbara Rychalska, Shafiq Joty, Dominika Basaj

We introduce first-of-their-kind MT benchmark datasets that aim to track improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks and evaluate several baseline MT systems on the datasets.