[ACL 2022] Rethinking Self-Supervision Objectives for Coherence Modeling
Prathyusha Jwalapuram, Shafiq Joty, Xiang Lin
We build a generalizable coherence model that performs well on several downstream tasks, using contrastive training with a large global queue of negative samples encoded by a momentum encoder.
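The setup described above resembles MoCo-style contrastive learning. Below is a minimal, illustrative PyTorch sketch of that idea, not the paper's actual code: `encoder_q`/`encoder_k`, the queue shape, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def momentum_update(encoder_q, encoder_k, m=0.999):
    # The key encoder tracks the query encoder via an exponential moving average,
    # so the queued negatives stay consistent across training steps.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def contrastive_loss(q, k_pos, queue, temperature=0.07):
    # InfoNCE over one coherent positive and a large global queue of negatives.
    # q, k_pos: (B, D) query/positive embeddings; queue: (K, D) negative embeddings.
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)   # (B, 1) positive similarities
    l_neg = q @ queue.t()                          # (B, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive sits at index 0
    return F.cross_entropy(logits, labels)
```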
[EACL 2021] Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks
Tasnim Mohiuddin, Prathyusha Jwalapuram, Xiang Lin, Shafiq Joty
We benchmark well-known traditional and neural coherence models on synthetic sentence-ordering tasks, and contrast this with their performance on three downstream applications: coherence evaluation for MT, coherence evaluation for summarization, and next-utterance prediction in retrieval-based dialog.
[EMNLP 2020] Pronoun-Targeted Fine-tuning for NMT with Hybrid Losses
Prathyusha Jwalapuram, Shafiq Joty, Youlin Shen
By combining targeted fine-tuning objectives with the re-use of training data that the model has failed to learn from adequately, we improve the performance of both a sentence-level and a contextual NMT model without using any additional data. Our fine-tuning targets pronoun translation, and we evaluate the models on a pronoun benchmark test set.
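As a rough illustration of what a hybrid, pronoun-targeted objective could look like, here is a short PyTorch sketch; the weighting scheme, `pronoun_mask`, and `alpha` are assumptions for exposition, not the paper's exact losses.

```python
import torch
import torch.nn.functional as F

def hybrid_nmt_loss(logits, targets, pronoun_mask, alpha=0.5, pad_id=0):
    # Illustrative hybrid objective (an assumption, not the paper's formulation):
    # standard per-token NLL plus an extra term restricted to pronoun positions.
    # logits: (B, T, V); targets: (B, T); pronoun_mask: (B, T) floats marking
    # target positions that hold pronouns (e.g. from a POS tagger).
    log_probs = F.log_softmax(logits, dim=-1)
    nll = F.nll_loss(log_probs.transpose(1, 2), targets,
                     ignore_index=pad_id, reduction="none")   # (B, T) per-token losses
    token_mask = (targets != pad_id).float()
    base = (nll * token_mask).sum() / token_mask.sum()        # standard NMT loss
    pron = (nll * pronoun_mask).sum() / pronoun_mask.sum().clamp(min=1.0)
    return base + alpha * pron                                # up-weight pronoun tokens
```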
[arXiv] Can Your Context-Aware MT System Pass the DiP Benchmark Tests?: Evaluation Benchmarks for Discourse Phenomena in Machine Translation
Prathyusha Jwalapuram, Barbara Rychalska, Shafiq Joty, Dominika Basaj
We introduce first-of-their-kind MT benchmark datasets designed to track improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks and evaluate several baseline MT systems on the datasets.
[AAAI 2020] Zero-Resource Cross-Lingual Named Entity Recognition
M Saiful Bari, Shafiq Joty, Prathyusha Jwalapuram
We propose a cross-lingual NER model that transfers NER knowledge from one language to another in a completely unsupervised way, without relying on any bilingual dictionary or parallel data.
[EMNLP 2019] Evaluating Pronominal Anaphora in Machine Translation: An Evaluation Measure and a Test Suite
Prathyusha Jwalapuram, Shafiq Joty, Irina Temnikova, Preslav Nakov
We contribute an extensive, targeted dataset that can be used as a test suite for pronoun translation into English, covering multiple source languages and a range of pronoun errors drawn from real system translations. We further propose an evaluation measure that distinguishes good pronoun translations from bad ones.
[ACL 2019] A Unified Linear-Time Framework for Sentence-Level Discourse Parsing
Xiang Lin, Shafiq Joty, Prathyusha Jwalapuram, M Saiful Bari
We propose an efficient neural framework for sentence-level discourse analysis in accordance with Rhetorical Structure Theory (RST).