Category: Coherence Modeling
[ACL 2022] Rethinking Self-Supervision Objectives for Coherence Modeling
Prathyusha Jwalapuram, Shafiq Joty, Xiang Lin
We build a generalizable coherence model that performs well on several downstream tasks, trained contrastively against a large global queue of negative samples encoded by a momentum encoder.
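The momentum-encoder-with-queue setup can be sketched as follows. This is a minimal illustrative MoCo-style sketch, not the paper's actual implementation; the encoders, dimensions, and hyperparameters are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

# Illustrative sketch of contrastive training with a momentum encoder
# and a global negative-sample queue (MoCo-style). All names and sizes
# are assumptions for demonstration, not the paper's configuration.

DIM, QUEUE_SIZE, MOMENTUM, TAU = 128, 1024, 0.999, 0.07

encoder_q = torch.nn.Linear(300, DIM)   # query encoder (updated by gradients)
encoder_k = torch.nn.Linear(300, DIM)   # key encoder (momentum-updated copy)
encoder_k.load_state_dict(encoder_q.state_dict())
for p in encoder_k.parameters():
    p.requires_grad = False

# Global queue of encoded negatives, shared across batches.
queue = F.normalize(torch.randn(QUEUE_SIZE, DIM), dim=1)

def momentum_update():
    # Key encoder slowly tracks the query encoder's weights.
    with torch.no_grad():
        for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
            pk.mul_(MOMENTUM).add_(pq, alpha=1 - MOMENTUM)

def contrastive_loss(query_feats, key_feats):
    # query_feats: features of anchor documents; key_feats: positives.
    q = F.normalize(encoder_q(query_feats), dim=1)
    with torch.no_grad():
        momentum_update()
        k = F.normalize(encoder_k(key_feats), dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)   # B x 1 positive logits
    l_neg = q @ queue.t()                      # B x QUEUE_SIZE negatives
    logits = torch.cat([l_pos, l_neg], dim=1) / TAU
    labels = torch.zeros(len(q), dtype=torch.long)  # positive is index 0
    return F.cross_entropy(logits, labels), k

def enqueue(k):
    # FIFO update: newest keys replace the oldest queued negatives.
    global queue
    queue = torch.cat([k.detach(), queue])[:QUEUE_SIZE]

loss, keys = contrastive_loss(torch.randn(8, 300), torch.randn(8, 300))
enqueue(keys)
```

Because the queue decouples the negative pool size from the batch size, the model sees far more negatives per step than in-batch contrastive training would allow.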
[EACL 2021] Rethinking Coherence Modeling: Synthetic vs. Downstream Tasks
Tasnim Mohiuddin, Prathyusha Jwalapuram, Xiang Lin, Shafiq Joty
We benchmark well-known traditional and neural coherence models on synthetic sentence-ordering tasks, and contrast this with their performance on three downstream applications: coherence evaluation for MT and summarization, and next-utterance prediction in retrieval-based dialog.
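The synthetic sentence-ordering evaluation can be illustrated with a toy sketch: a coherence model is scored by how often it ranks an original document above a random permutation of its sentences. The scorer below is a stand-in heuristic, not any model from the paper.

```python
import random

def toy_coherence_score(sentences):
    # Toy stand-in for a coherence model: reward adjacent sentences
    # that share vocabulary (lexical-overlap heuristic, illustrative only).
    score = 0
    for a, b in zip(sentences, sentences[1:]):
        score += len(set(a.lower().split()) & set(b.lower().split()))
    return score

def pairwise_ranking_accuracy(docs, scorer, trials=20, seed=0):
    # Fraction of (original, shuffled) pairs where the original
    # document receives the strictly higher coherence score.
    rng = random.Random(seed)
    correct = total = 0
    for doc in docs:
        for _ in range(trials):
            shuffled = doc[:]
            rng.shuffle(shuffled)
            if shuffled == doc:
                continue  # skip identity permutations
            total += 1
            correct += scorer(doc) > scorer(shuffled)
    return correct / total if total else 0.0

docs = [
    ["The cat sat on the mat.",
     "The mat was red.",
     "Red was her favourite colour."],
]
acc = pairwise_ranking_accuracy(docs, toy_coherence_score)
```

The paper's point is that high accuracy on this kind of synthetic discrimination task does not necessarily transfer to the downstream applications listed above.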