Benchmarks for Unsupervised Discourse Change Detection
The main motivation for this work lies in the need to track discourse dynamics in historical corpora.
However, in many real use cases ground truth is not available, and annotating discourses at the corpus level is rarely feasible. We propose a novel procedure for generating synthetic datasets for this task, an evaluation framework, and a set of benchmark models. Finally, we run large-scale experiments using
these synthetic datasets and demonstrate that a model trained on such a dataset obtains meaningful
results when applied to a real dataset, without any adjustments to the model.