Published December 18, 2020
| Version v1
Conference paper
Open access
The Natural Language Pipeline, Neural Text Generation and Explainability
- 1. CNRS/LORIA, Université de Lorraine
- 2. Utrecht University
Description
End-to-end encoder-decoder approaches to data-to-text generation are often black boxes whose predictions are difficult to explain. Breaking up the end-to-end model into sub-modules is a natural way to address this problem, and the traditional pre-neural Natural Language Generation (NLG) pipeline provides a framework for this decomposition. We survey recent papers that integrate traditional NLG sub-modules into neural approaches and analyse their explainability. Our survey is a first step towards building explainable neural NLG models.
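To illustrate the pipeline-based decomposition the abstract refers to, the sketch below shows what a modular data-to-text pipeline might look like in code. The stage names (content selection, text planning, surface realisation) follow the classic pre-neural NLG pipeline; all class and function names are illustrative assumptions, not the implementation of any system surveyed in the paper.

```python
# Illustrative sketch only: a modular data-to-text pipeline in the spirit of the
# classic pre-neural NLG architecture. Names and signatures are assumptions,
# not the paper's implementation. Any stage (e.g. surface realisation) could be
# replaced by a neural module while the intermediate representations remain
# inspectable, which is what makes the pipeline easier to explain than a single
# end-to-end encoder-decoder model.
from dataclasses import dataclass
from typing import List


@dataclass
class Triple:
    """A minimal RDF-style input record (subject, predicate, object)."""
    subject: str
    predicate: str
    obj: str


def content_selection(data: List[Triple]) -> List[Triple]:
    # Decide which input facts to verbalise (here: keep everything).
    return data


def text_planning(facts: List[Triple]) -> List[List[Triple]]:
    # Group and order facts into sentence-sized plans (here: one fact per sentence).
    return [[fact] for fact in facts]


def surface_realisation(plan: List[List[Triple]]) -> str:
    # Map each one-fact sentence plan to a string with a trivial template.
    sentences = [
        f"{fact.subject.replace('_', ' ')} {fact.predicate} {fact.obj}."
        for (fact,) in plan
    ]
    return " ".join(sentences)


def generate(data: List[Triple]) -> str:
    # Each intermediate representation can be logged and inspected.
    selected = content_selection(data)
    plan = text_planning(selected)
    return surface_realisation(plan)


if __name__ == "__main__":
    record = [
        Triple("Alan_Bean", "birthPlace", "Wheeler, Texas"),
        Triple("Alan_Bean", "occupation", "test pilot"),
    ]
    print(generate(record))
```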
Files
2020.nl4xai-1.5.pdf (162.2 kB)
md5:2c4cb94327d5283ea1a34e16731ec896