Learning Faithful Attention for Interpretable Classification of Crisis-Related Microblogs under Constrained Human Budget
Creators
1. L3S Research Center
2. Indian Institute of Technology (ISM) Dhanbad, India
Description
The widespread use of social media platforms has created convenient ways to obtain and spread up-to-date information during crisis events such as disasters. Time-critical analysis of crisis data can help humanitarian organizations gain actionable information and plan aid responses. Many existing studies have proposed methods to identify informative messages and categorize them into different humanitarian classes. Advanced neural network architectures tend to achieve state-of-the-art performance, but their decisions are opaque. While attention heatmaps offer some insight into a model's predictions, several studies have found that standard attention does not provide meaningful explanations. Alternatively, recent works have proposed interpretable approaches for the classification of crisis events that rely on human rationales to train models and extract short text snippets as explanations. However, rationale annotations are not always available, especially in real-time situations involving new tasks and events. In this paper, we propose a two-stage approach to learn rationales under minimal human supervision and derive faithful machine attention. Extensive experiments over four crisis events show that our model achieves classification performance better than or comparable to baselines (∼86% Macro-F1) and faithful attention heatmaps using only 40-50% human supervision. Further, we employ a zero-shot learning setup to detect actionable tweets along with actionable word snippets as rationales.
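To make the idea of rationale-supervised, faithful attention concrete, the sketch below shows one plausible way to combine a standard attention classifier with word-level rationale supervision on only a subset of examples (the constrained human budget). This is a hypothetical illustration under our own assumptions (model architecture, loss weighting, and the KL-based attention supervision are ours), not the implementation described in the paper.

```python
# Minimal sketch (assumptions, not the paper's model): an attention-based tweet
# classifier whose attention distribution is supervised with human rationale
# masks for only the fraction of tweets that have such annotations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RationaleAttentionClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn_scorer = nn.Linear(2 * hidden_dim, 1)    # one score per token
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids, pad_mask):
        h, _ = self.encoder(self.embed(token_ids))                  # (B, T, 2H)
        scores = self.attn_scorer(h).squeeze(-1)                    # (B, T)
        scores = scores.masked_fill(~pad_mask, float("-inf"))       # ignore padding
        attn = torch.softmax(scores, dim=-1)                        # attention heatmap
        context = torch.bmm(attn.unsqueeze(1), h).squeeze(1)        # (B, 2H)
        return self.classifier(context), attn


def loss_fn(logits, attn, labels, rationale_mask, has_rationale, lam=1.0):
    """Cross-entropy plus attention supervision on the annotated subset only."""
    ce = F.cross_entropy(logits, labels)
    if has_rationale.any():
        # Target: uniform distribution over rationale tokens of annotated tweets.
        target = rationale_mask[has_rationale].float()
        target = target / target.sum(dim=-1, keepdim=True).clamp(min=1e-8)
        attn_sup = F.kl_div(attn[has_rationale].clamp(min=1e-8).log(), target,
                            reduction="batchmean")
    else:
        attn_sup = torch.tensor(0.0, device=logits.device)
    return ce + lam * attn_sup
```

In this reading, reducing the human budget simply means fewer tweets carry a rationale mask, so the attention term is computed on a smaller annotated subset while the classification term still uses every tweet.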
Files
- www23.pdf (1.5 MB), md5:0d548bd543a562f5169eca732fad1591