Poster Open Access
Nguyen An-phi; Rodriguez-Martinez
Why do we need interpretability to unveil the decision process of a machine learning model?
Trust - in high-risk scenarios, e.g. healthcare, the user needs to trust the decisions the model makes.
Debugging - the model may be badly trained, or there may be an unfair bias in either the dataset or the model itself.
Hypothesis generation - surprising results may reflect new mechanisms or patterns unknown even to field experts.