Published August 22, 2019 | Version v1
Poster | Open Access

Interpretability for computational biology

Description

Why do we need interpretability to unveil the decision process of a machine learning model?
Trust - in high-risk settings such as healthcare, users need to be able to trust the decisions the model makes.
Debugging - the model may be poorly trained, or there may be an unfair bias in either the dataset or the model itself (illustrated in the sketch below).
Hypothesis generation - surprising results may point to mechanisms or patterns unknown even to field experts.
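
As a hedged illustration of the debugging and hypothesis-generation points above, the sketch below applies permutation feature importance (scikit-learn's `permutation_importance`) to simulated tabular data containing a leaked "batch" confounder. The data, feature names, and model choice are invented for illustration and are not taken from the poster; they only show how an interpretability method can reveal that a model relies on a confounder rather than on the intended biological signal.

```python
# Hypothetical sketch: permutation importance flags a spurious "batch"
# feature that the model exploits, supporting the "debugging" and
# "hypothesis generation" use cases listed above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated data: 500 samples, 20 "gene expression" features;
# the label depends only on features 0 and 1.
X = rng.normal(size=(500, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Append a leaked "batch" column correlated with the label,
# mimicking an unwanted confounder in the dataset.
batch = y + rng.normal(scale=0.3, size=500)
X = np.column_stack([X, batch])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Permutation importance on held-out data: a large score for the last
# column shows the model leans on the confounder, not on biology.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    name = "batch" if idx == X.shape[1] - 1 else f"gene_{idx}"
    print(f"{name}: {result.importances_mean[idx]:.3f}")
```

In this toy setup the "batch" column typically dominates the importance ranking, which is exactly the kind of surprising result that would prompt either a debugging pass (remove the leak) or a new hypothesis (investigate the apparent association).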

Files

interpretability.pdf (264.4 kB)


Additional details

Funding

iPC – individualizedPaediatricCure: Cloud-based virtual-patient models for precision paediatric oncology
European Commission, grant agreement 826121