The trustworthiness landscape in machine learning: a conceptual guide with applications in medicine
Description
Preprint of "The trustworthiness landscape in machine learning: a conceptual
guide with applications in medicine"
Abstract
Trust is a fundamental aspect of all human interactions. As artificial intelligence (AI), and in particular its machine learning (ML) subfield, increasingly impacts society, human-AI interactions inevitably become more frequent. Finding ways to foster trust in AI and ML models therefore becomes increasingly essential, especially as these models permeate sensitive domains such as drug design and medical decision-making.
In this paper, we aim to elucidate how technical design features of ML models can contribute to the trustworthiness of, and trust in, ML models.
To this end, we comprehensively surveyed existing work to identify and define various facets of trustworthiness in the ML domain, including, amongst others, generalizability, reliability, robustness, privacy, security, interpretability, explainability, transparency, and fairness. By doing so, we uncover ambiguities in definitions as well as interrelations and tensions between these concepts.
We summarize key insights to support researchers in recognizing and developing ML models that are trustworthy within their respective research domains. Additionally, we provide illustrative examples that demonstrate how these concepts can enhance the trustworthiness of ML models in the medical domain.
Files
| Name | Size | MD5 |
|---|---|---|
| Trustworthiness__zenodo_version.pdf | 1.4 MB | 7e14eefcef9d00c52a264ef22a62ffcb |