Make Mechanistic Interpretability Auditable: A Call to Develop Standardized Empirical Guidelines
Creators
1. Martian
2. University of Delaware
3. ML Alignment & Theory Scholars
Description
While mechanistic interpretability (MI) has produced important insights into neural network internals, the field has yet to establish a standardized system for auditing its experiments. As a result, many of its findings remain underutilized in safety-critical applications such as medical AI and autonomous systems, because stakeholders cannot certify their validity. Recent work demonstrates this concretely: two papers reached conflicting conclusions about the same model behavior, and a third study revealed that both were partially correct but incomparable due to methodological inconsistencies. Without standardized auditing, such ambiguities hinder adoption in high-stakes contexts that require strong correctness guarantees. We call on the MI community to develop a three-part framework: (1) community-driven, expert-verified experiment guidelines supported by an open "Experiment Repository Platform"; (2) protocols that improve auditing efficiency; and (3) automated auditing systems that scalably quantify experiment validity. This position paper aims to encourage constructive debate over the necessity, design, and implementation of such a framework, providing early concrete examples to help catalyze that dialogue. Overall, we argue that auditing MI itself is essential for its application in AI safety, industry, and governance.
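To make the proposal more concrete, the sketch below illustrates one possible shape for a machine-checkable entry in the proposed "Experiment Repository Platform" (part 1) together with a toy automated audit check (part 3). This is a minimal sketch under assumed conventions: `ExperimentRecord`, `audit`, and every field name are hypothetical illustrations introduced here, not an interface defined in the paper.

```python
# Hypothetical sketch: how an auditable MI experiment entry might be stored,
# plus a toy automated check. All names and fields are assumptions for
# illustration, not taken from the paper.
from dataclasses import dataclass, field

@dataclass
class ExperimentRecord:
    """One auditable MI experiment, as it might appear on the platform."""
    claim: str                                    # e.g. "head 9.6 implements induction"
    model_id: str                                 # exact checkpoint the claim is about
    method: str                                   # e.g. "activation patching"
    dataset_id: str                               # prompts/inputs used for evaluation
    seed: int                                     # for reproducibility
    metrics: dict = field(default_factory=dict)   # e.g. {"logit_diff": 0.82}
    controls: list = field(default_factory=list)  # baseline / ablation conditions

def audit(record: ExperimentRecord) -> list[str]:
    """Toy automated audit: flags guideline violations rather than proving validity."""
    issues = []
    if not record.controls:
        issues.append("no control/baseline condition reported")
    if not record.dataset_id:
        issues.append("evaluation dataset unspecified; results cannot be compared")
    if record.metrics.get("logit_diff", 0.0) < 0.0:
        issues.append("effect metric has unexpected sign; check intervention direction")
    return issues

record = ExperimentRecord(
    claim="head 9.6 implements induction",
    model_id="gpt2-small",
    method="activation patching",
    dataset_id="",
    seed=0,
    metrics={"logit_diff": 0.82},
)
print(audit(record))  # flags the missing controls and the unspecified dataset
```

In a real framework the checks in `audit` would presumably be derived from the community guideline registry rather than hard-coded; they are hard-coded here only for brevity.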
Files (730.5 kB)

| Name | Size | MD5 checksum |
|---|---|---|
| Make_Mechanistic_Interpretability_Auditable.pdf | 730.5 kB | ed2bbbc033af93e341355223f7d37f33 |