Published December 29, 2025 | Version 1.0
Preprint | Open Access

The Instruction Stack Audit Framework (ISAF): A Technical Methodology for Tracing AI Accountability Across Nine Abstraction Layers

  • HAIEC

Description

AI accountability failures occur when regulatory audits examine outputs while the root causes lie in instruction layers that remain undocumented and unauditable. Analysis of documented AI incidents, including Air Canada's chatbot liability (2024), Amazon's hiring bias (2018), and Zillow's algorithmic valuation loss exceeding $500 million (2021), reveals a consistent pattern: problematic behavior traces to design-layer decisions involving objective functions, framework configurations, and data selection that were never systematically reviewed before deployment.

Current AI governance frameworks, including the EU AI Act, the NIST AI Risk Management Framework, and ISO/IEC 42001, focus primarily on model outputs and data governance without providing technical specifications for documenting the full instruction stack from hardware substrate to emergent behavior. This creates a fundamental traceability gap: organizations can achieve nominal regulatory compliance while leaving most of their instruction stack unaudited.

This paper introduces the Instruction Stack Audit Framework (ISAF), a proposed methodology designed to close this documentation gap. ISAF provides a nine-layer technical specification that traces instruction propagation from voltage thresholds through objective functions to outputs, accompanied by a 127-checkpoint audit protocol for systematic instruction verification, an instruction lineage logging schema that supports cryptographic verification, a layer ownership assignment methodology for accountability attribution, and a risk scoring system based on abstraction distance and control strength.
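For illustration, the sketch below shows one way such a lineage log and risk score could look in practice: one hash-chained log entry per stack layer, a verification pass that detects tampering, and a toy score that rises with abstraction distance and falls with control strength. This is a minimal sketch; the field names, layer labels, and scoring formula are assumptions made for demonstration, not the schema or scoring rules specified in the paper's appendices.

```python
# Illustrative sketch only: field names, layer labels, and the risk formula
# below are assumptions for demonstration, not the ISAF specification.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class LineageEntry:
    """One hypothetical instruction-lineage record for a single stack layer."""
    layer: str            # e.g. "objective_function" (layer names assumed)
    instruction: str      # human-readable description of the instruction/config
    owner: str            # party accountable for this layer
    timestamp: str
    prev_hash: str        # hash of the preceding entry, forming a chain
    entry_hash: str = ""

    def compute_hash(self) -> str:
        # Hash every field except the hash itself, with stable key ordering.
        payload = json.dumps(
            {k: v for k, v in asdict(self).items() if k != "entry_hash"},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()


def append_entry(chain: list[LineageEntry], layer: str,
                 instruction: str, owner: str) -> LineageEntry:
    """Append an entry whose hash commits to the previous entry."""
    prev = chain[-1].entry_hash if chain else "0" * 64
    entry = LineageEntry(
        layer=layer,
        instruction=instruction,
        owner=owner,
        timestamp=datetime.now(timezone.utc).isoformat(),
        prev_hash=prev,
    )
    entry.entry_hash = entry.compute_hash()
    chain.append(entry)
    return entry


def verify_chain(chain: list[LineageEntry]) -> bool:
    """Every entry must hash correctly and link to its predecessor."""
    prev = "0" * 64
    for e in chain:
        if e.prev_hash != prev or e.entry_hash != e.compute_hash():
            return False
        prev = e.entry_hash
    return True


def risk_score(abstraction_distance: int, control_strength: float) -> float:
    """Toy rule (assumed): risk grows with the distance between the layer where
    an instruction originates and the layer where its effect appears, and
    shrinks as documented controls strengthen (control_strength in [0, 1])."""
    return abstraction_distance * (1.0 - control_strength)


# Example usage with hypothetical layer names and instructions.
chain: list[LineageEntry] = []
append_entry(chain, "data_selection", "train on 2014-2017 resumes", "Data Engineering")
append_entry(chain, "objective_function", "maximize recruiter-agreement score", "ML Lead")
assert verify_chain(chain)
print(risk_score(abstraction_distance=5, control_strength=0.2))  # 4.0
```

Chaining each entry's hash to its predecessor means any later edit to an earlier instruction record invalidates the chain, which is the property that makes a lineage log usable as audit evidence.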

The framework draws on principles established in prior work on deterministic compliance systems and extends them to full-stack AI accountability. Three case analyses demonstrate how ISAF-based audits could have identified instruction-level risks in documented failures. The complete audit specification, logging schemas, and implementation templates are provided in appendices.

ISAF is released for academic validation, industry pilot implementations, and regulatory consideration.

Keywords: AI governance, algorithmic accountability, EU AI Act compliance, NIST AI RMF, ISO 42001, objective function auditing, instruction traceability, deterministic compliance, ML operations, AI safety, cryptographic audit trails, regulatory documentation requirements


Files

The Instruction Stack Audit Framework (ISAF)_ A Technical Methodology for Tracing AI Accountability Across Nine Abstraction Layers.pdf

Additional details

Related works

References: 10.5281/zenodo.18056133 (Report, DOI)

Software

Development Status
Active

References

  • Aho, A. V., Lam, M. S., Sethi, R., & Ullman, J. D. (2006). Compilers: Principles, Techniques, and Tools (2nd ed.). Pearson.
  • Civil Resolution Tribunal of British Columbia. (2024). Moffatt v. Air Canada, Decision DT-2024-001234.
  • Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
  • Dijkstra, E. W. (1968). The structure of the "THE"-multiprogramming system. Communications of the ACM, 11(5), 341-346.
  • European Parliament and Council. (2024). Regulation (EU) 2024/1689 on Artificial Intelligence (Artificial Intelligence Act).
  • Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J. W., Wallach, H., Daumé III, H., & Crawford, K. (2018). Datasheets for datasets. arXiv preprint arXiv:1803.09010.
  • International Organization for Standardization. (2023). ISO/IEC 42001:2023 Information technology -- Artificial intelligence -- Management system.
  • KC, S., & HAIEC Lab. (2024). Deterministic Bias Detection for NYC Local Law 144: Why Reproducibility Matters More Than Accuracy. Zenodo. https://doi.org/10.5281/zenodo.18056133
  • Kocher, P., Horn, J., Fogh, A., Genkin, D., Gruss, D., Haas, W., Hamburg, M., Lipp, M., Mangard, S., Prescher, T., Schwarz, M., & Yarom, Y. (2019). Spectre attacks: Exploiting speculative execution. Communications of the ACM, 63(7), 93-101.
  • Paleyes, A., Urma, R. G., & Lawrence, N. D. (2022). Challenges in deploying machine learning: A survey of case studies. ACM Computing Surveys, 55(6), 1-29.
  • Partnership on AI. (2024). AI Incident Database. https://incidentdatabase.ai
  • Zillow Group, Inc. (2022). Form 10-K Annual Report for fiscal year ended December 31, 2021. Securities and Exchange Commission.