Published May 31, 2025 | Version v1
Journal article (Open Access)

Engineering Trustworthy Quality Assurance Systems: Bias Mitigation, Explainability, and Human-in-the-Loop Governance for Responsible AI

Description

The integration of artificial intelligence (AI) into software quality assurance (QA) has fundamentally reshaped how organizations approach test automation, defect prediction, and release validation by enabling data-driven decision-making at scale. As AI systems increasingly influence quality gates and deployment outcomes, however, they introduce critical risks: embedded bias, limited transparency, and excessive reliance on automated judgments. Left unaddressed, these challenges can undermine trust, obscure failure modes, and weaken accountability. This article examines Responsible AI principles in QA engineering through three foundational pillars: bias mitigation, explainability, and human-in-the-loop (HITL) models, drawing on established empirical research, regulatory frameworks, and practical open-source toolkits. We propose a lifecycle-oriented QA approach that systematically embeds fairness assessments, interpretable validation mechanisms, and structured human oversight across data preparation, model development, deployment, and post-release monitoring. Supported by selected case studies and publicly available architectural diagrams, the article demonstrates how Responsible AI practices can transform QA pipelines into transparent, auditable, and resilient systems that balance automation efficiency with human judgment, ultimately enhancing trust, reliability, and long-term software quality.
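To make the lifecycle idea concrete, the fairness-assessment step described above could be wired into a pipeline as an automated release gate. The sketch below is illustrative only: the function names, the demographic-parity metric, and the 0.1 threshold are our assumptions, not details taken from the article, and a production system would likely use a dedicated toolkit (e.g., Fairlearn or AIF360) rather than hand-rolled metrics.

```python
# Illustrative sketch (not the article's implementation): a fairness
# quality gate that a QA pipeline could run before promoting a model.

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rates across sensitive groups.

    preds  : list of 0/1 model predictions
    groups : parallel list of group labels (e.g., demographic cohorts)
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(preds[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]


def fairness_gate(preds, groups, max_gap=0.1):
    """Fail the release gate when the parity gap exceeds the threshold.

    max_gap is an assumed policy value; real thresholds would come from
    the organization's Responsible AI governance process.
    """
    gap = demographic_parity_difference(preds, groups)
    return {"gap": round(gap, 3), "passed": gap <= max_gap}


# Example: a defect-prediction model scored on two cohorts of records.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(fairness_gate(preds, groups))  # → {'gap': 0.5, 'passed': False}
```

A gate like this makes the fairness check auditable: the computed gap and the pass/fail verdict can be logged with the release record, and a HITL policy could route any failing run to a human reviewer instead of blocking deployment outright.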

Files

EJAET-12-5-31-37.pdf (439.0 kB)
