Published February 1, 2025 | Version v4
Preprint | Open Access

A New Principle in Software Testing: Human Control over AI to Ensure Safety and Reliability

Creators

  • Colyte

Description

Abstract: The rapid evolution of Artificial Intelligence (AI) has transformed automation across industries, including software testing. While traditional testing relied heavily on human intervention, AI has automated many of these processes, improving efficiency and scalability. However, reliance on AI introduces significant risks when systems operate without adequate oversight. High-profile failures, such as autonomous vehicles involved in accidents caused by contextual misjudgment or healthcare diagnostic tools misidentifying critical conditions, underscore the dangers of unchecked AI systems. These incidents demonstrate the need for a hybrid approach in which human testers play a pivotal role in mitigating risks and ensuring ethical, reliable outcomes. This paper examines the principle of human oversight in AI-driven testing and advocates a balanced model that combines the strengths of human intuition with AI's efficiency.

Files

software_testing_AI_manuscript_version4.pdf (310.9 kB)
md5:182cf63df995c024ba0c47a4c6a9f514