Methodology to Identify Issues and Improve the Robustness of AI Agents
Authors/Creators
Description
AI agents based on large language models (LLMs) are becoming a key tool for automating complex tasks. Unlike general-purpose LLMs that simply generate text, modern agents can independently plan actions, call external tools and APIs, work with knowledge sources, and make decisions based on multi-stage analysis of a situation. As such systems grow more complex, ensuring their robustness becomes a critical problem. This work presents a systematic approach to identifying and classifying robustness problems in AI agents. The proposed taxonomy describes nine common problems that can arise while an AI agent executes a task. For practical use, a comprehensive evaluation methodology is proposed, including metamorphic testing of resistance to changes in input data, verification of how the agent works with information sources, analysis of the task flow inside the agent, monitoring of tool usage, and assessment of the quality of the final results. The methodology defines specific metrics with success criteria and approaches to their implementation. It is shown that the proposed framework covers all identified error categories and makes it possible to evaluate the robustness of AI agents not only at the level of individual components, but also at the level of their interactions and of the system as a whole.
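To make the metamorphic-testing idea concrete, the sketch below checks whether an agent's output stays consistent when the input task is paraphrased. It is a minimal illustration, not the methodology's own metric: the `run_agent` stub, the lexical similarity measure, and the 0.8 threshold are all assumptions chosen for the example; a real setup would call the actual agent and likely use a semantic (e.g., embedding-based) comparison.

```python
# Minimal metamorphic-testing sketch for an LLM agent (illustrative only).
# Assumptions: run_agent is a placeholder for the system under test; the
# similarity measure and threshold are hypothetical, not taken from the paper.
from difflib import SequenceMatcher


def run_agent(task: str) -> str:
    """Placeholder for the AI agent under test; replace with a real agent call."""
    return f"Result for: {task}"


def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]; a real setup might use embeddings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def metamorphic_robustness(task: str, paraphrases: list[str], threshold: float = 0.8) -> float:
    """Share of paraphrased inputs whose output stays close to the baseline output."""
    baseline = run_agent(task)
    consistent = sum(
        similarity(run_agent(variant), baseline) >= threshold for variant in paraphrases
    )
    return consistent / len(paraphrases) if paraphrases else 1.0


if __name__ == "__main__":
    task = "Summarize the quarterly sales report and list the top three regions."
    variants = [
        "Give a summary of the quarterly sales report and name the three best regions.",
        "Summarise quarterly sales; which three regions performed best?",
    ]
    print(f"Metamorphic consistency score: {metamorphic_robustness(task, variants):.2f}")
```

A score close to 1.0 suggests the agent's behavior is stable under input rephrasing; low scores flag inputs worth investigating against the taxonomy's error categories.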
Files
| Name | Size |
|---|---|
| v6e1-001.pdf (md5:9ae8a6183f45cac4ce84d9e953ea0c44) | 860.7 kB |