Multi-Agent Debate Strategies to Enhance Requirements Engineering with Large Language Models
Description
This paper investigates the potential of Multi-Agent Debate (MAD) strategies to enhance the performance of Large Language Model (LLM) agents in Requirements Engineering (RE) tasks. While prior research has focused on prompt engineering, fine-tuning, and retrieval-augmented generation, these methods often treat LLMs as isolated black boxes, relying on single-pass outputs with limited robustness and adaptability. Inspired by the way human debates improve accuracy by incorporating diverse perspectives, this study explores whether collaborative interactions among multiple LLM agents can yield similar benefits. We systematically analyze existing MAD strategies across different domains, identifying their key characteristics and developing a taxonomy of core attributes. Building on this foundation, we implement and evaluate a preliminary MAD-based framework for RE classification. The results demonstrate both the feasibility and potential advantages of applying MAD to RE, paving the way for more robust, adaptive, and accurate use of LLMs in engineering contexts.
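To make the debate idea concrete, here is a minimal, purely illustrative sketch of a Multi-Agent Debate round for requirements classification (functional "F" vs. non-functional "NF"). This is not the paper's framework: the agent functions, labels, and debate loop are hypothetical stand-ins, and real agents would be LLM calls rather than keyword rules.

```python
from collections import Counter
from typing import Callable, List

# An "agent" maps a requirement plus the peers' previous answers to a label.
# In a real MAD setup each agent would be an LLM call; these are toy stand-ins.
Agent = Callable[[str, List[str]], str]

def debate(requirement: str, agents: List[Agent], rounds: int = 2) -> str:
    """Run a fixed number of debate rounds, then return the majority label."""
    # Round 1: each agent states an opening position without seeing peers.
    answers = [agent(requirement, []) for agent in agents]
    for _ in range(rounds - 1):
        # Later rounds: each agent revises after seeing all peers' answers.
        answers = [agent(requirement, answers) for agent in agents]
    return Counter(answers).most_common(1)[0][0]

# Toy agents (hypothetical): two heuristic classifiers and one that defers
# to the current majority among its peers.
def keyword_agent(requirement: str, peers: List[str]) -> str:
    return "NF" if "within" in requirement or "secure" in requirement else "F"

def strict_agent(requirement: str, peers: List[str]) -> str:
    return "F" if "shall" in requirement else "NF"

def conformist_agent(requirement: str, peers: List[str]) -> str:
    return Counter(peers).most_common(1)[0][0] if peers else "F"

print(debate("The system shall respond within 2 seconds.",
             [keyword_agent, strict_agent, conformist_agent]))
```

The sketch shows the core attribute the taxonomy captures: agents exchange positions across rounds before an aggregation step (here, majority vote) produces the final classification.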
Files
| Name | Size |
|---|---|
| Multi-Agent Debate Strategies to Enhance Requirements Engineering with Large Language Models.pdf (md5:1c1a0232caac6d9f304c4f7a1c303dc4) | 487.5 kB |