Published July 30, 2025 | Version v1
Publication | Open

AI on the Frontline: Evaluating Large Language Models in Real-World Conflict Resolution

Contributors

  • Institute for Integrated Transitions

Description

This study, authored by Nathalie Bussemaker and Mark Freeman and published by the Institute for Integrated Transitions (IFIT), finds that all major large language models (LLMs) provide dangerous conflict resolution advice without conducting the basic due diligence that any human mediator would consider essential.

IFIT tested six leading AI models — ChatGPT, Claude, DeepSeek, Google Gemini, Grok, and Mistral — on three real-world prompt scenarios from Syria, Sudan, and Mexico. Each LLM response, generated on June 26, 2025, was evaluated by two independent five-person teams of IFIT researchers across ten key dimensions grounded in well-established conflict resolution principles, such as due diligence and risk disclosure. Each dimension was scored on a 0 to 10 scale to assess the quality of each LLM's advice.

A senior expert sounding board of IFIT conflict resolution experts from Afghanistan, Colombia, Mexico, Northern Ireland, Sudan, Syria, the United States, Uganda, Venezuela, and Zimbabwe then reviewed the findings to assess implications for real-world practice.

Out of a possible 100 points, the average score across all six models was only 27. Google Gemini scored highest at 37.8/100, followed by Grok (32.1), ChatGPT (24.8), Mistral (23.3), Claude (22.3), and DeepSeek last (20.7). Every score represents a failure to meet minimal professional conflict resolution standards and best practices.


Files

IFIT - AI on the Frontline - Full Report.pdf (3.3 MB)
md5:65d022589439f04fd2641c0cb92b71d9