Published March 7, 2026 | Version v1
Journal article | Open Access

A Framework for Self-Reflective Error Detection and Correction in Large Language Models for Communications Applications

Authors/Creators

Description

Hallucination remains one of the most fundamental issues for large language models (LLMs), especially in high-stakes applications that require factual reliability and traceability. Although retrieval-augmented generation (RAG) shows great potential as a grounding mechanism, it still fails to completely prevent unsupported claims or citation hallucinations. To address this issue, we present a new experimental paradigm that combines (1) cross-modal retrieval-based grounding, (2) multi-layer self-verification, and (3) semantic entropy-based uncertainty gating to dynamically control verification effort. Motivated by semantic entropy-based hallucination detection techniques, the proposed model triggers additional validation once semantic uncertainty surpasses a learned threshold. The framework focuses on evidence-based accuracy, citation validity, calibration, and computational efficiency. A complete methodological protocol, performance measures, and a deployment-friendly architecture are presented.
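The uncertainty gate described above can be illustrated with a minimal sketch: sample several answers, group them into semantic equivalence clusters, compute the entropy over cluster frequencies, and trigger extra verification only when that entropy exceeds a threshold. The clustering function and threshold value below are illustrative assumptions (a real system would cluster via bidirectional NLI entailment and learn the threshold, as the abstract indicates), not the paper's implementation.

```python
import math
from collections import Counter

def semantic_entropy(samples, cluster_fn=None):
    """Entropy over semantic clusters of sampled answers.

    `cluster_fn` maps an answer to a cluster key. A real system would use
    NLI-based bidirectional entailment to group semantically equivalent
    answers; simple string normalisation stands in here (an assumption).
    """
    cluster_fn = cluster_fn or (lambda s: s.strip().lower())
    counts = Counter(cluster_fn(s) for s in samples)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def needs_verification(samples, threshold=0.5):
    """Uncertainty gate: fire extra validation above a threshold.

    The threshold here is illustrative; the paper describes it as learned.
    """
    return semantic_entropy(samples) > threshold

# Consistent samples: one semantic cluster, entropy 0, gate stays closed.
agree = ["2.4 GHz", "2.4 GHz", "2.4 ghz ", "2.4 GHz"]
# Divergent samples: several clusters, high entropy, gate fires.
split = ["2.4 GHz", "5 GHz", "900 MHz", "2.4 GHz"]
```

With `agree`, all samples collapse to one cluster and no further verification is triggered; with `split`, the answers disagree semantically and the gate requests the additional validation layers.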

Files

A Framework for Self-Reflective Error Det. and Corr. in Large Language Models for Communecation ap (1).pdf