Published February 6, 2026 | Version 1
Working paper | Open access

IMPRD: Iterative Multi-Perspective Rhetorical Debugging for LLM-Assisted Content Optimization

Description

Systematic methodologies for optimizing LLM-assisted content remain underdeveloped despite widespread adoption. We introduce IMPRD (Iterative Multi-Perspective Rhetorical Debugging), a cognitive scaffolding methodology that achieves consistent convergence from draft-quality (scores of 7.1-7.8) to publication-ready (8.5-9.0) content through structured multi-persona evaluation. IMPRD employs random sampling of odd-sized panels from persona pools (odd panel sizes prevent tied verdicts, and re-sampling across iterations mitigates local minima), weighted scoring, and explicit convergence criteria. We demonstrate effectiveness across three orders of magnitude in content length, from social media posts (100 words, 1-2 iterations) to blog articles (3,000 words, 3-4 iterations) to book manuscripts (100,000 words, 60+ iterations), with a mean improvement of 1.3 points across all applications (n=28 content pieces). IMPRD extends IMPCD (Iterative Multi-Perspective Conceptual Debugging), which was developed through methodological bootstrapping: recursive self-application until convergence validated the multi-perspective iteration pattern, and this bootstrapping approach provides a template for systematic methodology development. Our results suggest that external cognitive scaffolding through systematic methodology can extend the capabilities of less expensive models toward those of more capable reasoning-focused models, and we propose REASON as a broader framework for developing cognitive scaffolding methodologies that externalize different reasoning patterns for LLM use.
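The core loop the abstract describes (odd-sized persona panels sampled at random, weighted scoring, an explicit convergence threshold, and a revision step) can be sketched compactly. The following Python is a minimal illustration under stated assumptions, not the authors' reference implementation: the persona pool and its weights, the score parser, the thresholds, and the generic `llm(prompt) -> str` callable are all invented here for the sake of the example.

```python
import random
import re

# Illustrative persona pool with evaluation weights. These personas and
# weights are invented for this sketch, not taken from the paper.
PERSONA_POOL = [
    ("structural editor", 1.5),
    ("domain expert", 1.2),
    ("skeptical reviewer", 1.3),
    ("copy editor", 1.0),
    ("general reader", 1.0),
]

TARGET_SCORE = 8.5    # "publication-ready" band from the abstract
MAX_ITERATIONS = 10


def parse_score(reply: str) -> float:
    """Naive parser: pull the first number out of an evaluator reply."""
    match = re.search(r"\d+(?:\.\d+)?", reply)
    return float(match.group()) if match else 0.0


def imprd(draft: str, llm) -> str:
    """Iterate persona-panel evaluation and revision until convergence.

    `llm` is any callable mapping a prompt string to a completion string,
    e.g. a thin wrapper around an API client.
    """
    for _ in range(MAX_ITERATIONS):
        # Sample an odd-sized panel: odd sizes prevent tied verdicts,
        # and a fresh random panel each round varies the critique angle,
        # which helps the draft escape local minima.
        panel = random.sample(PERSONA_POOL, random.choice([3, 5]))

        scores, critiques = [], []
        for persona, weight in panel:
            reply = llm(
                f"As a {persona}, score this draft from 1 to 10 and "
                f"give one concrete revision suggestion.\n\n{draft}"
            )
            scores.append((parse_score(reply), weight))
            critiques.append(reply)

        # Weighted mean score across the sampled panel.
        mean = sum(s * w for s, w in scores) / sum(w for _, w in scores)

        # Explicit convergence criterion: stop once the weighted score
        # reaches the publication-ready band.
        if mean >= TARGET_SCORE:
            break

        # Revision step: fold the panel's critiques back into the draft.
        draft = llm(
            "Revise the following draft to address every critique.\n\n"
            "Critiques:\n" + "\n".join(critiques) + f"\n\nDraft:\n{draft}"
        )
    return draft
```

In practice the panel size, weights, and convergence threshold would presumably be tuned per content type, which would account for the iteration counts the abstract reports growing with content length.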

Files

paper.pdf (405.7 kB, md5:fe2957a40b584eeb8755432dee1ac34b)

Additional details

References

  • Multi-LLM Evaluator Framework. Emergent Mind, 2024. https://www.emergentmind.com/topics/multi-llm-evaluator-framework
  • PersonaMatrix: A Recipe for Persona-Aware Evaluation of Legal Summarization. arXiv preprint arXiv:2509.16449, 2025.
  • Madaan, A., et al. Self-Refine: Iterative Refinement with Self-Feedback. arXiv preprint arXiv:2303.17651, 2023.
  • DPRF: A Generalizable Dynamic Persona Refinement Framework for Optimizing Behavior Alignment Between Personalized LLM Role-Playing Agents and Humans. arXiv preprint arXiv:2510.14205, 2025.
  • Bai, Y., et al. Constitutional AI: Harmlessness from AI Feedback. arXiv preprint arXiv:2212.08073, 2022.
  • Fuzzy, Symbolic, and Contextual: Enhancing LLM Instruction via Cognitive Scaffolding. arXiv preprint arXiv:2508.21204, 2025.
  • Cognitive Foundations for Reasoning and Their Manifestation in LLMs. arXiv preprint arXiv:2511.16660, 2025.
  • Kudina, O., Ballsun-Stanton, B., & Alfano, M. The use of large language models as scaffolds for proleptic reasoning. Asian Journal of Philosophy, 4(1):1-18, 2025.
  • LLM Reasoners: A library for advanced large language model reasoning. GitHub repository, 2025. https://github.com/maitrix-org/llm-reasoners
  • Chancellor, S. Iterative Multi-Perspective Conceptual Debugging (IMPCD): A methodology for philosophical concept refinement through expert panel iteration. Conceptual Refinement Repository, 2026. https://github.com/schancel/conceptual-refinement