Published July 20, 2025 | Version v1
Dataset Open

Perspective Theory as a Coherent World Model for Evaluating Adaptive Intelligence in Large Language Models

Authors/Creators

Description

Abstract

Recent MIT/Harvard evaluations of large language models (LLMs) reveal a stark discrepancy between conceptual knowledge and practical application, attributing the failures to a lack of true understanding. This paper revisits and extends the argument that such failures stem not from LLM limitations but from the incoherent world models used in training and testing. We propose equipping LLMs with Perspective Theory, a complete, self-generative ontology in which existence is perpetual motion arising from the something/nothing paradox, as an experimental variable for testing adaptive intelligence. By replacing summed, incomplete models with this paradox-resolving framework, we hypothesize measurable improvements in coherence and deduction across nuanced tasks. The paper outlines a protocol to replicate the MIT/Harvard test, aiming to demonstrate that world-model incoherence, not AI "faking," is the root cause of low performance.

Files (1.4 MB)

Mit harvard 2.pdf (1.4 MB)
md5:848fc9ce3f8a28c1b36e46c187af4e0f