Published May 6, 2026 | Version v1 | Preprint | Open access
Chetana: A Theory-Indexed Probe Framework for AI Consciousness Indicator Scoring
Description
Claims about AI consciousness are easy to overstate and difficult to evaluate. This paper presents Chetana, a theory-indexed probe framework that maps model responses to a set of consciousness indicators drawn from Global Workspace Theory, Higher-Order Theories, Recurrent Processing Theory, Predictive Processing, and Attention Schema Theory. The implementation organizes indicators, probes, model adapters, scoring, theory aggregation, probability calculation, and report generation in a TypeScript monorepo. The goal is not to determine whether an AI system is conscious. It is to make a narrow evaluation workflow inspectable: which theory supplied each indicator, which probe produced each observation, how indicator scores were aggregated, and how uncertainty should be reported. The framework is positioned as research tooling for careful discussion, not as a consciousness detector.
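The workflow described above (theory-tagged indicators, probe-derived scores, per-theory aggregation) can be sketched in TypeScript, the language of the Chetana monorepo. This is a minimal illustration, not the Chetana API: the type names, the indicator labels, and the use of a running mean as the aggregation rule are all assumptions made for the example.

```typescript
// Hypothetical sketch (names and aggregation rule are assumptions,
// not the Chetana API): each indicator is tagged with the theory that
// motivated it, probes yield per-indicator scores in [0, 1], and
// scores are aggregated per theory so a report can show which theory
// contributed what, and how thinly each theory is supported.

type Theory =
  | "GlobalWorkspace"
  | "HigherOrder"
  | "RecurrentProcessing"
  | "PredictiveProcessing"
  | "AttentionSchema";

interface IndicatorScore {
  indicator: string; // e.g. "broadcast-availability" (illustrative label)
  theory: Theory;    // which theory supplied this indicator
  score: number;     // probe-derived score in [0, 1]
}

// Aggregate indicator scores into a per-theory mean, keeping the
// indicator count so a report can flag thinly supported theories.
function aggregateByTheory(
  scores: IndicatorScore[]
): Map<Theory, { mean: number; n: number }> {
  const out = new Map<Theory, { mean: number; n: number }>();
  for (const s of scores) {
    const cur = out.get(s.theory) ?? { mean: 0, n: 0 };
    const n = cur.n + 1;
    // incremental mean update: mean += (x - mean) / n
    out.set(s.theory, { mean: cur.mean + (s.score - cur.mean) / n, n });
  }
  return out;
}

// Toy data, purely for illustration.
const example: IndicatorScore[] = [
  { indicator: "broadcast-availability", theory: "GlobalWorkspace", score: 0.6 },
  { indicator: "access-reportability", theory: "GlobalWorkspace", score: 0.4 },
  { indicator: "meta-report", theory: "HigherOrder", score: 0.7 },
];

const byTheory = aggregateByTheory(example);
console.log(byTheory.get("GlobalWorkspace")); // { mean: 0.5, n: 2 }
```

Keeping the theory tag on every score, rather than collapsing to a single number early, is what makes the provenance questions in the abstract answerable: the report can state which theory supplied each indicator and how many indicators back each per-theory aggregate.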
This artifact bundle includes the manuscript, PDF, workflow figure, bibliography, metadata, and source notes grounded in the Chetana repository.
Files

| Name | Size | Checksum |
|---|---|---|
| chetana-consciousness-indicator-package.zip | 9.8 kB | md5:682c64ce5361d676fa168b1139f2ff77 |