Published January 25, 2026 | Version v1
Preprint | Open Access

Beyond Prompting: Two Modes of Knowing in Human-AI Collaboration

Description

The prompting paradigm dominates human-AI collaboration, assuming knowledge is fully articulable before interaction. This assumption fails for tacit knowledge, phenomenological insight, and emergent theoretical understanding. This paper introduces Epistemic Mode Theory (EMT), distinguishing human-side modes from AI-side operations—a distinction that explains documented friction in human-AI collaboration and suggests a new AI training paradigm.

The Mode/Operation Distinction:

Human-Side Modes:

  • Construction Mode: Assembly through specification—the practitioner articulates requirements; output is built from explicit components
  • Abstraction Mode: Emergence through dialogue—the practitioner articulates emerging insight; output is surfaced from latent knowing

AI-Side Operations:

  • Assembly Operation: Building outputs to specification (currently trained via RLHF and instruction-following)
  • Reflecting Operation: Mirroring articulations to enable recognition (currently untrained)

Central Claim: Current AI training optimizes Assembly Operation only; Reflecting Operation remains untrained. This capability gap—not user skill deficit—explains documented friction when practitioners attempt to surface tacit knowledge through prompting interfaces. When Abstraction Mode meets Assembly Operation, the AI interprets tentative articulation as specification and builds unwanted outputs.
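
To make the two-by-two structure of this claim concrete, a minimal Python sketch follows. The names (EpistemicMode, AIOperation, MATCHED) are hypothetical, and the Construction/Reflecting pairing is an added assumption; this is a schematic reading of the description, not the paper's formalism.

```python
from enum import Enum

class EpistemicMode(Enum):
    """Human-side modes (names are illustrative)."""
    CONSTRUCTION = "construction"  # assembly through specification
    ABSTRACTION = "abstraction"    # emergence through dialogue

class AIOperation(Enum):
    """AI-side operations (names are illustrative)."""
    ASSEMBLY = "assembly"      # building outputs to specification
    REFLECTING = "reflecting"  # mirroring articulations to enable recognition

# Per the central claim: friction arises when Abstraction Mode meets Assembly
# Operation, because tentative articulation is read as a specification.
# The Construction/Reflecting pairing is an assumption not discussed above.
MATCHED = {
    (EpistemicMode.CONSTRUCTION, AIOperation.ASSEMBLY): True,
    (EpistemicMode.ABSTRACTION, AIOperation.REFLECTING): True,
    (EpistemicMode.ABSTRACTION, AIOperation.ASSEMBLY): False,
    (EpistemicMode.CONSTRUCTION, AIOperation.REFLECTING): False,
}

def is_matched(mode: EpistemicMode, op: AIOperation) -> bool:
    """Return True when the human mode and the AI operation are aligned."""
    return MATCHED[(mode, op)]
```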

Contributions:

The paper presents the Reflective Amplification Protocol (RAP) as an operational methodology that configures Assembly-trained systems to approximate Reflecting Operation, reporting its application across 897 hours of theoretical development. Evidence demonstrates that mode matching enabled completion of a comprehensive framework ecosystem that construction-based approaches could not adequately support.
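
The description above does not spell out RAP's steps. As a rough sketch of the general idea, configuring an Assembly-trained model to mirror rather than build, one might wrap a generic text-generation call in a reflecting-style instruction; the generate callable and the wording below are hypothetical, not the paper's protocol.

```python
# Minimal sketch only: the paper's RAP steps are not given in this description.
# `generate` stands in for any chat-style text-generation callable (hypothetical),
# and the instruction wording is illustrative, not quoted from the paper.

REFLECTING_INSTRUCTION = (
    "Do not build, solve, or extend what I say. "
    "Mirror my articulation back to me: restate what I seem to be saying, "
    "mark what sounds tentative, and ask whether the restatement matches "
    "what I recognize as my own knowing."
)

def reflect(generate, tentative_articulation: str) -> str:
    """Configure an Assembly-trained system to approximate Reflecting Operation."""
    return generate(system=REFLECTING_INSTRUCTION, user=tentative_articulation)
```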

The distinction suggests Reflecting Training as a new AI development paradigm—using recognition reports ("Did this help you see your own knowing?") rather than quality ratings ("Is this output good?") as the reward signal. This would represent a Copernican shift: positioning the human as the center of knowledge (surfacing their own knowing) rather than the AI (producing outputs).
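
The contrast between the two reward signals can be sketched schematically. The field names, the assumed 1-to-5 rating scale, and both reward functions below are illustrative, not the paper's training setup.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    quality_rating: int        # "Is this output good?" on an assumed 1-5 scale
    recognition_report: bool   # "Did this help you see your own knowing?"

def assembly_reward(fb: Feedback) -> float:
    """Current paradigm (schematic): reward tracks rated output quality."""
    return (fb.quality_rating - 1) / 4.0  # map 1-5 onto 0-1

def reflecting_reward(fb: Feedback) -> float:
    """Reflecting Training (schematic): reward tracks the human's recognition."""
    return 1.0 if fb.recognition_report else 0.0
```

Substituting reflecting_reward for assembly_reward in an otherwise unchanged preference-training loop is the kind of swap the proposed paradigm appears to imply.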

The theory scales beyond human-AI collaboration to human-human and AI-AI contexts because it describes epistemic operations, not interaction types.

Files

CACM_Beyond_Prompting_Mobley_D_D.pdf (16.6 MB)
md5:735ece784ca1a603338472e158297957
