Published December 17, 2025 | Version v0.2.1
Publication Embargoed

BERT/GPT with Inner-Thinking Cycles

Description

PoT (Pointer-over-Heads Transformer) is built around a simple idea: instead of producing its output in one forward pass, the model thinks through its representations over several refinement steps.

At the start, every token has an initial embedding — a rough guess of what it means in context. PoT doesn’t stop there. It runs the same Transformer stack R times, updating those embeddings after each pass. At every step, the model looks at its current hidden states and asks:

“Given what I know now, how should I use my attention heads to refine this understanding?”

Each iteration slightly reshapes the embedding space. Tokens move, cluster, and separate as their meanings become sharper and more contextually grounded. This process is not about memorizing — it’s about progressive self-correction. By the final iteration, the embeddings encode a richer, more internally consistent view of the sequence.
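In code, the core loop is tiny. The sketch below (plain PyTorch) illustrates the idea rather than reproducing the repository's implementation; the class name IterativeRefiner and all hyperparameter values are assumed here:

import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Reuse one Transformer stack for R refinement passes (illustrative sketch)."""

    def __init__(self, d_model=256, n_heads=8, n_layers=4, R=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.R = R  # number of inner-thinking cycles

    def forward(self, x):
        h = x  # initial embeddings: each token's rough first guess
        for _ in range(self.R):
            # The same weights are applied on every pass: the model refines
            # its current states instead of adding new layers.
            h = self.encoder(h)
        return h

Because the stack is shared across passes, extra "thinking" costs compute but no extra parameters.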

What makes PoT different is the controller that guides this process. For every token and refinement step, the controller decides how strongly to use each attention head. Some heads specialize in local structure, others in global dependencies or positional cues. By adjusting their mixture across iterations, the model can “compose” reasoning stages — starting with local alignment, then moving toward abstract relations or long-range coherence.
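A minimal sketch of such per-token head routing, assuming a softmax-normalized gate over head outputs (the names HeadController and mix_heads, and the exact weighting scheme, are illustrative rather than PoT's actual mechanism):

import torch
import torch.nn as nn

class HeadController(nn.Module):
    """Produce one weight per attention head from each token's current state."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        self.gate = nn.Linear(d_model, n_heads)

    def forward(self, h):
        # h: (batch, seq, d_model) -> weights: (batch, seq, n_heads)
        return torch.softmax(self.gate(h), dim=-1)

def mix_heads(head_outputs, weights):
    # head_outputs: (batch, seq, n_heads, d_head); weights: (batch, seq, n_heads)
    # Scale each head's output by its weight, then concatenate as usual.
    b, s, n_heads, d_head = head_outputs.shape
    scaled = head_outputs * weights.unsqueeze(-1)
    return scaled.reshape(b, s, n_heads * d_head)

Because the gate reads the current hidden state, the same head can be emphasized at one refinement step and suppressed at the next.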

The controller itself operates on two timescales:

A fast component that adapts on every refinement step — reacting immediately to the evolving state of each token.

A slow component that changes less frequently — maintaining a broader contextual plan that influences the fast dynamics.

Together, they form a kind of hierarchical reasoning loop inside the embedding space. Rather than running deeper networks, PoT deepens its thinking process — continuously refining the meaning of each token until the hidden representations stabilize.
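One way the two timescales might interleave is sketched below, assuming GRU-cell states and a slow_period hyperparameter; these are modeling assumptions for illustration, and the actual controller may differ:

import torch
import torch.nn as nn

class TwoTimescaleController(nn.Module):
    """Fast state updates every refinement step; slow state every slow_period steps."""

    def __init__(self, d_model, n_heads, slow_period=2):
        super().__init__()
        self.slow_period = slow_period
        self.slow = nn.GRUCell(d_model, d_model)      # broad contextual plan
        self.fast = nn.GRUCell(2 * d_model, d_model)  # reacts to every step
        self.gate = nn.Linear(d_model, n_heads)

    def forward(self, h_steps):
        # h_steps: list of per-step token states, each of shape (n_tokens, d_model)
        slow_state = torch.zeros_like(h_steps[0])
        fast_state = torch.zeros_like(h_steps[0])
        gates = []
        for t, h in enumerate(h_steps):
            if t % self.slow_period == 0:
                # Slow timescale: refresh the plan only occasionally.
                slow_state = self.slow(h, slow_state)
            # Fast timescale: update on every step, conditioned on the plan.
            fast_state = self.fast(torch.cat([h, slow_state], dim=-1), fast_state)
            gates.append(torch.softmax(self.gate(fast_state), dim=-1))
        return gates  # per-step head weights for the routing above

Holding the slow state fixed between refreshes gives the fast gate a stable plan to condition on while it reacts to each token's evolving state.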

In other words:

PoT doesn’t just compute token embeddings — it thinks within them, iteratively reorganizing its own representation space to reach a more coherent internal understanding.

Files

Embargoed

The files will be made publicly available on January 1, 2027.

Additional details

Additional titles

Translated title
Iterative Refinement via Dynamic Head Routing

Related works

References
Publication: 10.48550/arXiv.2506.21734 (DOI)
Preprint: 10.48550/arXiv.2510.04871 (DOI)

Dates

Submitted
2025-12-16

Software

Repository URL
https://github.com/Eran-BA/PoT
Programming language
Python
Development Status
Active