Published February 7, 2026 | Version v2
Preprint · Open Access

Toroidal Logit Bias for Hallucination Reduction in Large Language Models

Authors/Creators

  • (Paraxiom Research)

Description

  v1.1 - Added TruthfulQA evaluation (817 samples)

  Key Results:
  - Custom benchmark (100 prompts): +40% error reduction (Qwen), +15.4% (OLMo)
  - TruthfulQA (817 prompts): +6.8% error reduction (Qwen)
    - Paired analysis: 46 improvements vs 32 regressions (McNemar p=0.14)
    - Consistent directional improvement (b > c)
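The paired counts above determine McNemar's statistic directly. A minimal Rust sketch (Rust being the repository's language), using the continuity-corrected chi-square form; the statistic for b = 46, c = 32 is about 2.17, below the 3.841 critical value at 1 degree of freedom, consistent with the reported p = 0.14. The function name is illustrative, not from the paper's code:

```rust
/// Continuity-corrected McNemar chi-square statistic:
/// (|b - c| - 1)^2 / (b + c), where b and c are the discordant
/// pair counts (improvements vs. regressions).
fn mcnemar_chi2(b: u32, c: u32) -> f64 {
    let diff = (b as f64 - c as f64).abs() - 1.0;
    diff * diff / (b + c) as f64
}

fn main() {
    // Counts from the TruthfulQA paired analysis in the abstract.
    let (b, c) = (46u32, 32u32);
    let chi2 = mcnemar_chi2(b, c);
    // 3.841 is the chi-square critical value for p = 0.05 at 1 df.
    println!("chi2 = {:.3}, significant at 0.05: {}", chi2, chi2 > 3.841);
    println!("directional improvement (b > c): {}", b > c);
}
```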

  Method: Inference-time toroidal logit bias. No fine-tuning is required; latency
  overhead is ~5%.
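The abstract does not specify the functional form of the bias. The Rust sketch below is a hypothetical illustration only: it assumes each candidate token carries a phase on a torus and the bias is a scaled cosine of that phase. The function name `apply_toroidal_bias`, the phase assignment, and the strength parameter are all assumptions for illustration, not the paper's method:

```rust
use std::f32::consts::TAU;

/// Hypothetical inference-time bias: add a periodic (toroidal) term to
/// each logit based on a per-token phase angle in [0, TAU). Tokens with
/// phase near 0 are boosted; tokens with phase near PI are suppressed.
/// How phases would be derived from the model is not specified here.
fn apply_toroidal_bias(logits: &mut [f32], phases: &[f32], strength: f32) {
    for (logit, &phase) in logits.iter_mut().zip(phases) {
        // cos is periodic, so the bias wraps around the torus seamlessly.
        *logit += strength * phase.cos();
    }
}

fn main() {
    // Toy logits for three candidate tokens (illustrative values).
    let mut logits = vec![1.0_f32, 0.5, -0.2];
    let phases = vec![0.0_f32, TAU / 2.0, TAU / 4.0];
    apply_toroidal_bias(&mut logits, &phases, 0.1);
    println!("{:?}", logits);
}
```

Because the bias is a fixed elementwise pass over the logit vector, it adds only O(vocabulary) work per decoding step, which is in line with the small latency overhead reported above.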

  Scope: This work focuses narrowly on an inference-time intervention for
  hallucination reduction. It makes no claims about ontology, training dynamics,
  or universal representations. The contribution is operational and empirical.

  Changelog v1.1:
  - Added TruthfulQA evaluation (817 samples) with generation-based matching
  - Added paired McNemar's test analysis
  - Confirmed directional improvement across both benchmarks

Files (233.5 kB)

  toroidal_hallucination_reduction_2026.pdf (233.5 kB)
  md5:fec160ab0127c700e352e3828cae4064

Additional details

Software

Repository URL
https://github.com/Paraxiom/topological-coherence
Programming language
Rust