Published April 15, 2026 | Version v2
Working paper Open

Grounding Large Language Models with Tensor Network Coefficients to Force Deterministic Data Analysis

  • Dynsell Quantum Research

Description

Large language models are mathematically probabilistic, so their output can be inaccurate, and these hallucinations worsen as data becomes more entangled or complex. This paper introduces AURA, a new computational methodology for natural-language data analysis. AURA's methods and structure lay the foundation for a data-analysis and evaluation workspace, or integration, that eliminates this probabilistic problem entirely. The methodology defines all statistical relationships within a given dataset through a deterministic tensor-network mechanism: it constructs Hamiltonian values from inter-column Pearson correlations, and solving the Hamiltonian yields the same statistical relationships as the original dataset. The resulting verified coefficients are passed to the LLM's context window, creating a deterministic environment for the LLM to operate within, so every cited number becomes computationally reproducible. The implementation consists of four required components. The Translation Ledger maintains an auditable record of all compilations for human verification, including records of cases where deterministic deviations become possible (inaccuracy arising from data-quality issues or ambiguous user queries). Schema-Aware Context standardizes column metadata during natural-language query composition, supporting contextual narrative building alongside deterministic coefficient coupling. Deterministic Render Blocks present dynamic, auditable outputs with dual SLA receipts. Finally, the Iteration Chain stacks queries into a traceable drill-down history. The paper reports 15 distinct validation tests across 2.6 million rows; the computational methods achieved zero correlation difference against independent NumPy/SciPy ground truth.
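The core pipeline described above (inter-column Pearson correlations used as deterministic coupling coefficients, cross-checked against independent NumPy/SciPy ground truth) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact Hamiltonian construction is not given here, so the Ising-style choice of using each off-diagonal correlation r_ij as a coupling J_ij is an assumption, and the synthetic 4-column dataset is purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)        # fixed seed: determinism is the point
data = rng.normal(size=(1000, 4))     # stand-in for a 4-column dataset
data[:, 1] += 0.8 * data[:, 0]        # inject a known inter-column correlation

# Deterministic inter-column Pearson correlation matrix.
corr = np.corrcoef(data, rowvar=False)

# Hypothetical Ising-style coupling construction: take each off-diagonal
# Pearson coefficient r_ij as the coupling J_ij (an assumption; the
# paper's exact Hamiltonian is not specified in this abstract).
J = corr - np.eye(corr.shape[1])

# Independent ground truth via scipy.stats.pearsonr, mirroring the
# paper's zero-correlation-difference validation claim.
for i in range(4):
    for j in range(i + 1, 4):
        r, _ = stats.pearsonr(data[:, i], data[:, j])
        assert abs(J[i, j] - r) < 1e-12
```

Because every step is a closed-form computation on the dataset, rerunning the sketch with the same seed reproduces the coefficients bit-for-bit, which is the property the verified coefficients rely on.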
The system will be presented as a Dynsell Quantum Research service, which will demonstrate the full breadth of this research.
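The step of passing verified coefficients into the LLM's context window might look like the sketch below. The field names ("schema", "coefficients") and the JSON layout are hypothetical, not the paper's Schema-Aware Context format; the point is that a canonical, sorted serialization keeps the injected context byte-stable across runs.

```python
import json

def render_context(columns, coupling):
    """Render verified pairwise coefficients as a stable context block.

    `columns` is the list of column names; `coupling` is the symmetric
    coefficient matrix (nested lists). Both names are illustrative.
    """
    entries = {
        f"{columns[i]}~{columns[j]}": round(coupling[i][j], 6)
        for i in range(len(columns))
        for j in range(i + 1, len(columns))
    }
    # sort_keys makes the serialized prompt deterministic byte-for-byte.
    return json.dumps({"schema": columns, "coefficients": entries},
                      sort_keys=True)

ctx = render_context(["price", "volume"], [[1.0, 0.62], [0.62, 1.0]])
```

The resulting string would be prepended to the user's query, so any number the model cites can be traced back to a verified coefficient rather than a sampled guess.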

Files

Grounding a Large Language Model with Tensor Network Coefficients to Force Deterministic Data Analysis.pdf

Additional details

Related works

Is previous version of
Working paper: 10.5281/zenodo.19588742 (DOI)

Funding

European Commission
Fluently - the essence of human-robot interaction (grant 101058680)

Dates

Copyrighted
2026-04-14

Software

Development Status
Active

References

  • OpenAI. "ChatGPT Code Interpreter." OpenAI Blog, 2023. https://openai.com/blog/chatgpt-plugins
  • Ribeiro, M. T., Singh, S., and Guestrin, C. "'Why Should I Trust You?': Explaining the Predictions of Any Classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
  • Lundberg, S. M. and Lee, S.-I. "A Unified Approach to Interpreting Model Predictions." Advances in Neural Information Processing Systems, 30, 2017.
  • Orús, R. "A Practical Introduction to Tensor Networks: Matrix Product States and Projected Entangled Pair States." Annals of Physics, 349, pp. 117-158, 2014.
  • Stoudenmire, E. and Schwab, D. J. "Supervised Learning with a Quantum-Inspired Tensor Network." Advances in Neural Information Processing Systems, 29, 2016.
  • Lucas, A. "Ising formulations of many NP problems." Frontiers in Physics, 2:5, 2014.
  • Hairer, E., Lubich, C., and Wanner, G. Geometric Numerical Integration: Structure-Preserving Algorithms for Ordinary Differential Equations. Springer, 2006.
  • Ji, Z., Lee, N., Frieske, R., et al. "Survey of Hallucination in Natural Language Generation." ACM Computing Surveys, 55(12), 2023.
  • Lewis, P., Perez, E., Piktus, A., et al. "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks." Advances in Neural Information Processing Systems, 33, 2020.
  • Mehta, R. and Zhu, R. "Blue or Red? Exploring the Effect of Color on Cognitive Task Performances." Science, 323(5918), pp. 1226-1229, 2009.
  • Doshi-Velez, F. and Kim, B. "Towards A Rigorous Science of Interpretable Machine Learning." arXiv:1702.08608, 2017.
  • Wang, J., Roberts, C., Vidal, G., and Leichenauer, S. "Anomaly detection with tensor networks." arXiv:2006.02516, 2020.
  • Han, Z.-Y., Wang, J., Fan, H., Wang, L., and Zhang, P. "Unsupervised Generative Modeling Using Matrix Product States." Physical Review X, 8(3), 031012, 2018.
  • Garcez, A. d'A. and Lamb, L. C. "Neurosymbolic AI: The 3rd Wave." Artificial Intelligence Review, 56, pp. 12387-12406, 2023.