Published February 13, 2026 | Version V1.0
Preprint (Open Access)

Image-Text Cross-Validation Methodology for Ancient Technical Manuscripts

Description

This paper presents a cross-validation methodology for the Voynich Manuscript and other ancient manuscripts that contain both text and images. Based on a proposed Software–Hardware Separation Principle, this work observes that in the Voynich Manuscript the text encodes functional states ("software") while the images encode component specifications ("hardware").

Key contributions:

  • Blind test protocol achieving high distributional overlap between text-based predictions and observed imagery

  • ABCD + Gold classification algorithm for visual feature prediction from KST-standardized text

  • Open-source verification guide with prompts and step-by-step instructions for independent replication

  • Dragon root case study (f25v): text-based prediction of dragon imagery subsequently verified in the actual manuscript
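The "high distributional overlap" in the blind test could, for illustration, be quantified with a histogram-intersection score between predicted and observed visual-feature labels. The sketch below is an assumption for illustration only: the category labels (`"A"`, `"B"`, `"C"`, `"Gold"`) and the metric itself are hypothetical placeholders, not the paper's actual protocol or feature classes.

```python
from collections import Counter

def distribution_overlap(predicted, observed):
    """Histogram intersection between two categorical label samples.

    Returns a value in [0, 1]; 1.0 means the two empirical
    distributions are identical. Hypothetical metric, not the
    paper's documented scoring procedure.
    """
    p, q = Counter(predicted), Counter(observed)
    n_p, n_q = sum(p.values()), sum(q.values())
    categories = set(p) | set(q)
    # Sum, over all categories, the smaller of the two relative frequencies.
    return sum(min(p[c] / n_p, q[c] / n_q) for c in categories)

# Hypothetical example: predicted vs. observed feature labels
pred = ["A", "A", "B", "C", "Gold"]
obs = ["A", "B", "B", "C", "Gold"]
print(round(distribution_overlap(pred, obs), 2))
```

A score near 1.0 would indicate that the text-based predictions and the observed imagery draw on nearly the same distribution of feature classes.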

While the Voynich Manuscript is the primary test case, the methodology is designed to generalize to other ancient technical manuscripts that combine text and imagery.

This research is a human–AI collaboration by the BlazeCipher team. The collaboration protocol is documented in Paper #1 and the project README.

Related papers:

  • Paper #1: KST Methodology (DOI: 10.5281/zenodo.18483132)

  • Paper #2: English Abbreviation Hypothesis (DOI: 10.5281/zenodo.18483785)

  • Paper #4: ABCD + Gold Classification (DOI: 10.5281/zenodo.18627514)

Files

Paper_03_Image_Text_CrossValidation_Chinese_v2_1.pdf (525.1 kB)
