Published March 11, 2026 | Version v2

Natural-Synthesis-8B

Description

This technical report presents Natural-Synthesis-8B, an experimental fine-tune of Llama-3-8B trained on a synthetic dataset of 68 examples designed to instill a biologically inspired reasoning paradigm rather than domain-specific knowledge.
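
The repository name indicates the released weights are a LoRA merge. As a minimal sketch of what such a fine-tune might look like (the hyperparameters and target modules here are hypothetical; the report does not state them):

```python
# Minimal LoRA fine-tuning sketch. Hyperparameters are illustrative only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach LoRA adapters to the attention projections (a common default;
# the modules actually used for Natural-Synthesis-8B are not documented).
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# The 68-example synthetic dataset published alongside the model.
dataset = load_dataset("JPQ24/Natural_synthesis")
```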

The central hypothesis is that reasoning process and reasoning content are separable learning targets. Standard fine-tuning teaches models what to think. This work teaches a model how to think — specifically, through a five-stage cognitive growth cycle (Seed → Root Exploration → Principled Pruning → Canopy Formation → Homeostatic Review) governed by five evaluative nutrients (Coherence, Parsimony, Explanatory Power, Fecundity, Evidential Grounding).
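
The cycle and nutrients can be read as a checklist the model is trained to walk through. A minimal sketch, with a hypothetical scoring rule chosen only to make the structure concrete (the report names the stages and nutrients but gives no numeric gating rule):

```python
from enum import Enum

# The five-stage growth cycle, encoded as plain data.
class Stage(Enum):
    SEED = "Seed"
    ROOT_EXPLORATION = "Root Exploration"
    PRINCIPLED_PRUNING = "Principled Pruning"
    CANOPY_FORMATION = "Canopy Formation"
    HOMEOSTATIC_REVIEW = "Homeostatic Review"

NUTRIENTS = ("Coherence", "Parsimony", "Explanatory Power",
             "Fecundity", "Evidential Grounding")

def homeostatic_review(scores: dict[str, float],
                       threshold: float = 0.6) -> list[str]:
    """Return the nutrients scoring below threshold, i.e. the dimensions
    a draft answer would be revised on before the cycle completes.
    (Threshold and scoring are hypothetical, not from the report.)"""
    return [n for n in NUTRIENTS if scores.get(n, 0.0) < threshold]

# Example: a draft that is coherent but poorly grounded gets flagged.
flags = homeostatic_review({"Coherence": 0.9, "Parsimony": 0.8,
                            "Explanatory Power": 0.7, "Fecundity": 0.7,
                            "Evidential Grounding": 0.3})
print(flags)  # ['Evidential Grounding']
```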

The result is a model that demonstrates consistent cross-domain structural reasoning, emergent systems thinking, and selective metacognitive activation at 8B parameters, capabilities that do not appear reliably in the base model. On custom evaluations, the fine-tuned model shows an 18% gain in cognitive flexibility relative to the base model.
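
For clarity, one conventional reading of that figure is as a relative improvement; the report does not specify whether the gain is relative or in absolute percentage points, so this interpretation is an assumption:

```python
# Relative-gain reading of the "18% gain" (an assumption; the report
# may instead mean absolute percentage points).
def relative_gain(fine_tuned: float, base: float) -> float:
    return (fine_tuned - base) / base

# e.g. a base score of 0.50 and a fine-tuned score of 0.59 is an 18% gain
assert abs(relative_gain(0.59, 0.50) - 0.18) < 1e-9
```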

This report documents the paradigm, training methodology, benchmark comparisons, qualitative behavioral evidence, and a frank analysis of failure modes — including the coherence-without-truth problem inherent to any coherence-optimized reasoning system.

The model is available at: https://huggingface.co/JPQ24/llama-3-8b-Natural-synthesis-Lora-Merge

The dataset is available at: https://huggingface.co/datasets/JPQ24/Natural_synthesis

Files

One file (28.9 kB), md5:dbc428c2a00d86c6d220b57101af2491

Additional details

Dates

Created
2026-03-11