Published December 27, 2025 | Version v1
Preprint · Open Access

An Inductive Approach to LLM-Assisted Meta-Synthesis: From Normalization of Deviance to Generalized Ontology Extraction

  Affiliation: Independent Researcher

Description

  Abstract

 

  Systematic analysis of academic literature increasingly requires extraction of structured information from theoretical texts—a task that challenges both traditional manual coding and naive computational approaches. Sedlar et al. (2023), in their systematic review of normalization of deviance, concluded that "behavioral research is desperately needed to support the mostly conceptual nature of the academic literature" (p. 303). Yet their own review—constrained by a single coder and 33 papers—exemplifies the scalability limitations they identified. This paper presents a methodology that directly addresses this gap: adapting large language models to extract structured assertions through parameter-efficient fine-tuning, enabling the kind of comprehensive literature analysis that manual methods cannot achieve.

 

  We describe schema design principles that preserve theoretical nuance while enabling quantitative aggregation, training data construction procedures that maintain interpretive validity, and fidelity verification protocols that integrate human oversight with computational efficiency. The methodology is demonstrated through application to normalization of deviance literature, yielding 5,678 classified tokens across 27 source documents—including the foundational Vaughan corpus that Sedlar's aerospace-excluding methodology omitted. Our extraction reveals that 67% of Vaughan's core mechanisms (practical drift, social construction of risk) are absent from Sedlar's framework—a "Type II Error Irony" wherein a framework designed to detect missed risks itself misses critical mechanisms.

 

  While the demonstration domain is organizational safety research, the approach generalizes to any meta-synthesis requiring structured extraction from theoretical texts. The human-AI research workflow documented here—wherein researchers establish standard operating procedures with AI agents, deploy fine-tuned extraction models, and close work orders through fidelity verification—provides a replicable template for rigorous computational scholarship.

 

  ---

  Keywords

 

  fine-tuning, large language models, meta-synthesis, literature analysis, assertion extraction, systematic review, normalization of deviance, human-AI collaboration, qualitative research methodology

Files

nod_extraction.pdf (111.1 kB)
md5:21497bbb88a899b489aaab3a2edad5e1

Additional details

Dates

Issued: 2025-12-26