Published March 18, 2026 | Version 0.1.0
Preprint | Open Access

Scalability of Latent Factor Posteriors to Varying Evidence Pool Sizes

  • Epalea

Description

We present a comprehensive empirical study examining the scalability characteristics of Latent Posterior Factor (LPF) models across varying evidence pool sizes. Through systematic experiments spanning pool sizes from 10 to 500 evidence pieces per entity, we demonstrate that both LPF-SPN and LPF-Learned architectures maintain robust performance while exhibiting distinct scaling behaviours. Our findings reveal that LPF-SPN achieves superior calibration (ECE = 0.050–0.163) with computational efficiency (14–15 ms inference), while LPF-Learned attains near-perfect accuracy (98.5–100%) at the cost of increased latency (35–42 ms). Notably, performance remains stable across a 50× increase in evidence volume, validating the architectural design for real-world deployment scenarios where knowledge bases accumulate evidence over time.
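The calibration figures above are reported as expected calibration error (ECE). As a point of reference, ECE is the bin-weighted average gap between predicted confidence and empirical accuracy; a minimal sketch of the standard equal-width-bin computation (not code from this paper) is:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: the weighted mean absolute gap between
    mean confidence and empirical accuracy over equal-width confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Predictions whose confidence falls in the bin (lo, hi].
        mask = (confidences > lo) & (confidences <= hi)
        if not mask.any():
            continue
        gap = abs(correct[mask].mean() - confidences[mask].mean())
        ece += (mask.sum() / n) * gap  # weight gap by bin occupancy
    return ece
```

On this scale the reported LPF-SPN range of 0.050–0.163 means predicted confidences deviate from observed accuracy by roughly 5–16 percentage points on average.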

Keywords:
Scalability analysis, Latent Posterior Factors (LPF), evidence aggregation, multi-evidence reasoning, large-scale AI systems, probabilistic reasoning, neural-symbolic AI, sum-product networks (SPN), learned aggregation, uncertainty calibration, expected calibration error (ECE), high-throughput inference, model efficiency, performance scaling, knowledge base systems, machine learning scalability, real-world deployment AI
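The latency and scaling claims (14–15 ms for LPF-SPN, 35–42 ms for LPF-Learned, stable across a 50× increase in pool size) rest on per-query timing at each pool size. A generic harness for that kind of measurement might look like the sketch below; `mean_aggregator` and all names are illustrative stand-ins, not the paper's models:

```python
import time
import statistics

def benchmark_latency(model_fn, pool_sizes, trials=50):
    """Median per-query inference latency (ms) of an evidence aggregator
    as the evidence pool per entity grows."""
    results = {}
    for n in pool_sizes:
        evidence = [0.5] * n  # placeholder evidence scores for one entity
        times = []
        for _ in range(trials):
            t0 = time.perf_counter()
            model_fn(evidence)
            times.append((time.perf_counter() - t0) * 1e3)  # seconds -> ms
        results[n] = statistics.median(times)
    return results

# Hypothetical aggregator standing in for an LPF model:
def mean_aggregator(evidence):
    return sum(evidence) / len(evidence)

# Pool sizes mirroring the study's 10-to-500 range:
latencies = benchmark_latency(mean_aggregator, [10, 50, 100, 500])
```

Plotting the resulting medians against pool size is one way to check whether latency stays flat (as reported here) or grows with evidence volume.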

Files (584.3 kB)

main.pdf — 584.3 kB
md5:5752f6ca45128c3a1784308e909112d6

Additional details

Related works

Is supplemented by
Preprint: 10.5281/zenodo.19183861 (DOI)
Preprint: 10.5281/zenodo.19184458 (DOI)
Preprint: arXiv:2603.15670 (arXiv)
Preprint: arXiv:2603.15674 (arXiv)

Software

Repository URL
https://github.com/aaaEpalea/epalea.git
Programming language
Python
Development Status
Active