Recursive KL Divergence Optimization: A Dynamic Framework for Representation Learning
Description
We propose a generalization of modern representation learning objectives by reframing them as recursive divergence alignment processes over localized conditional distributions. While recent frameworks like Information Contrastive Learning (I-Con) unify multiple learning paradigms through KL divergence between fixed neighborhood conditionals, we argue this view underplays a crucial recursive structure inherent in the learning process. We introduce Recursive KL Divergence Optimization (RKDO), a dynamic formalism in which representation learning is framed as the evolution of KL divergences across data neighborhoods. This formulation captures contrastive, clustering, and dimensionality reduction methods as static slices, while offering a new path to model stability and local adaptation. Our experiments demonstrate that RKDO offers dual efficiency advantages: approximately 30% lower loss values compared to static approaches across three different datasets, and a 60-80% reduction in the computational resources needed to achieve comparable results. This suggests that RKDO's recursive updating mechanism provides a fundamentally more efficient optimization landscape for representation learning, with significant implications for resource-constrained applications.
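
The recursive mechanism described above can be pictured as a KL objective whose target neighborhood distribution is re-estimated at every step rather than held fixed. The sketch below illustrates one plausible form of that loop in PyTorch; the function names, the softmax neighborhood construction, and the EMA coefficient `alpha` are illustrative assumptions for exposition, not the repository's actual API (see the linked GitHub repository for the implementation used in the paper).

```python
import torch
import torch.nn.functional as F


def neighborhood_log_conditionals(z: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Log of a row-stochastic neighborhood matrix log q(j|i): softmax over
    pairwise similarities, with self-pairs masked out."""
    sim = z @ z.T / temperature
    mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, -1e9)  # exclude j == i from each neighborhood
    return F.log_softmax(sim, dim=1)


def rkdo_step(z: torch.Tensor, target: torch.Tensor, alpha: float = 0.1):
    """One recursive update of a hypothetical RKDO-style objective:
    1. minimize KL(target || q) w.r.t. the current embeddings' conditionals q;
    2. let the target drift toward q (an EMA), so the divergence being optimized
       is re-anchored each step; alpha = 0 recovers a static, fixed-target slice."""
    log_q = neighborhood_log_conditionals(z)
    loss = F.kl_div(log_q, target, reduction="batchmean")             # KL(target || q)
    new_target = (1 - alpha) * target + alpha * log_q.exp().detach()  # recursive re-anchoring
    return loss, new_target


# Toy usage: one optimization step on random data, target seeded from input space.
x = torch.randn(128, 32)
encoder = torch.nn.Linear(32, 16)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

target = neighborhood_log_conditionals(F.normalize(x, dim=1)).exp()
z = F.normalize(encoder(x), dim=1)
loss, target = rkdo_step(z, target)
loss.backward()
opt.step()
```

In this reading, `alpha = 0` corresponds to the static, I-Con-style objective with a fixed neighborhood target, while `alpha > 0` lets the divergence landscape evolve with the representation, which is the recursion the description refers to.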
Files
| Name | Size | MD5 |
|---|---|---|
| arxiv-RKDO.pdf | 496.4 kB | c869b211b8b8b1ceea3f8f60f9a093dc |
Additional details
Dates
- Created: 2025-04-30
Software
- Repository URL: https://github.com/anthonymartin/RKDO-recursive-kl-divergence-optimization
- Programming language: Python
- Development Status: Active