Robust and Compact Neural Computation via Hyperbolic Geometry
Description
Standard deep neural networks, while powerful, suffer from two critical flaws: a lack of robustness to noisy data and an often excessive parameter count. We propose a novel architecture, the Hyperbolic Network (HyperNet), that addresses both issues by performing computation within a non-Euclidean, hyperbolic space. Our model learns to map high-dimensional inputs to a low-dimensional Poincaré Ball manifold, where a "concept library" of ideal class representations resides. Classification is performed by finding the nearest concept using the Poincaré distance, a metric inherent to the geometry of the space. We demonstrate on MNIST that our HyperNet, while being 2x smaller than a comparable CNN baseline, is dramatically more robust. When subjected to extreme additive Gaussian noise (σ=0.6), the HyperNet retains 82.70% accuracy, whereas the standard CNN's performance collapses to 40.81%. This powerful trade-off—sacrificing minimal clean-data accuracy (94.79% vs. 98.73%) for a massive gain in robustness and a significant reduction in size—suggests that leveraging intrinsic geometric properties is a key to building more resilient and efficient AI.
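The nearest-concept classification rule described above can be sketched in a few lines. The function and variable names below (`poincare_distance`, `classify`, `concepts`) are illustrative, not taken from the paper's code; the distance formula is the standard Poincaré-ball metric, d(u, v) = arccosh(1 + 2‖u − v‖² / ((1 − ‖u‖²)(1 − ‖v‖²))).

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    # Poincaré-ball distance:
    # d(u, v) = arccosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    sq_diff = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_diff / max(denom, eps))

def classify(embedding, concepts):
    # Nearest-concept rule: return the index of the ideal class
    # representation closest to the embedding in Poincaré distance.
    dists = [poincare_distance(embedding, c) for c in concepts]
    return int(np.argmin(dists))

# Toy example: two "concepts" on opposite sides of the 2-D ball.
concepts = np.array([[0.6, 0.0], [-0.6, 0.0]])
x = np.array([0.5, 0.1])      # a hypothetical embedded input
print(classify(x, concepts))  # → 0 (nearer the first concept)
```

Because the metric blows up near the boundary of the ball, embeddings (and concepts) must have norm strictly less than 1; the `eps` guard only protects against numerical underflow in the denominator.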
Files
article.pdf (399.0 kB)

Name | Size
---|---
md5:4e4886900a01cff86497e0f3ce6e6af0 | 251.3 kB
md5:52d15153c0f0aa0dd205aa7e6d16333a | 147.7 kB
Additional details
Dates
- Submitted: 2025-09-04