The Illusion of Competence: Why Neural Networks Cannot Perceive Logical Boundaries
Description
Neuro-Symbolic AI aims to bridge the gap between connectionist pattern matching and symbolic reasoning. However, a fundamental question remains: Can a neural network, optimized via gradient descent, truly “learn” a strict logical rule? In this paper, we investigate the intrinsic limitations of neural networks in learning logical boundaries. We introduce the distinction between Pattern Learning and Rule Following. By constructing a Vectorized Structured Logic Network (V-SLN) to learn the Reichenbach Implication function, we reveal a critical decoupling phenomenon: while the network successfully approximates the continuous functional manifold (MSE < 10⁻⁶), it systematically fails to capture the discrete topological constraints at logical boundaries (Error > 10⁻²). We extend this diagnostic probing to Large Language Models, identifying a persistent “Semantic Cliff” phenomenon. Our analysis suggests that while parameter scaling steepens the sigmoid transition, the underlying continuous representation remains a significant barrier. We provide a mathematical proof rooted in Lipschitz continuity to explain this “Boundary Blindness.” Finally, we propose the Heterogeneous Logic Neural Network (H-LNN). By decoupling functional pathways into parallel Analog, Steep, and Binary lanes, and employing a Straight-Through Estimator (STE) to bypass gradient vanishing, H-LNN achieves deterministic boundary locking. Our findings suggest that high confidence in LLMs is often an “Illusion of Competence” born from smoothing over logical cliffs, and true robustness requires architectural heterogeneity.
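For readers who want to experiment with the setup sketched in the abstract, the snippet below is a minimal illustration, under stated assumptions, of two of its ingredients: the Reichenbach implication, conventionally defined as I(a, b) = 1 − a + a·b, used here as a regression target, and a straight-through estimator (STE) that applies a hard 0/1 threshold in the forward pass while passing gradients through unchanged. The three-lane module (`ToyHeterogeneousLane`) is a toy stand-in, not the paper's H-LNN; its lane names, layer widths, and steepness constant are illustrative assumptions.

```python
import torch
import torch.nn as nn


def reichenbach_implication(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Reichenbach (probabilistic) implication: I(a, b) = 1 - a + a*b."""
    return 1.0 - a + a * b


class BinarizeSTE(torch.autograd.Function):
    """Hard threshold in the forward pass; identity ('straight-through')
    gradient in the backward pass, so a learning signal survives the step."""

    @staticmethod
    def forward(ctx, x):
        return (x > 0.5).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output


class ToyHeterogeneousLane(nn.Module):
    """Illustrative three-lane block (an assumption, not the paper's H-LNN):
    an analog lane, a steep-sigmoid lane, and a binary lane trained through
    the straight-through estimator above."""

    def __init__(self, hidden: int = 16, steepness: float = 20.0):
        super().__init__()
        self.analog = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.steep = nn.Linear(2, 1)
        self.binary = nn.Linear(2, 1)
        self.steepness = steepness
        self.mix = nn.Linear(3, 1)  # learned combination of the three lanes

    def forward(self, ab: torch.Tensor) -> torch.Tensor:
        analog_out = self.analog(ab)
        steep_out = torch.sigmoid(self.steepness * self.steep(ab))
        binary_out = BinarizeSTE.apply(torch.sigmoid(self.binary(ab)))
        return self.mix(torch.cat([analog_out, steep_out, binary_out], dim=-1))


if __name__ == "__main__":
    # Regress the toy model toward the Reichenbach implication on random inputs.
    a, b = torch.rand(4, 1), torch.rand(4, 1)
    target = reichenbach_implication(a, b)
    model = ToyHeterogeneousLane()
    pred = model(torch.cat([a, b], dim=-1))
    print(target.squeeze(), pred.squeeze())
```

The STE is the relevant design choice here: a hard threshold has zero gradient almost everywhere, so without it the binary lane would receive no learning signal, which is the gradient-vanishing problem the abstract says the estimator is used to bypass.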
Files
| Name | Size | MD5 |
|---|---|---|
| The_Illusion_of_Competence__Why_Neural_Networks_Cannot_Perceive_Logical_Boundaries.pdf | 1.5 MB | 7a441b7fc2d81039e0f50a50171bb2dc |