Rethinking Machine Learning: From Statistical Abstractions to Empirical Mechanisms
Description
Theoretical accounts of machine learning were largely developed at a time when learning systems exhibited limited empirical capability, and they frame learning in terms of statistical concentration, convergence-based guarantees, and classical notions of generalization. Modern machine learning systems, however, achieve robust and scalable performance in settings that do not conform to the assumptions underlying these theories, exposing a systematic misalignment between theoretical explanation and observed behavior. In this work, we re-examine several foundational concepts inherited from classical learning theory (statistical concentration, convergence-based guarantees, classical generalization criteria, and likelihood-based formulations) and show why their traditional interpretations fail to account for the behavior of empirical machine learning systems. We argue that the effectiveness of machine learning does not arise from probabilistic inference or mathematical guarantees, but from the expressive capacity of parameterized models interacting with data. From this perspective, machine learning is best understood as a data-driven engineering discipline rather than as a mathematical theory of learning from samples.
Files
Rethinking_ML.pdf (223.9 kB)
md5:498a97d063ccb6fe1d60af8a81fbb258