Understanding Artificial Neural Networks: Mysterianism about Known Mechanism is Mysticism
Description
Mysterianism is the idea that human cognition, the mind, cannot be understood. Taking this concept and applying it to known mechanism — claiming that we do not know how engineered systems, such as artificial neural networks (ANNs), work, or that they constitute black boxes that can only be opened with difficulty — is inappropriate at best and malicious at worst. We do know the mechanistic structure of such models, because we designed and built them. We also know their functional role (what they are for) and the mathematical function they are asked to approximate (mapping inputs to target outputs). Because mysterianist beliefs about known systems, such as ANNs, are often expressed, scientists need to sit up and take notice. We provide an error theory of what is going on to help unpick this metatheoretical blunder. Ultimately, the problem is that 'understanding' is not a technical term in these cases: the word is co-opted for a specific narrative to sell 'artificial intelligence' through mystification. All computational systems, from pendulums to databases, will behave in ways we cannot predict or control — this is not a unique property of ANNs — and experts do indeed grasp the computational properties of these systems nonetheless.
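To make the abstract's point concrete — that an ANN's mechanism is fully known because we built it — here is a minimal, hypothetical sketch (not from the paper): a tiny feedforward network with hand-chosen illustrative weights, where every intermediate quantity can be traced by hand from input to output.

```python
# A tiny two-layer feedforward network with fully known, arbitrary
# illustrative weights. Nothing here is opaque: each step is an
# explicit weighted sum and a ReLU nonlinearity.

def relu(x):
    """Rectified linear unit: max(0, x)."""
    return max(0.0, x)

def tiny_ann(x1, x2):
    # Hidden layer: two units, each a weighted sum plus bias through ReLU.
    h1 = relu(0.5 * x1 + (-0.3) * x2 + 0.1)
    h2 = relu(0.8 * x1 + 0.2 * x2 - 0.4)
    # Output layer: a linear combination of the hidden activations.
    return 1.0 * h1 + (-1.0) * h2 + 0.05

# Every intermediate value can be printed and checked by hand,
# which is exactly what "known mechanism" means here.
print(tiny_ann(1.0, 2.0))
```

Scaling this up to millions of weights changes the tractability of the bookkeeping, not the kind of mechanism involved — which is the distinction the abstract presses.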
Files

| Name | Size |
|---|---|
| GuestNuñezHernándezBlokpoel2026.pdf (md5:4512df9c7af737087fd03b2a3ad34fea) | 692.0 kB |