Video/Audio Open Access

From Boston to Eden - or how to get systems that are really autonomous and sufficiently intelligent to survive in their niche

Schomaker, Lambert

[lecture for the Dept. of AI, University of Groningen, Tuesday, March 3rd, 2020]

As impressive as the robots of the Boston Dynamics company are (no AI involved), and as impressive as the many results of deep learning are (no AI involved, either), the goal of creating autonomous, intelligent machines is as far away as it ever was.
In this presentation, I will give a brief overview of several deep-learning projects in our group. As a next step, I will try to indicate
what may be missing with regard to 'real' AI. We may need a closer look at biological systems, i.e., the brains of animals. There exists a wide gap between the control systems at the low level of reflexive movement and the equilibria that need to be maintained ('Boston'), and the higher levels of processing, up to cognition and reasoning, which reside very much upstairs ('Eden'). The missing middleware layer should not be underestimated: in animals and humans it comprises the brain stem, up to the thalamus.
It corresponds to the 300-million-year evolutionary period that preceded the 200-million-year period in which the neocortex was present.
What is this middleware doing? The conclusion may be that there is no autonomy without self-protection, made possible by the presence of a separate and specialized valuation network that computes probability times utility (p*U), similar to what the brain stem, midbrain and amygdala are doing in animals.
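The p*U idea above can be illustrated with a minimal sketch. The function names, candidate actions, and numbers below are purely hypothetical assumptions for illustration: a separate valuation module scores each candidate action by its expected utility (probability of success times utility) and the agent selects the maximizing action.

```python
# Hypothetical sketch of a p*U valuation module, as suggested in the
# abstract. Action names and (p, U) values are illustrative assumptions,
# not part of the lecture material.

def expected_utility(p_success: float, utility: float) -> float:
    """Expected value of an action: probability of success times utility."""
    return p_success * utility

def select_action(candidates: dict) -> str:
    """Pick the action with the highest p * U score."""
    return max(candidates, key=lambda a: expected_utility(*candidates[a]))

# Illustrative self-protection options for an autonomous agent:
options = {
    "flee":   (0.90, 5.0),   # likely to succeed, modest payoff: 4.50
    "fight":  (0.30, 20.0),  # risky but high payoff:            6.00
    "freeze": (0.99, 1.0),   # almost certain, low payoff:       0.99
}
print(select_action(options))  # prints "fight" (0.30 * 20.0 = 6.0 is maximal)
```

The point of the separate valuation network, on this sketch, is that the scoring of outcomes is decoupled from the controllers that execute the chosen action.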

References

Peter Vamplew, Richard Dazeley, Cameron Foale, Sally Firmin and Jane Mummery (2017). Human-Aligned Artificial Intelligence is a Multiobjective Problem. Ethics and Information Technology.

Namburi et al. (2015). A Circuit Mechanism for Differentiating Positive and Negative Associations. Nature, 520(7549), pp. 675-678. doi:10.1038/nature14366

Related (not bio-inspired in any way, but stressing the difficulty of determining a reinforcement value in a grasping task): V. Ortenzi, M. Controzzi, F. Cini, J. Leitner, M. Bianchi, M. A. Roa and P. Corke (2019). Robotic manipulation and the role of the task in the metric of success. Nature Machine Intelligence, 1, pp. 340-346.

Precursor paper to this lecture (we noted that learning of convex problems yields systems that boringly revolve around a loss minimum, with predictable behavior, but exploration is lost; continuous, autonomous reinforcement seeking would solve this): van der Zant, T., Kouw, M., Schomaker, L. (2013). Generative Artificial Intelligence. In: Müller, V. (ed.), Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 5. Springer, Berlin, Heidelberg.