Published March 3, 2020 | Version: revised April 19, 2020
Video/Audio Open

From Boston to Eden - or how to get systems that are really autonomous and sufficiently intelligent to survive in their niche

  • University of Groningen, The Netherlands

Description

[lecture for the Dept. of AI, University of Groningen, Tuesday, March 3rd, 2020]

As impressive as the robots of the Boston Dynamics company are (no AI involved), and as impressive as the many results of deep learning are (no AI involved, either), the goal of creating autonomous, intelligent machines is as far away as it ever was.
In this presentation, I will give a brief overview of several deep-learning projects in our group. As a next step, I will try to indicate what may be missing with regard to 'real' AI. We may need a closer look at biological systems, i.e., animal brains. There exists a wide gap between the control systems at the low level of reflexive movement and the equilibria that need to be maintained ('Boston') and the higher levels of processing, up to the levels of cognition and reasoning, which are very much upstairs ('Eden'). The missing middleware layer should not be underestimated: it comprises the brain stem up to the thalamus in animals and humans.
It corresponds to the 300-million-year period preceding the 200-million-year period in which the neocortex has been present.
What is this middleware doing? The conclusion may be that there is no autonomy without self-protection, made possible by the presence of a separate and specialized valuation network that determines probability times utility (p*U), similar to what the brain stem, midbrain and amygdala are doing in animals.
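To make the p*U idea concrete, here is a minimal sketch (not from the lecture; the action names and numbers are hypothetical) of a valuation step in which each candidate action has an estimated outcome probability p and a utility U, and the agent selects the action that maximizes the expected utility p*U:

```python
# Minimal p*U valuation sketch. Assumptions (not from the source):
# each candidate action is described by (p, U), where p is the estimated
# probability of a successful outcome and U its utility to the agent.

def expected_utility(p: float, utility: float) -> float:
    """Expected value of an outcome: probability times utility (p*U)."""
    return p * utility

def select_action(candidates: dict[str, tuple[float, float]]) -> str:
    """Pick the action whose (p, U) pair maximizes p*U."""
    return max(candidates, key=lambda a: expected_utility(*candidates[a]))

# Hypothetical example: fleeing is likely to succeed but of modest value;
# feeding is more valuable but risky in the presence of a predator.
actions = {
    "flee":   (0.9, 5.0),  # p*U = 4.5
    "feed":   (0.4, 8.0),  # p*U = 3.2
    "freeze": (0.7, 2.0),  # p*U = 1.4
}
print(select_action(actions))  # -> flee
```

The point of the sketch is only that valuation is a separate, simple computation sitting between low-level control and high-level reasoning, in the spirit of the proposed middleware layer.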

Notes

References:
- Peter Vamplew, Richard Dazeley, Cameron Foale, Sally Firmin and Jane Mummery (2017). Human-Aligned Artificial Intelligence is a Multiobjective Problem. Ethics and Information Technology.
- Namburi et al. (2015). A Circuit Mechanism for Differentiating Positive and Negative Associations. Nature, 520(7549), 675-678. doi:10.1038/nature14366
- Related (not bio-inspired in any way, but stressing the difficulty of determining a reinforcement value in a grasping task): V. Ortenzi, M. Controzzi, F. Cini, J. Leitner, M. Bianchi, M. A. Roa and P. Corke (2019). Robotic manipulation and the role of the task in the metric of success. Nature Machine Intelligence, 1, pp. 340-346.
- Precursor paper to this lecture (we noted that learning of (convex) problems yields systems that boringly revolve around a loss minimum, with predictable behavior, but exploration is lost; continuous, autonomous reinforcement seeking would solve this): van der Zant, T., Kouw, M., Schomaker, L. (2013). Generative Artificial Intelligence. In: Müller, V. (ed.), Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol. 5. Springer, Berlin, Heidelberg. https://link.springer.com/chapter/10.1007/978-3-642-31674-6_8

Files

From-Boston-to-Eden-Lambert-Schomaker-March-2020.mp4 (494.8 MB)

Additional details

Funding

MANTIS – Cyber Physical System based Proactive Collaborative Maintenance (grant 662189), European Commission
PERICO – Peroxisome Interactions and Communication (grant 812968), European Commission