Growing recursive self-improvers

Bas R. Steunebrink, Kristinn R. Thórisson, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Research into the capability of recursive self-improvement typically only considers pairs of 〈agent, self-modification candidate〉, and asks whether the agent can determine/prove if the self-modification is beneficial and safe. But this leaves out the much more important question of how to come up with a potential self-modification in the first place, as well as how to build an AI system capable of evaluating one. Here we introduce a novel class of AI systems, called experience-based AI (EXPAI), which trivializes the search for beneficial and safe self-modifications. Instead of distracting us with proof-theoretical issues, EXPAI systems force us to consider their education in order to control a system’s growth towards a robust and trustworthy, benevolent and well-behaved agent. We discuss what a practical instance of EXPAI looks like and build towards a “test theory” that allows us to gauge an agent’s level of understanding of educational material.
Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer
Pages: 129-139
Number of pages: 11
ISBN (Print): 9783319416489
DOIs
State: Published - Jan 1 2016
Externally published: Yes

Bibliographical note

Generated from Scopus record by KAUST IRTS on 2022-09-14
