Abstract
Some basic concepts of algorithmic complexity theory relevant to machine learning are reviewed, along with the Solomonoff-Levin distribution (or universal prior), which addresses the problem of assigning prior probabilities. The universal prior leads to a probabilistic method for finding algorithmically simple problem solutions with high generalization capability. The method is based on Levin complexity and inspired by Levin's optimal universal search algorithm. For a given problem, solution candidates are computed by efficient self-sizing programs that influence their own runtime and storage size. On certain toy problems where it is computationally feasible, the method can lead to unmatchable generalization results.
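A minimal sketch of a Levin-search-style schedule is shown below for orientation only: in phase i, each candidate program of length l receives a time budget proportional to 2^i · 2^(-l), so algorithmically simple candidates are tried first and given the most time. The bit-string program representation, the toy `run_program` interpreter, and the constants are hypothetical stand-ins for the paper's self-sizing programs, not the authors' implementation.

```python
from itertools import product

def run_program(bits, budget):
    """Toy interpreter (hypothetical): reads the bit string as a binary
    number, one bit per step, and returns it as the candidate 'solution',
    or None if the phase budget runs out first."""
    steps, value = 0, 0
    for b in bits:
        steps += 1
        if steps > budget:
            return None          # allotted time for this phase exhausted
        value = 2 * value + b
    return value

def levin_search(is_solution, max_phase=20, max_len=16):
    """Try programs in order of (roughly) length plus log runtime."""
    for phase in range(1, max_phase + 1):
        total = 2 ** phase                      # total budget doubles each phase
        for length in range(1, max_len + 1):
            budget = total // (2 ** length)     # share shrinks as 2**(-length)
            if budget < 1:
                break                           # longer programs wait for later phases
            for bits in product((0, 1), repeat=length):
                result = run_program(bits, budget)
                if result is not None and is_solution(result):
                    return bits, result
    return None

# Example: find a bit-string program whose output equals 42.
print(levin_search(lambda x: x == 42))
```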
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 857-873 |
| Number of pages | 17 |
| Journal | Neural Networks |
| Volume | 10 |
| Issue number | 5 |
| DOIs | |
| State | Published - Jan 1 1997 |
| Externally published | Yes |
Bibliographical note
Generated from Scopus record by KAUST IRTS on 2022-09-14
ASJC Scopus subject areas
- Artificial Intelligence
- Cognitive Neuroscience