A Note on Learning Algorithms as Approximations of Solomonoff Induction
Abstract
We present a theoretical result showing that a learning algorithm can be seen as an approximation of Solomonoff induction. This approximation is characterised by a constant multiplicative bound between the predictive probability of the learning algorithm and the Solomonoff semi-distribution. Our main contribution is to state and prove this result concisely and in full generality, clarifying the theoretical role of algorithmic probability in practical learning. We also establish a positive correlation between the Vapnik-Chervonenkis (VC) dimension of a model class and the Kolmogorov complexity of individual hypotheses within it, reinforcing the conceptual alignment between learnability and compressibility. We contextualise these findings by discussing their implications for the MDL principle and for the emergent behaviour of large language models (LLMs), which empirically approximate algorithmic prediction. An example and comparisons with PAC learning, Bayesian inference, and reinforcement learning are provided.
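The constant multiplicative bound mentioned in the abstract is, in its standard textbook form (stated here in generic notation; the paper's exact formulation may differ), the dominance property of the Solomonoff semimeasure $M$ over any lower-semicomputable semimeasure $\mu$, such as the predictive probability induced by a learning algorithm:

```latex
% Dominance of the Solomonoff semimeasure (standard form; notation assumed).
% For every lower-semicomputable semimeasure \mu there is a constant
% c_\mu > 0 (often taken as 2^{-K(\mu)}, with K the prefix Kolmogorov
% complexity of an index for \mu) such that, for all finite strings x,
\[
  M(x) \;\geq\; c_\mu \,\mu(x), \qquad c_\mu = 2^{-K(\mu)} .
\]
% Read in the direction relevant to the abstract: a learning algorithm's
% predictive probability is within a fixed multiplicative constant of M,
% so the algorithm can be viewed as approximating Solomonoff induction.
```

The constant depends only on the description length of the algorithm, not on the data, which is what makes the approximation uniform over all inputs.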
Published
29/09/2025
How to Cite
SLAVIERO, Cleyton; HAEUSLER, Edward Hermann.
A Note on Learning Algorithms as Approximations of Solomonoff Induction. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 35., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 97-115. ISSN 2643-6264.
