Solomonoff Induction
Implications
If Solomonoff Induction allows us to find the probabilities of all possible explanations of the truth, then discovering truth has been solved at the theoretical level. It is incomputable in principle (because of the Halting problem) and intractable in practice (because of the enormous number and length of possible hypotheses). But it does provide a groundwork for eventual efficiencies to make this truth-finding computable. Universal Artificial Intelligence would be the creation of an efficient, usable truth-finding computer using some method that approximates Solomonoff Induction.
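A toy sketch of the core idea, to make the "weight by simplicity, filter by data" structure concrete. The hypothesis class here (repeating bit patterns) and the particular simplicity prior are my illustrative choices, not part of the formalism; real Solomonoff Induction runs over all programs for a universal Turing machine and is incomputable.

```python
from fractions import Fraction

def solomonoff_sketch(observed, max_len=8):
    """Toy Solomonoff-style induction: hypotheses are repeating bit
    patterns, given prior mass 2^-(2*length) so that shorter (simpler)
    patterns start with more probability. Deterministic hypotheses get
    likelihood 1 if they reproduce the data exactly, 0 otherwise."""
    posterior = {}
    for length in range(1, max_len + 1):
        for n in range(2 ** length):
            pattern = format(n, f"0{length}b")
            prior = Fraction(1, 2 ** (2 * length))  # simplicity prior
            prediction = (pattern * len(observed))[: len(observed)]
            if prediction == observed:  # Bayesian update: keep consistent hypotheses
                posterior[pattern] = prior
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

post = solomonoff_sketch("010101")
best = max(post, key=post.get)
# the shortest consistent pattern, "01", ends up with most of the mass
```

The structure mirrors the real thing: a prior that decays exponentially in description length, then Bayesian conditioning on observations; only the hypothesis space is drastically simplified.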
Universal Artificial Intelligence by Marcus Hutter provides a full formal treatment of Solomonoff Induction and its approximations
This is actually the definition for Solomonoff Induction, as it uses the more technical description. I don’t see a reason why we can’t just have Occam’s Razor take on this definition. (Perhaps I need to check if there is a technically different definition for Occam’s Razor) ^c5e171
Can Probabilistic Thinking be a Unified theory for reason and belief updating? It would need to fully subordinate Bayes’ Theorem and Solomonoff Induction (the latter of which may already be a Unified theory of its own).
Create a course that teaches Bayes’ Theorem and Solomonoff Induction. Integrate the course with Spaced Repetition learning.
“I occasionally run into people who say something like, “There’s a theoretical limit on how much you can deduce about the outside world, given a finite amount of sensory data.” Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity with which it has correlation exactly zero across the probable worlds you imagine. But nothing I’ve depicted this human civilization doing even begins to approach the theoretical limits set by the formalism of Solomonoff induction. It doesn’t approach the picture you could get if you could search through every single computable hypothesis, weighted by their simplicity, and do Bayesian updates on all of them.” (Eliezer Yudkowsky, Rationality)
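The quote's first limit can be checked numerically. If a fraction p of the remaining probability mass deterministically predicts the next bit is "1", then (drawing the bit from the mixture itself) you keep mass p with probability p and mass 1−p with probability 1−p, so the expected eliminated mass is 2p(1−p), which never exceeds 1/2. A small sketch of that check (my own illustration, not from the quoted text):

```python
def expected_eliminated_mass(p):
    """Expected fraction of probability mass eliminated by observing one
    bit, when fraction p of the mass deterministically predicts '1'."""
    keep = p * p + (1 - p) * (1 - p)  # P(bit=1)*mass kept + P(bit=0)*mass kept
    return 1 - keep  # equals 2*p*(1-p)

# scan p over [0, 1]: the worst case is 0.5, attained at p = 0.5
worst = max(expected_eliminated_mass(p / 1000) for p in range(1001))
```

So a single bit can at most halve the remaining probability mass in expectation, exactly as the quote states.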
Related to Solomonoff Induction, Limits of Knowability
It is possible that science is an “approximation to some probability-theoretic ideal of rationality”.^[ Rationality, From A to Z#^9281a1] In which case, it is possible that science is a social and approximate version of Bayes’ Theorem and Solomonoff Induction.
It’s possible the true hypothesis is incomputable. If no Turing machine can output the hypothesis, then Solomonoff Induction can at best converge toward the correct predictions without ever containing the true hypothesis. This is a problem in the same way that nothing computable can predict the output (since Turing machines, as we currently understand them, reflect the constraints of a finite universe).
Which Universal Turing Machine? There are actually infinitely many sets of rules that can simulate all other rules, any of which could be used for Solomonoff Induction. This choice affects the length of the hypotheses, and thus the probability we place on them. (Doesn’t Eliezer Yudkowsky have something to say about this? He seemed to have an explanation of Occam’s Razor that might assume a specific Universal Turing Machine.) In any case, switching between Universal Turing Machines changes a hypothesis’s length by at most a constant: the length of a compiler from one machine to the other.
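The "at most a constant" claim is the invariance theorem of Kolmogorov complexity. Writing $K_U(x)$ for the length of the shortest program on universal machine $U$ that outputs $x$:

```latex
K_U(x) \le K_V(x) + c_{U,V}
```

where $c_{U,V}$ is the length of a program on $U$ that interprets $V$ (a "compiler"), and crucially does not depend on $x$. So the choice of machine shifts all hypothesis lengths by a bounded amount, which matters less and less as the data grows.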