An Intuitive Explanation of Solomonoff Induction

Outline

Background

Algorithms, explained

Induction, explained

Probability, explained

mentions that Bayes’ Theorem is used to update your probabilities
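
For reference, the update rule is $P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$: the prior $P(H)$ gets multiplied by how well the hypothesis predicted the evidence $E$, then rescaled by the overall probability of the evidence.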

acknowledges the difficulty of estimating probabilities

Problem of priors

priors are hard to determine, and there’s no clear strategy for choosing them

full ignorance implies a uniform (“even”) distribution, but that’s rarely a good choice in practice

mentions Maximum Entropy Principle, should look into that more

easy to choose a prior that skews your result, intentionally or not

need a method for determining priors that is always correct

Solution

Binary sequences

binary is the minimal alphabet (does this remain the case when we consider qubits? Seems like yes, since quantum computers are apparently still Turing-equivalent)

all information can be represented in binary
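
A toy illustration in Python (nothing Solomonoff-specific): any observation we can store on a computer is already a binary sequence under the hood.

```python
# Encode a text observation as a bit string via its UTF-8 bytes.
observation = "it rained today"  # hypothetical observation; any data works
bits = "".join(format(byte, "08b") for byte in observation.encode("utf-8"))
print(bits[:24])  # first 24 bits of the binary representation
```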

some nice bits (lol) that relate to A unifying language enables further discovery

All algorithms

every type of Turing machine can be simulated by a Universal Turing Machine

all algorithms can be represented as Turing machines, and every Turing machine can be represented as an input to a Universal Turing Machine.
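
A minimal sketch of the “machines as data” idea, with a made-up table format: one fixed interpreter (a crude stand-in for a Universal Turing Machine) runs whatever Turing machine is handed to it as a plain transition table.

```python
def run_tm(transitions, tape, max_steps=1000):
    """Run a Turing machine given as a dict: (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))          # sparse tape: position -> symbol, blank is "_"
    state, head = "start", 0
    for _ in range(max_steps):             # step cap, since halting is undecidable in general
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        new_symbol, move, state = transitions[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# The machine itself is just data: flip every bit, halt at the first blank cell.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_tm(flipper, "1011"))  # -> "0100"
```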

Solomonoff Induction assumes that the hypothesis that explains the data is an algorithm.

Solomonoff’s Lightsaber

Solomonoff’s Lightsaber vs Occam’s Razor (the former is more precise and complete, i.e. Technical Simplicity)

Find all the hypotheses that would predict the observation by running every possible program through the Universal Turing Machine and keeping the ones that output the observation.

This is incomputable, both because of the enormous number of possible algorithms and because of the Halting Problem (there is no general way to tell whether a given program will ever finish producing output).

In theory, you now have all possible explanatory hypotheses, each a binary sequence.

Assuming randomness (what does that really mean in this context?), each bit has a 50% chance of coming up the way it did, so the probability of each hypothesis is $(\frac{1}{2})^n$, where $n$ is the number of bits in the hypothesis. This means longer hypotheses are less likely, i.e. they require additional evidence compared to shorter hypotheses. (These probabilities are not normalized, but they are still comparable.)

This fits with Occam’s Razor

Probability of the observation alone = sum of probabilities of hypotheses that are consistent with the evidence.
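
A brute-force sketch of the whole recipe, under loud simplifications: `toy_utm` below is not a real Universal Turing Machine but a made-up four-instruction interpreter chosen so that every program halts, the enumeration stops at a small maximum length, and (following the usual convention) a program counts as consistent if its output begins with the observation. All names here are invented for illustration.

```python
from itertools import product

def toy_utm(program, max_output=1000):
    """Interpret a bit string as a program, two bits per instruction:
    00 -> emit 0, 01 -> emit 1, 10 -> repeat the output so far, 11 -> halt.
    A toy stand-in for a Universal Turing Machine, chosen so everything halts."""
    out = []
    for i in range(0, len(program) - 1, 2):
        op = program[i:i + 2]
        if op == "00":
            out.append("0")
        elif op == "01":
            out.append("1")
        elif op == "10":
            out.extend(out)
        else:                      # "11": halt
            break
        if len(out) > max_output:  # guard against runaway output
            break
    return "".join(out)

def solomonoff_weights(observation, max_len=12):
    """Try every program up to max_len bits, keep the ones whose output
    starts with the observation, and weight each by (1/2)**length."""
    weights = {}
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            program = "".join(bits)
            if toy_utm(program).startswith(observation):
                weights[program] = 0.5 ** n
    return weights

weights = solomonoff_weights("0101")
print(sum(weights.values()))  # unnormalized probability of the observation itself
```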

Formalized Science

Solomonoff Induction is the Scientific Method formalized into an algorithm.

  1. Make an observation: the observation is a binary sequence
  2. Form a hypothesis that explains the observation: use a Universal Turing Machine to find all possible hypotheses
  3. Conduct an experiment that will test the hypothesis: as more data comes in, you may be able to eliminate hypotheses as not explanatory
  4. If the hypothesis is disconfirmed, return to step 2; otherwise, provisionally accept it: instead of provisionally accepting a single hypothesis, we accept all explanatory hypotheses, each weighted by its probability (sketched just below)
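
A minimal sketch of what step 4 means operationally, with made-up numbers: each hypothesis still consistent with the data is run one step further to see which bit it predicts next, and the prediction is a weighted vote over all of them rather than a single winner.

```python
def predict_next_bit(survivors):
    """survivors: list of (predicted_next_bit, weight) pairs, one per hypothesis
    still consistent with the data. Every hypothesis votes with its
    (unnormalized) prior weight instead of one being singled out."""
    total = sum(weight for _, weight in survivors)
    return sum(weight for bit, weight in survivors if bit == "1") / total

# Hypothetical survivors: shorter hypotheses carry exponentially more weight.
survivors = [("1", 0.5 ** 4), ("1", 0.5 ** 6), ("0", 0.5 ** 9)]
print(predict_next_bit(survivors))  # probability that the next bit is a 1
```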

This finds the truth as well as is possible, avoiding bias and the need for creative leaps of discovery

Still not efficient

Approximations

technically all prediction methods are approximations of Solomonoff Induction

Use only Turing machines that halt (losing all non-halting explanations)

Use randomness and select a subset of all hypotheses, i.e. a Monte Carlo method (see the sketch below). Perhaps use an evolutionary algorithm that starts with seed hypotheses and varies the ones that get closest to the observation
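
A rough sketch of the Monte Carlo variant, again with invented names: draw random candidate programs (stopping with probability 1/2 after each bit, so long candidates are rarely drawn), keep the ones consistent with the observation, and weight by length as before. `run_program` can be any interpreter from bit strings to bit strings, e.g. the `toy_utm` toy above.

```python
import random

def sample_candidate(max_len=20):
    """Draw a random candidate program: append random bits, stopping with
    probability 1/2 after each one, so long candidates are rarely drawn."""
    bits = [random.choice("01")]
    while len(bits) < max_len and random.random() < 0.5:
        bits.append(random.choice("01"))
    return "".join(bits)

def monte_carlo_weights(observation, run_program, samples=50_000):
    """Test a random subset of programs instead of all of them, keeping the
    ones whose output starts with the observation, weighted by (1/2)**length."""
    weights = {}
    for _ in range(samples):
        program = sample_candidate()
        if program not in weights and run_program(program).startswith(observation):
            weights[program] = 0.5 ** len(program)
    return weights

# e.g. monte_carlo_weights("0101", run_program=toy_utm), reusing the toy interpreter above
```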

Jürgen Schmidhuber equates faster algorithms with higher probability, and uses that to create a very fast approximation. But the assumption that faster algorithms are more likely isn’t a given.

Unresolved Details

Which Universal Turing Machine? There are actually infinitely many sets of rules that can simulate all other rules, so which one should be used for Solomonoff Induction? The choice affects the length of the hypotheses, and thus the probability we place on them. (Doesn’t Eliezer Yudkowsky have something to say about this? He seemed to have an explanation of Occam’s Razor that might assume a specific Universal Turing Machine.) In any case, switching between Universal Turing Machines changes hypothesis lengths by no more than the length of a compiler from one machine to the other.
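
The usual way this last point is stated (the invariance theorem): for any two universal machines $U$ and $V$ there is a constant $c_{UV}$, independent of the data $x$, with

$$K_U(x) \le K_V(x) + c_{UV}$$

where $K_U(x)$ is the length of the shortest program that outputs $x$ on $U$. The constant is essentially the length of a compiler from $V$’s programs to $U$’s, so switching machines shifts hypothesis lengths by at most a fixed amount, which matters less and less as the data grows.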

It’s possible the true hypothesis is incomputable. If no Turing machine can output the hypothesis, then Solomonoff Induction can only converge toward the correct output rather than reach it. But this is a problem only in the sense that nothing else could predict the output either (since Turing machines reflect what a finite universe can compute, as we currently understand it).

Some argue the universe cannot be represented as a binary sequence (e.g. the problem of Consciousness)

“Fundamental theories have an effect of unifying previously separate ideas. After Turing discovered the basics of computability theory, it was only a matter of time before these ideas made their way across philosophy and mathematics. Statisticians could only find exact probabilities in simplified situations; now they know how to find them in all situations. Scientists wondered how to really know which hypothesis was simpler; now they can find a number. Philosophers wondered how induction could be justified. Solomonoff induction gives them an answer.”

^e24923

“Whether we are a detective trying to catch a thief, a scientist trying to discover a new physical law, or a businessman attempting to understand a recent change in demand, we are all in the process of collecting information and trying to infer the underlying causes.” (Shane Legg)

“If we construct our algorithm for truth, then we can make a computer program that finds truth—an Artificial Intelligence.” (lesswrong.com, An Intuitive Explanation of Solomonoff Induction - LessWrong)

“Having a common and simple language can sometimes be the key to progress. The ancient Greek mathematician Archimedes discovered many specific results of calculus, but could not generalize the methods because he did not have the language of calculus. After this language was developed in the late 1600s, hundreds of mathematicians were able to produce new results in the field. Now, calculus forms an important base of our modern civilization.” (lesswrong.com, An Intuitive Explanation of Solomonoff Induction - LessWrong)

Spurred a lot of my initial thoughts on A unifying language enables further discovery