Bayes' Theorem
“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs. (Again, if this is not intuitively obvious, see An Intuitive Explanation of Bayesian Reasoning.)” (Eliezer Yudkowsky, Rationality)
^b650a6
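The quoted principle is "conservation of expected evidence": before you look, the probability-weighted average of your possible posteriors must equal your prior. A minimal sketch, with illustrative numbers (the 0.9 prior and the likelihoods are assumptions, not from the source):

```python
# Conservation of expected evidence: the expected posterior equals the prior.
prior = 0.9              # P(H): high confidence in the hypothesis
p_e_given_h = 0.8        # P(E | H): chance of a confirming outcome if H is true
p_e_given_not_h = 0.3    # P(E | ~H)

# Total probability of seeing the confirming outcome E.
p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h

# The expected success gives only a small increment; the unexpected
# failure deals a much larger blow.
posterior_if_e = prior * p_e_given_h / p_e                  # ≈ 0.96
posterior_if_not_e = prior * (1 - p_e_given_h) / (1 - p_e)  # ≈ 0.72

# Weighting each posterior by how likely it is recovers the prior exactly.
expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(round(expected_posterior, 10))  # ≈ 0.9, the prior
```

Whatever numbers are substituted, the weighted average collapses back to the prior, which is why merely anticipating evidence cannot shift belief.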
Can probabilistic thinking be a Unified theory of reasoning and belief updating? It would need to fully subsume Bayes’ Theorem and Solomonoff Induction (the latter of which may already be a Unified theory in its own right).
I believe Bayes’ Theorem is the right tool for inductive logic, because a belief is a degree of certainty.
- Bayes’ Theorem: posteriors are determined by combining prior probabilities with the likelihood of the data
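The prior-times-likelihood combination can be sketched as a single update function. The diagnostic-test numbers below are hypothetical, chosen only to show how a strong likelihood can still be dominated by a small prior:

```python
def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Return P(H | data): posterior ∝ prior × likelihood, normalized."""
    p_data = prior * p_data_given_h + (1 - prior) * p_data_given_not_h
    return prior * p_data_given_h / p_data

# Rare condition (1% prior), sensitive test (90%), 5% false-positive rate.
posterior = bayes_update(prior=0.01, p_data_given_h=0.9, p_data_given_not_h=0.05)
print(round(posterior, 3))  # ≈ 0.154: the data shifts belief, the prior anchors it
```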
We can create markers at important thresholds of belief and call them orders of magnitude of probability. Bayes’ Theorem can be used to place these markers.
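One way to make "orders of magnitude of probability" concrete (a sketch under my own assumption about what the markers mean, not a definition from the source) is to measure belief in log10 odds: each unit is one order of magnitude, and the odds form of Bayes' Theorem turns an update into simple addition:

```python
import math

def log10_odds(p):
    """Belief measured in orders of magnitude: log10 of the odds p : (1 - p)."""
    return math.log10(p / (1 - p))

# Markers: 0.5 sits at 0 magnitudes, ~0.909 at 1, ~0.990 at 2, and so on.
# Evidence with a likelihood ratio of 10 adds exactly one order of magnitude.
prior = 0.5
likelihood_ratio = 10.0
posterior_odds = (prior / (1 - prior)) * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

shift = log10_odds(posterior) - log10_odds(prior)
print(round(shift, 6))  # 1.0 — one marker crossed
```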
Bayes’ Theorem (whether applied implicitly or explicitly) is what directs “belief energy” towards true beliefs, much as a machine needs an energy input to keep running.
It may even be more than a metaphor: if Entropy is the tendency of a system to progress towards technical simplicity, then Bayes’ Theorem may be a tool for helping us see the flow of Entropy.