Bayes' Theorem
Right now, it feels to me like teaching Statistics and Probability to students who aren't using the fields professionally is less effective than teaching them Probabilistic Thinking (i.e. practical Bayes).
“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs. (Again, if this is not intuitively obvious, see An Intuitive Explanation of Bayesian Reasoning.)” (Eliezer Yudkowsky, Rationality)
^b650a6
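The quote describes what is sometimes called conservation of expected evidence. It follows directly from the law of total probability, which says the prior is the expectation of the posterior over the possible observations (the notation below is standard, not from the source):

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)$$

Because $P(H)$ is a weighted average of the two possible posteriors, a large expected upward update on seeing $E$ must be balanced by a downward update on seeing $\neg E$, weighted by how likely each outcome is.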
It is possible that science is an “approximation to some probability-theoretic ideal of rationality”.^[ Rationality, From A to Z#^9281a1] If so, science may be a social and approximate version of Bayes’ Theorem and Solomonoff Induction.
We can create markers for important shifts in belief and call them orders of magnitude of probability. Bayes’ Theorem can be used to place these markers, as sketched below.
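One concrete way to place such markers (my sketch, not from the source) is to work with Bayes’ Theorem in log-odds form, where each update adds the log of the likelihood ratio, so “orders of magnitude of probability” become literal steps of ±1 per factor-of-ten of evidence. A minimal Python sketch with hypothetical numbers:

```python
import math

def log10_odds(p):
    """Convert a probability to log-base-10 odds."""
    return math.log10(p / (1 - p))

def prob_from_log_odds(lo):
    """Convert log-base-10 odds back to a probability."""
    odds = 10 ** lo
    return odds / (1 + odds)

# Hypothetical numbers: a 1% prior, then evidence that is ten times more
# likely if the hypothesis is true than if it is false. In log-odds form,
# Bayes' Theorem says posterior log-odds = prior log-odds + log10 of the
# likelihood ratio, so a 10:1 likelihood ratio moves belief exactly one
# order of magnitude.
prior_lo = log10_odds(0.01)                 # about -2.0 (odds of 1:99)
posterior_lo = prior_lo + math.log10(10.0)  # about -1.0 (odds of 10:99)
print(round(prob_from_log_odds(posterior_lo), 4))  # ~0.0917
```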
Can Probabilistic Thinking be a Unified theory for reason and belief updating? It would need to fully subordinate Bayes’ Theorem and Solomonoff Induction (the latter of which may already be a Unified theory of its own).
- Bayes’ Theorem: posterior probabilities are determined by combining prior probabilities with the likelihood of the data (see the formula below)
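In standard notation (mine, not the source’s), with hypothesis $H$ and data $D$:

$$P(H \mid D) = \frac{P(D \mid H)\,P(H)}{P(D)}$$

The posterior $P(H \mid D)$ combines the prior $P(H)$ with the likelihood $P(D \mid H)$, normalized by the overall probability of the data, $P(D)$.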
With evidence and insight we can update those beliefs, and the mathematically accurate way to do so is Bayes’ Theorem.^[ Rationality, From A to Z#^0917ad] Even if we don’t choose to be mathematically precise when updating, it is useful to have an internalized version of Bayes’ Theorem so that we can avoid making systematically incorrect updates to our beliefs. A worked example follows.
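As an illustration of the kind of systematic error such internalization prevents (hypothetical numbers, not from the source), here is the classic diagnostic-test update in Python:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' Theorem, with P(E) expanded by the
    law of total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a condition with a 1% base rate, a test with 90%
# sensitivity and a 5% false-positive rate. A positive result only raises
# belief to roughly 15%, far below the naive "the test is 90% accurate"
# reading. Neglecting the base rate is exactly the sort of systematically
# incorrect update an internalized Bayes' Theorem guards against.
print(round(bayes_update(0.01, 0.90, 0.05), 3))  # 0.154
```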
Create a course that teaches Bayes’ Theorem and Solomonoff Induction. Integrate the course with Spaced Repetition learning.
An intuitive way to think about updating according to Bayes’ Theorem is to understand at least the following:
It may even be more than a metaphor: if we take Entropy to be the tendency of a system to progress towards technical simplicity, then Bayes’ Theorem may be a tool for helping us see the flow of Entropy.
Using Bayes’ Theorem, if two phenomena are not independent, then their evidential influence on each other is symmetric.
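This symmetry is a standard consequence of Bayes’ Theorem (notation mine): dividing both sides by $P(A)$ gives

$$\frac{P(A \mid B)}{P(A)} = \frac{P(B \mid A)}{P(B)}$$

so if $B$ raises the probability of $A$ (the left ratio exceeds 1), then $A$ raises the probability of $B$ by exactly the same factor, and likewise for lowering. Independence is precisely the case where both ratios equal 1.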
I believe Bayes’ Theorem is the tool to use for inductive logic, because A belief is a degree of certainty.
Bayes’ Theorem (whether applied implicitly or explicitly) is what directs “belief energy” towards true beliefs, much as a machine needs energy input to keep running.