Rationality, From A to Z

Finished reading in 2020/04

The Book in 3 Sentences

  1. A sweeping tome of ideas about ways to improve the way we think.
  2. Strong focus on how to think about the world in a more scientific and probabilistic way.
  3. Pushes one to rethink what is knowable.

Impressions

This was an extraordinarily long read, but it contained a lot of super interesting insights. It's hard to condense my impressions of a book this sprawling into anything contained…

Who Should Read It?

Folks who want to engage with the way they think.

Quotes

Oh, but arguing the real question would require work. You’d have to actually watch the wiggin to see if he reached for the ketchup. Or maybe see if you can find statistics on how many green-eyed black-haired people actually like ketchup. At any rate, you wouldn’t be able to do it sitting in your living room with your eyes closed. And people are lazy. They’d rather argue “by definition,” especially since they think “you can define a word any way you like.”

There are various other games you can also play with certainty effects. For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they’ll usually take the $400. But if you ask people to imagine themselves $500 richer, and ask if they would prefer a certain loss of $100 or a 20% chance of losing $200, they’ll usually take the chance of losing $200. Same probability distribution over outcomes, different descriptions, different choices. (Eliezer Yudkowsky, Rationality)
^163113
Related to Dutch books
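
To make “same probability distribution, different choices” concrete, here is a minimal Python sketch (my own illustration, not from the book) computing the outcome distributions of both framings:

```python
# Both framings describe identical distributions over final wealth,
# yet experimental subjects choose differently between them.

# Framing 1: certain $400, vs. 80% chance of $500 / 20% chance of $300.
framing_1 = {"certain": {400: 1.0}, "gamble": {500: 0.8, 300: 0.2}}

# Framing 2: imagine yourself $500 richer, then a certain loss of $100,
# vs. a 20% chance of losing $200 (and an 80% chance of losing nothing).
framing_2 = {"certain": {500 - 100: 1.0}, "gamble": {500 - 200: 0.2, 500: 0.8}}

for name, options in [("framing 1", framing_1), ("framing 2", framing_2)]:
    for label, dist in sorted(options.items()):
        ev = sum(outcome * p for outcome, p in dist.items())
        print(name, label, dict(sorted(dist.items())), "EV =", ev)
# The printed distributions match pairwise; only the description changed.
```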

Not every change is an improvement, but every improvement is necessarily a change. That which you want to do better, you have no choice but to do differently. (Eliezer Yudkowsky, Rationality)
^16cded

If one does not quite understand that power which put footprints on the Moon, nonetheless, the footprints are still there—real footprints, on a real Moon, put there by a real power. If one were to understand deeply enough, one could create and shape that power. Intelligence is as real as electricity. It’s merely far more powerful, far more dangerous, has far deeper implications for the unfolding story of life in the universe—and it’s a tiny little bit harder to figure out how to build a generator. (Eliezer Yudkowsky, Rationality)

^6bb039

When you see people insisting that no amount of money whatsoever is worth a single human life, and then driving an extra mile to save $10; or when you see people insisting that no amount of money is worth a decrement of health, and then choosing the cheapest health insurance available; then you don’t think that their protestations reveal some deep truth about incommensurable utilities. (Eliezer Yudkowsky, Rationality)

^1239ba

Perhaps the machinery is evolutionarily optimized to purposes that actively oppose epistemic accuracy; for example, the machinery to win arguments in adaptive political contexts. Or the selection pressure ran skew to epistemic accuracy; for example, believing what others believe, to get along socially. (Eliezer Yudkowsky, Rationality)

id:: 62428739-1bf9-4b74-bb17-58e350cafca2

I fear that Traditional Rationality does not properly sensitize its users to the difference between forward flow and backward flow. In Traditional Rationality, there is nothing wrong with the scientist who arrives at a pet hypothesis and then sets out to find an experiment that proves it. A Traditional Rationalist would look at this approvingly, and say, “This pride is the engine that drives Science forward.” Well, it is the engine that drives Science forward. It is easier to find a prosecutor and defender biased in opposite directions, than to find a single unbiased human. But just because everyone does something, doesn’t make it okay. It would be better yet if the scientist, arriving at a pet hypothesis, set out to test that hypothesis for the sake of curiosity—creating experiments that would drive their own beliefs in an unknown direction. (Eliezer Yudkowsky, Rationality)

“Each piece of evidence shifts your beliefs by exactly the right amount, neither more nor less. What is exactly the right amount? To calculate this you must study probability theory.” (Eliezer Yudkowsky, Rationality)

^0917ad

id:: 62428739-b960-40ce-83c1-6bf04a1f5d95

“I have lost track of how many times I have heard people say, “Intelligence is an emergent phenomenon!” as if that explained intelligence. This usage fits all the checklist items for a mysterious answer to a mysterious question. What do you know, after you have said that intelligence is “emergent”? You can make no new predictions. You do not know anything about the behavior of real-world minds that you did not know before. It feels like you believe a new fact, but you don’t anticipate any different outcomes. Your curiosity feels sated, but it has not been fed. The hypothesis has no moving parts—there’s no detailed internal model to manipulate. Those who proffer the hypothesis of “emergence” confess their ignorance of the internals, and take pride in it; they contrast the science of “emergence” to other sciences merely mundane.” (Eliezer Yudkowsky, Rationality)

“To be clever in argument is not rationality but rationalization. Intelligence, to be useful, must be used for something other than defeating itself.” (Eliezer Yudkowsky, Rationality)

“What is above all needed is to let the meaning choose the word, and not the other way around. In prose, the worst thing one can do with words is surrender to them. When you think of a concrete object, you think wordlessly, and then, if you want to describe the thing you have been visualising you probably hunt about until you find the exact words that seem to fit it. When you think of something abstract you are more inclined to use words from the start, and unless you make a conscious effort to prevent it, the existing dialect will come rushing in and do the job for you, at the expense of blurring or even changing your meaning. Probably it is better to put off using words as long as possible and get one’s meaning as clear as one can through pictures and sensations.” (Eliezer Yudkowsky, Rationality)

^0af08f
Relevant for Use intentionally ambiguous naming to avoid restricting growing or uncertain ideas

“When human lives are at stake, we have a duty to maximize, not satisfice; and this duty has the same strength as the original duty to save lives. Whoever knowingly chooses to save one life, when they could have saved two—to say nothing of a thousand lives, or a world—they have damned themselves as thoroughly as any murderer.” (Eliezer Yudkowsky, Rationality)

^32bf5c

“But if you ask about greatness in the sense of revealed virtue, then someone who would risk their life to save only three lives reveals more courage than someone who would risk their life to save two hundred but not three.” (Eliezer Yudkowsky, Rationality)

An interesting point here about Revealed Virtue: more obviously clear moral scenarios “reveal less virtue”.

“We read history but we don’t live it, we don’t experience it. If only I had personally postulated astrological mysteries and then discovered Newtonian mechanics, postulated alchemical mysteries and then discovered chemistry, postulated vitalistic mysteries and then discovered biology. I would have thought of my Mysterious Answer and said to myself: No way am I falling for that again.” (Eliezer Yudkowsky, Rationality)

Pretend like you have lived history, so that you can realize the flaws in prior ways of thinking and not fall for those same mistakes again. I also like the idea of Mysterious Answer and trying to understand when you’re falling for one.

“Some things are worth dying for. Yes, really! And if we can’t get comfortable with admitting it and hearing others say it, then we’re going to have trouble caring enough—as well as coordinating enough—to put some effort into group projects. You’ve got to teach both sides of it, “That which can be destroyed by the truth should be,” and “That which the truth nourishes should thrive.”” (Eliezer Yudkowsky, Rationality)

Potentially usable phrases: That which can be destroyed by the truth should be and That which the truth nourishes should thrive.

“If you discount all harm done by the Catholic Church, and look only at the good . . . then does the average Catholic do more gross good than the average atheist, just by virtue of being more active? Perhaps if you are wiser but less motivated, you can search out interventions of high efficiency and purchase utilons on the cheap . . . But there are few of us who really do that, as opposed to planning to do it someday.” (Eliezer Yudkowsky, Rationality)

This is mostly interesting because of the idea of a “Utilitarian currency”, the Utilon.

“I think the most important lesson to take away from Asch’s experiments is to distinguish “expressing concern” from “disagreement.” Raising a point that others haven’t voiced is not a promise to disagree with the group at the end of its discussion.” (Eliezer Yudkowsky, Rationality)

A good distinction between Concern vs Disagreement.

“I have already remarked that nothing is inherently mysterious—nothing that actually exists, that is. If I am ignorant about a phenomenon, that is a fact about my state of mind, not a fact about the phenomenon; to worship a phenomenon because it seems so wonderfully mysterious is to worship your own ignorance; a blank map does not correspond to a blank territory, it is just somewhere we haven’t visited yet, etc., etc. . . . Which is to say that everything—everything that actually exists—is liable to end up in “the dull catalogue of common things,” sooner or later.” (Eliezer Yudkowsky, Rationality)

Some good thoughts here regarding Mysterious Answer and Map and Territory

“Death. Complete the pattern: “Death gives meaning to life.” It’s frustrating, talking to good and decent folk—people who would never in a thousand years spontaneously think of wiping out the human species—raising the topic of existential risk, and hearing them say, “Well, maybe the human species doesn’t deserve to survive.” They would never in a thousand years shoot their own child, who is a part of the human species, but the brain completes the pattern.” (Eliezer Yudkowsky, Rationality)

Relevant ideas here are We seek to complete patterns and Death

“Yes, people are sometimes limited in their ability to trade time for money (underemployed), so that it is better for them if they can directly donate that which they would usually trade for money. If the soup kitchen needed a lawyer, and the lawyer donated a large contiguous high-priority block of lawyering, then that sort of volunteering makes sense—that’s the same specialized capability the lawyer ordinarily trades for money. But “volunteering” just one hour of legal work, constantly delayed, spread across three weeks in casual minutes between other jobs? This is not the way something gets done when anyone actually cares about it, or to state it near-equivalently, when money is involved.” (Eliezer Yudkowsky, Rationality)

Relevant ideas here are Specialized volunteering.

“But if you want to know why I might be reluctant to extend the graph of biological and economic growth over time, into the future and over the horizon of an AI that thinks at transistor speeds and invents self-replicating molecular nanofactories and improves its own source code, then there is my reason: you are drawing the wrong graph, and it should be optimization power in versus optimized product out, not optimized product versus time.” (Eliezer Yudkowsky, Rationality)

Rethinking optimization over time to optimization over “optimization power in” is interesting. Relates to Progress as optimization and Societal optimization power changes with time. Optimization power in versus optimized product out may be an argument against Inclinism.

“There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.” (Eliezer Yudkowsky, Rationality)

Great description of Map and Territory.

“In fact, it seems to me that to prevent public misunderstanding, maybe scientists should go around saying “We are not INFINITELY certain” rather than “We are not certain.” For the latter case, in ordinary discourse, suggests you know some specific reason for doubt.” (Eliezer Yudkowsky, Rationality)

Discussions of certainty are difficult and 100% certainty is infinite certainty

“We attribute our own actions to our situations, seeing our behaviors as perfectly normal responses to experience. But when someone else kicks a vending machine, we don’t see their past history trailing behind them in the air. We just see the kick, for no reason we know about, and we think this must be a naturally angry person—since they lashed out without any provocation.” (Eliezer Yudkowsky, Rationality)

Fundamental attribution error

“I used to be very confused about metaethics. After my confusion finally cleared up, I did a postmortem on my previous thoughts. I found that my object-level moral reasoning had been valuable and my meta-level moral reasoning had been worse than useless. And this appears to be a general syndrome—people do much better when discussing whether torture is good or bad than when they discuss the meaning of “good” and “bad.” Thus, I deem it prudent to keep moral discussions on the object level wherever I possibly can.” (Eliezer Yudkowsky, Rationality)

^eedaa7

Keep moral discussions on the object level

“But one of the primary lessons of this gigantic list is that saying “There’s no way my choice of X can be ‘wrong’” is nearly always an error in practice, whatever the theory. You can always be wrong. Even when it’s theoretically impossible to be wrong, you can still be wrong. There is never a Get Out of Jail Free card for anything you do. That’s life.” (Eliezer Yudkowsky, Rationality)

Relates to 100% certainty is infinite certainty.

“We can’t relax our grip on the future—let go of the steering wheel—and still end up with anything of value.” (Eliezer Yudkowsky, Rationality)

Relates to Entropy.

““A witty saying proves nothing,” as Voltaire said.” (Eliezer Yudkowsky, Rationality)

“And the experimental results on the field as a whole are commensurate. Yes, patients who see psychotherapists have been known to get better faster than patients who simply do nothing. But there is no statistically discernible difference between the many schools of psychotherapy. There is no discernible gain from years of expertise. And there’s also no discernible difference between seeing a psychotherapist and spending the same amount of time talking to a randomly selected college professor from another field. It’s just talking to anyone that helps you get better, apparently. In the entire absence of the slightest experimental evidence for their effectiveness, psychotherapists became licensed by states, their testimony accepted in court, their teaching schools accredited, and their bills paid by health insurance.” (Eliezer Yudkowsky, Rationality)

An interesting claim about psychotherapy. Is psychotherapy any better than talking to a random person?

“I occasionally run into people who say something like, “There’s a theoretical limit on how much you can deduce about the outside world, given a finite amount of sensory data.” Yes. There is. The theoretical limit is that every time you see 1 additional bit, it cannot be expected to eliminate more than half of the remaining hypotheses (half the remaining probability mass, rather). And that a redundant message cannot convey more information than the compressed version of itself. Nor can a bit convey any information about a quantity with which it has correlation exactly zero across the probable worlds you imagine. But nothing I’ve depicted this human civilization doing even begins to approach the theoretical limits set by the formalism of Solomonoff induction. It doesn’t approach the picture you could get if you could search through every single computable hypothesis, weighted by their simplicity, and do Bayesian updates on all of them.” (Eliezer Yudkowsky, Rationality)

Related to Solomonoff Induction, Limits of Knowability

“(I digress here to remark that the symmetry of the expression for the mutual information shows that Y must tell us as much about Z, on average, as Z tells us about Y. I leave it as an exercise to the reader to reconcile this with anything they were taught in logic class about how, if all ravens are black, being allowed to reason Raven(x) ⇒ Black(x) doesn’t mean you’re allowed to reason Black(x) ⇒ Raven(x). How different seem the symmetrical probability flows of the Bayesian, from the sharp lurches of logic—even though the latter is just a degenerate case of the former.)” (Eliezer Yudkowsky, Rationality)

^27ecf7

This is an example of Full Logic, and Evidence is symmetric
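
A small Python sketch (my own, not from the book) of the symmetry claim: computing the mutual information two different ways, as H(Z) − H(Z|Y) and as H(Y) − H(Y|Z), gives the same number. The toy joint distribution is made up.

```python
import math

# Toy joint distribution P(Y=y, Z=z) over two binary variables.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
p_y = {0: 0.5, 1: 0.5}   # marginals, summed from the joint above
p_z = {0: 0.6, 1: 0.4}

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def conditional_entropy(p_given, given_axis):
    # H(other | given); given_axis is the position of the conditioning
    # variable in the joint's (y, z) key tuples.
    total = 0.0
    for g, p_g in p_given.items():
        cond = {}
        for t in (0, 1):
            key = (g, t) if given_axis == 0 else (t, g)
            cond[t] = joint[key] / p_g
        total += p_g * entropy(cond)
    return total

print(entropy(p_z) - conditional_entropy(p_y, 0))  # I(Y;Z): what Y says about Z
print(entropy(p_y) - conditional_entropy(p_z, 1))  # I(Z;Y): same value, ~0.1246
```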

“Modest demeanors are cheap. Humble admissions of doubt are cheap. I’ve known too many people who, presented with a counterargument, say, “I am but a fallible mortal, of course I could be wrong,” and then go on to do exactly what they had planned to do previously.” (Eliezer Yudkowsky, Rationality)

Related to Humility in beliefs

“There is an old Jewish joke: During Yom Kippur, the rabbi is seized by a sudden wave of guilt, and prostrates himself and cries, “God, I am nothing before you!” The cantor is likewise seized by guilt, and cries, “God, I am nothing before you!” Seeing this, the janitor at the back of the synagogue prostrates himself and cries, “God, I am nothing before you!” And the rabbi nudges the cantor and whispers, “Look who thinks he’s nothing.”” (Eliezer Yudkowsky, Rationality)

#funny and a good example of The Most Modest

“The rule that “absence of evidence is evidence of absence” is a special case of a more general law, which I would name Conservation of Expected Evidence: The expectation of the posterior probability, after viewing the evidence, must equal the prior probability.

P(H) = P(H,E) + P(H,¬E)
P(H) = P(H|E) × P(E) + P(H|¬E) × P(¬E)

Therefore, for every expectation of evidence, there is an equal and opposite expectation of counterevidence.” (Eliezer Yudkowsky, Rationality)

Conservation of Expected Evidence is a good term, worth fleshing this out at some point probably.
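
A quick numeric check of the law, with made-up numbers of my own (not from the book): start from a prior and two likelihoods, compute both possible posteriors, and the probability-weighted average of the posteriors lands exactly back on the prior.

```python
prior_h = 0.6                        # P(H)
p_e_given_h, p_e_given_not_h = 0.8, 0.3

p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)   # P(E) = 0.6
post_if_e = p_e_given_h * prior_h / p_e                          # P(H|E) = 0.8
post_if_not_e = (1 - p_e_given_h) * prior_h / (1 - p_e)          # P(H|¬E) = 0.3

expected_posterior = post_if_e * p_e + post_if_not_e * (1 - p_e)
print(expected_posterior)  # 0.6, exactly the prior: no free expected evidence
```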

“The final upshot is that Science is not easily reconciled with probability theory. If you do a probability-theoretic calculation correctly, you’re going to get the rational answer. Science doesn’t trust your rationality, and it doesn’t rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.” (Eliezer Yudkowsky, Rationality)

Relevant for Probabilistic thinking is prediction, science is verification

“When you argue a seemingly factual question, always keep in mind which difference of anticipation you are arguing about. If you can’t find the difference of anticipation, you’re probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network. If you don’t know what experiences are implied by Wulky Wilkinsen being a post-utopian, you can go on arguing forever.” (Eliezer Yudkowsky, Rationality)

Relevant for Two types of arguments

“Your trust will not break, until you apply all that you have learned here and from other books, and take it as far as you can go, and find that this too fails you—that you have still been a fool, and no one warned you against it—that all the most important parts were left out of the guidance you received—that some of the most precious ideals you followed steered you in the wrong direction— —and if you still have something to protect, so that you must keep going, and cannot resign and wisely acknowledge the limitations of rationality— —then you will be ready to start your journey as a rationalist. To take sole responsibility, to live without any trustworthy defenses, and to forge a higher Art than the one you were once taught.” (Eliezer Yudkowsky, Rationality)

Peak Eliezer Yudkowsky poetry.

“If you only try to do what seems humanly possible, you will ask too little of yourself. When you imagine reaching up to some higher and inconvenient goal, all the convenient reasons why it is “not possible” leap readily to mind. The most important role models are dreams: they come from within ourselves. To dream of anything less than what you conceive to be perfection is to draw on less than the full power of the part of yourself that dreams.” (Eliezer Yudkowsky, Rationality)

^47d21b

Some great Eliezer Yudkowsky poetry.

“Of course, it is a severe error to say that a phenomenon is precise or vague, a case of what Jaynes calls the Mind Projection Fallacy.7 Precision or vagueness is a property of maps, not territories. Rather we should ask if the price in the supermarket stays constant or shifts about. A hypothesis of the “vague” sort is a good description of a price that shifts about. A precise map will suit a constant territory.” (Eliezer Yudkowsky, Rationality)

^74858f

Relates to Mind Projection Fallacy.

“If for many years you practice the techniques and submit yourself to strict constraints, it may be that you will glimpse the center. Then you will see how all techniques are one technique, and you will move correctly without feeling constrained. Musashi wrote: “When you appreciate the power of nature, knowing the rhythm of any situation, you will be able to hit the enemy naturally and strike naturally. All this is the Way of the Void.”” (Eliezer Yudkowsky, Rationality)

Rationality like a martial art

“In all human history, every great leap forward has been driven by a new clarity of thought. Except for a few natural catastrophes, every great woe has been driven by a stupidity. Our last enemy is ourselves; and this is a war, and we are soldiers.” (Eliezer Yudkowsky, Rationality)

I like the term A Clarity of Thought.

“(One of the key Rules For Doing The Impossible is that, if you can state exactly why something is impossible, you are often close to a solution.)” (Eliezer Yudkowsky, Rationality)

If you can state why something is impossible, you are often close to a solution

“But uncertainty exists in the map, not in the territory. If we are ignorant of a phenomenon, that is a fact about our state of mind, not a fact about the phenomenon itself. Empirical uncertainty, logical uncertainty, and indexical uncertainty are just names for our own bewilderment. The best current guess is that the world is math and the math is perfectly regular. The messiness is only in the eye of the beholder.” (Eliezer Yudkowsky, Rationality)

“This may seem like an obvious point, if you’ve been following Overcoming Bias this whole time; but if you look at Shane Legg’s collection of 71 definitions of intelligence, you’ll see that “squeezing the future into a constrained region” is a less obvious reply than it seems.” (Eliezer Yudkowsky, Rationality)

Relevant for Squeezing the future into a constrained region

“Whatever value is worth thinking about at all must be worth trading off against all other values worth thinking about, because thought itself is a limited resource that must be traded off. When you reveal a value, you reveal a utility.” (Eliezer Yudkowsky, Rationality)

Opportunity cost is a real cost, and Attention has an opportunity cost

“Fallacies of compression also underlie the bait-and-switch technique in philosophy—you argue about “consciousness” under one definition (like the ability to think about thinking) and then apply the conclusions to “consciousness” under a different definition (like subjectivity). Of course it may be that the two are the same thing, but if so, genuinely understanding this fact would require first a conceptual split and then a genius stroke of reunification.” (Eliezer Yudkowsky, Rationality)

Keep concepts stable and Fallacy of compression

“But suppose you lack the knowledge to so tightly bind together the levels of your map. For example, you could have a “hand scanner” that showed a “hand” as a dot on a map (like an old-fashioned radar display), and similar scanners for fingers/thumbs/palms; then you would see a cluster of dots around the hand, but you would be able to imagine the hand-dot moving off from the others. So, even though the physical reality of the hand (that is, the thing the dot corresponds to) was identical with / strictly composed of the physical realities of the fingers and thumb and palm, you would not be able to see this fact; even if someone told you, or you guessed from the correspondence of the dots, you would only know the fact of reduction, not see it. You would still be able to imagine the hand dot moving around independently, even though, if the physical makeup of the sensors were held constant, it would be physically impossible for this to actually happen.” (Eliezer Yudkowsky, Rationality)

^5db241

Description of the Different levels of concept-space

“I once said to a friend that I suspected the happiness of stupidity was greatly overrated. And she shook her head seriously, and said, “No, it’s not; it’s really not.” Maybe there are stupid happy people out there. Maybe they are happier than you are. And life isn’t fair, and you won’t become happier by being jealous of what you can’t have. I suspect the vast majority of Overcoming Bias readers could not achieve the “happiness of stupidity” if they tried. That way is closed to you. You can never achieve that degree of ignorance, you cannot forget what you know, you cannot unsee what you see.” (Eliezer Yudkowsky, Rationality)

Happiness of stupidity is a tough phrase. While there is a point to it, it is arrogant.

““The end does not justify the means” is just consequentialist reasoning at one meta-level up. If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn’t think this way. But it is all still ultimately consequentialism. It’s just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware.” (Eliezer Yudkowsky, Rationality)

Relevant for Consequentialist thinking is already common

“Contrast this probabilistic situation to the qualitative reasoning where I just believe that snow is white, and believe that I believe that snow is white, and believe “‘snow is white’ is true,” and believe “my belief ‘“snow is white” is true’ is correct,” etc. Since all the quantities involved are 1, it’s easy to mix them up. Yet the nice distinctions of quantitative reasoning will be short-circuited if you start thinking “‘“snow is white” with 70% probability’ is true,” which is a type error. It is a true fact about you, that you believe “70% probability: ‘snow is white’”; but that does not mean the probability assignment itself can possibly be “true.” The belief scores either -0.51 bits or -1.73 bits of accuracy, depending on the actual state of reality.” (Eliezer Yudkowsky, Rationality)

Probabilistic thinking helps avoid type errors
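
The bit scores in the quote are just log-base-2 of the probability assigned to what actually happened; a two-line check (my own):

```python
import math

p = 0.7                  # probability assigned to "snow is white"
print(math.log2(p))      # about -0.51 bits, if snow is in fact white
print(math.log2(1 - p))  # about -1.74 bits, if not (the quote truncates to -1.73)
```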

“There is a shattering truth, so surprising and terrifying that people resist the implications with all their strength. Yet there are a lonely few with the courage to accept this satori. Here is wisdom, if you would be wise: Since the beginning Not one unusual thing Has ever happened.” (Eliezer Yudkowsky, Rationality)

Seeking the poetry again, but a good reminder that Only maps make territories surprising

“I am not the kind of straw Bayesian who says that you should make up probabilities to avoid being subject to Dutch books. I am the sort of Bayesian who says that in practice, humans end up subject to Dutch books because they aren’t powerful enough to avoid them; and moreover it’s more important to catch the ball than to avoid Dutch books. The math is like underlying physics, inescapably governing, but too expensive to calculate.” (Eliezer Yudkowsky, Rationality)

Dutch books and Probabilistic thinking is to logic what physics is to our observations.
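
For reference, a minimal sketch (mine, not the book's) of what a Dutch book is: if your stated probabilities for A and not-A sum past 1, a bookie selling you a $1 ticket on each at those prices profits whatever happens.

```python
price_a, price_not_a = 0.6, 0.6   # incoherent: the "probabilities" sum to 1.2

stake = price_a + price_not_a      # you pay $1.20 for the pair of tickets
for a_happens in (True, False):
    payout = 1.0                   # exactly one of the two tickets pays $1
    print("A" if a_happens else "not-A", "-> your net:", round(payout - stake, 2))
# Both branches print -0.2: a guaranteed loss, regardless of the outcome.
```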

“But the wonderful thing about unanswerable questions is that they are always solvable, at least in my experience. What went through Queen Elizabeth I’s mind, first thing in the morning, as she woke up on her fortieth birthday? As I can easily imagine answers to this question, I can readily see that I may never be able to actually answer it, the true information having been lost in time. On the other hand, “Why does anything exist at all?” seems so absolutely impossible that I can infer that I am just confused, one way or another, and the truth probably isn’t all that complicated in an absolute sense, and once the confusion goes away I’ll be able to see it. This may seem counterintuitive if you’ve never solved an unanswerable question, but I assure you that it is how these things work. Coming next: a simple trick for handling “wrong questions.”” (Eliezer Yudkowsky, Rationality)

Difference between unknowable and solvable and Confusion is a missing map

“Historically speaking, science won because it displayed greater raw strength in the form of technology, not because science sounded more reasonable. To this very day, magic and scripture still sound more reasonable to untrained ears than science. That is why there is continuous social tension between the belief systems. If science not only worked better than magic, but also sounded more intuitively reasonable, it would have won entirely by now.” (Eliezer Yudkowsky, Rationality)

Presents the question: how can we make An Intuitive Science?

“By far the best definition I’ve ever heard of the supernatural is Richard Carrier’s: A “supernatural” explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.” (Eliezer Yudkowsky, Rationality)

A nice definition of Magic

“At most we might find it worthwhile to distinguish between directly reproductive organs and indirectly reproductive organs.” (Eliezer Yudkowsky, Rationality)

From his good point that All organs are reproductive. Fun.

“But to praise evolution too highly destroys the real wonder, which is not how well evolution designs things, but that a naturally occurring process manages to design anything at all.” (Eliezer Yudkowsky, Rationality)

“I have touched before on the idea that a rationalist must have something they value more than “rationality”: the Art must have a purpose other than itself, or it collapses into infinite recursion. But do not mistake me, and think I am advocating that rationalists should pick out a nice altruistic cause, by way of having something to do, because rationality isn’t all that important by itself. No. I am asking: Where do rationalists come from? How do we acquire our powers? It is written in Twelve Virtues of Rationality:” (Eliezer Yudkowsky, Rationality)

Always have a goal other than process improvement

“What would convince me that 2 + 2 = 3, in other words, is exactly the same kind of evidence that currently convinces me that 2 + 2 = 4: The evidential crossfire of physical observation, mental visualization, and social agreement.” (Eliezer Yudkowsky, Rationality)

^3c3d73

The “crossfire” is an interesting one. What are we convinced by?

“If you saw a machine continually spinning a wheel, apparently without being plugged into a wall outlet or any other source of power, then you would look for a hidden battery, or a nearby broadcast power source—something to explain the work being done, without violating the laws of physics. So if a mind is arriving at true beliefs, and we assume that the Second Law of Thermodynamics has not been violated, that mind must be doing something at least vaguely Bayesian—at least one process with a sort-of Bayesian structure somewhere—or it couldn’t possibly work.” (Eliezer Yudkowsky, Rationality)

True beliefs require directed input

“Individual organisms are best thought of as adaptation-executers, not fitness-maximizers.” (Eliezer Yudkowsky, Rationality)

Useful differentiation between an organism executing its adaptation (sometimes to its own demise) and an organism actually making “choices” in consideration of its own fitness. For example, we’ve adapted to like sugar, but now we are just executing that adaptation rather than “choosing to eat sugar because it improves our fitness.” Organisms are adaptation executers, not fitness maximizers

id:: 62428739-edef-4db1-af64-6e1ba3d93bbc

“When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other.” (Eliezer Yudkowsky, Rationality)

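A small sketch (my own) of the log-odds claim: equal distances in log odds correspond to equal amounts of evidence, measured in bits of likelihood ratio.

```python
import math

def log_odds(p):
    """Probability -> log2 odds; each bit of evidence adds 1 to this value."""
    return math.log2(p / (1 - p))

# 50% -> 80% and 80% -> ~94.1% are the same "distance": 2 bits of evidence each.
for p in (0.5, 0.8, 0.941):
    print(p, round(log_odds(p), 2))   # prints 0.0, 2.0, ~4.0
```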

“I also realized that if I had actually experienced the past—if I had lived through past scientific revolutions myself, rather than reading about them in history books—I probably would not have made the same mistake again. I would not have come up with another mysterious answer; the first thousand lessons would have hammered home the moral. So (I thought), to feel sufficiently the force of history, I should try to approximate the thoughts of an Eliezer who had lived through history—I should try to think as if everything I read about in history books had actually happened to me. (With appropriate reweighting for the availability bias of history books—I should remember being a thousand peasants for every ruler.) I should immerse myself in history, imagine living through eras I only saw as ink on paper.” (Eliezer Yudkowsky, Rationality)

Relevant to Mysterious Answer and Appreciate lessons of history by imagining you lived through them

“Suppose that the Mind Projection Fallacy was not a fallacy, but simply true. Suppose that a 747 had a fundamental physical existence apart from the quarks making up the 747. What experimental observations would you expect to make, if you found yourself in such a universe? If you can’t come up with a good answer to that, it’s not observation that’s ruling out “non-reductionist” beliefs, but a priori logical incoherence. If you can’t say what predictions the “non-reductionist” model makes, how can you say that experimental evidence rules it out? My thesis is that non-reductionism is a confusion; and once you realize that an idea is a confusion, it becomes a tad difficult to envision what the universe would look like if the confusion were true. Maybe I’ve got some multi-level model of the world, and the multi-level model has a one-to-one direct correspondence with the causal elements of the physics? But once all the rules are specified, why wouldn’t the model just flatten out into yet another list of fundamental things and their interactions? Does everything I can see in the model, like a 747 or a human mind, have to become a separate real thing? But what if I see a pattern in that new supersystem? Supernaturalism is a special case of non-reductionism, where it is not 747s that are irreducible, but just (some) mental things. Religion is a special case of supernaturalism, where the irreducible mental things are God(s) and souls; and perhaps also sins, angels, karma, etc.” (Eliezer Yudkowsky, Rationality)

Mental models eventually flatten to the same level

“If you allow shops that sell otherwise banned products, some poor, honest, poorly educated mother of five kids is going to buy something that kills her. This is a prediction about a factual consequence, and as a factual question it appears rather straightforward—a sane person should readily confess this to be true regardless of which stance they take on the policy issue. You may also think that making things illegal just makes them more expensive, that regulators will abuse their power, or that her individual freedom trumps your desire to meddle with her life. But, as a matter of simple fact, she’s still going to die.” (Eliezer Yudkowsky, Rationality)

An illustrative example of People will make bad choices at scale and The Bad Button

“Special Relativity seems counterintuitive to us humans—like an arbitrary speed limit, which you could get around by going backward in time, and then forward again. A law you could escape prosecution for violating, if you managed to hide your crime from the authorities. But what Special Relativity really says is that human intuitions about space and time are simply wrong. There is no global “now,” there is no “before” or “after” across spacelike gaps. The ability to visualize a single global world, even in principle, comes from not getting Special Relativity on a gut level. Otherwise it would be obvious that physics proceeds locally with invariant states of distant entanglement, and the requisite information is simply not locally present to support a globally single world.” (Eliezer Yudkowsky, Rationality)

Nice point about missing Intuition regarding Special Relativity and Quantum Physics.

“So don’t oversimplify the relationship between loving truth and loving usefulness. It’s not one or the other. It’s complicated, which is not necessarily a defect in the moral aesthetics of single events” (Eliezer Yudkowsky, Rationality)

Some fun ideas here. Truth and usefulness are not mutually exclusive and Moral aesthetics (whatever that means…)

“As for “absolute certainty”—well, if you say that something is 99.9999% probable, it means you think you could make one million equally strong independent statements, one after the other, over the course of a solid year or so, and be wrong, on average, around once. This is incredible enough. (It’s amazing to realize we can actually get that level of confidence for “Thou shalt not win the lottery.”) So let us say nothing of probability 1.0. Once you realize you don’t need probabilities of 1.0 to get along in life, you’ll realize how absolutely ridiculous it is to think you could ever get to 1.0 with a human brain. A probability of 1.0 isn’t just certainty, it’s infinite certainty.” (Eliezer Yudkowsky, Rationality)

100% certainty is infinite certainty
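
The quote's arithmetic checks out directly (my own one-liner): at 99.9999% confidence per statement, a million independent statements yield about one error in expectation.

```python
p_wrong_each = 1 - 0.999999          # 1e-6 chance of error per statement
print(p_wrong_each * 1_000_000)      # expected errors across a million: ~1.0
```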

“Another example: You flip a coin ten times and see the sequence HHTTH:TTTTH. Maybe you started out thinking there was a 1% chance this coin was fixed. Doesn’t the hypothesis “This coin is fixed to produce HHTTH:TTTTH” assign a thousand times the likelihood mass to the observed outcome, compared to the fair coin hypothesis? Yes. Don’t the posterior odds that the coin is fixed go to 10:1? No. The 1% prior probability that “the coin is fixed” has to cover every possible kind of fixed coin—a coin fixed to produce HHTTH:TTTTH, a coin fixed to produce TTHHT:HHHHT, etc. The prior probability the coin is fixed to produce HHTTH:TTTTH is not 1%, but a thousandth of one percent. Afterward, the posterior probability the coin is fixed to produce HHTTH:TTTTH is one percent. Which is to say: You thought the coin was probably fair but had a one percent chance of being fixed to some random sequence; you flipped the coin; the coin produced a random-looking sequence; and that doesn’t tell you anything about whether the coin is fair or fixed. It does tell you, if the coin is fixed, which sequence it is fixed to.” (Eliezer Yudkowsky, Rationality)

Fairly interesting point about how to intuitively manage Bayesian updating
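
Here is the quote's update worked through in Python, using its own numbers (the code layout is mine): a 1% prior on “fixed”, spread evenly over all 2^10 possible sequences.

```python
n_flips = 10
n_sequences = 2 ** n_flips                   # 1024 possible fixed sequences

prior_fair = 0.99
prior_fixed_this = 0.01 / n_sequences        # "a thousandth of one percent"

lik_fair = 0.5 ** n_flips                    # fair coin: any sequence, 2**-10
lik_fixed_this = 1.0                         # coin fixed to the observed sequence
# (coins fixed to any other sequence assign the observation probability 0)

evidence = prior_fair * lik_fair + prior_fixed_this * lik_fixed_this
print(prior_fixed_this * lik_fixed_this / evidence)  # 0.01: posterior on "fixed"
print(prior_fair * lik_fair / evidence)              # 0.99: "fair" is unmoved
```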

“Part of the reason people get in trouble with words, is that they do not realize how much complexity lurks behind words. Can you visualize a “green dog”? Can you visualize a “cheese apple”? “Apple” isn’t just a sequence of two syllables or five letters. That’s a shadow. That’s the tip of the tiger’s tail. Words, or rather the concepts behind them, are paintbrushes—you can use them to draw images in your own mind. Literally draw, if you employ concepts to make a picture in your visual cortex. And by the use of shared labels, you can reach into someone else’s mind, and grasp their paintbrushes to draw pictures in their minds—sketch a little green dog in their visual cortex. But don’t think that, because you send syllables through the air, or letters through the Internet, it is the syllables or the letters that draw pictures in the visual cortex. That takes some complex instructions that wouldn’t fit in the sequence of letters. “Apple” is 5 bytes, and drawing a picture of an apple from scratch would take more data than that. “Apple” is merely the tag attached to the true and wordless apple concept, which can paint a picture in your visual cortex, or collide with “cheese,” or recognize an apple when you see one, or taste its archetype in apple pie, maybe even send out the motor behavior for eating an apple . . . And it’s not as simple as just calling up a picture from memory. Or how would you be able to visualize combinations like a “triangular lightbulb”—imposing triangleness on lightbulbs, keeping the essence of both, even if you’ve never seen such a thing in your life? Don’t make the mistake the behaviorists made. There’s far more to speech than sound in air. The labels are just pointers—“look in memory area 1387540.” Sooner or later, when you’re handed a pointer, it comes time to dereference it, and actually look in memory area 1387540. What does a word point to?” (Eliezer Yudkowsky, Rationality)

^36679b

Words are pointers to concept-space

“What you actually end up doing screens off the clever reason why you’re doing it. Contrast amazing clever reasoning that leads you to study many sciences, to amazing clever reasoning that says you don’t need to read all those books. Afterward, when your amazing clever reasoning turns out to have been stupid, you’ll have ended up in a much better position if your amazing clever reasoning was of the first type. When I look back upon my past, I am struck by the number of semi-accidental successes, the number of times I did something right for the wrong reason.” (Eliezer Yudkowsky, Rationality)

This is really just Hedging your bets

“But mostly I just hand you an open, unsolved problem: make it possible/easier for groups of strangers to coalesce into an effective task force over the Internet, in defiance of the usual failure modes and the default reasons why this is a non-ancestral problem.” (Eliezer Yudkowsky, Rationality)

“And that’s what I mean by putting my finger on qualitative reasoning as the source of the problem. The dichotomy between belief and disbelief, being binary, is confusingly similar to the dichotomy between truth and untruth. So let’s use quantitative reasoning instead. Suppose that I assign a 70% probability to the proposition that snow is white. It follows that I think there’s around a 70% chance that the sentence “snow is white” will turn out to be true. If the sentence “snow is white” is true, is my 70% probability assignment to the proposition, also “true”? Well, it’s more true than it would have been if I’d assigned 60% probability, but not so true as if I’d assigned 80% probability. When talking about the correspondence between a probability assignment and reality, a better word than “truth” would be “accuracy.” “Accuracy” sounds more quantitative, like an archer shooting an arrow: how close did your probability assignment strike to the center of the target?” (Eliezer Yudkowsky, Rationality)

Beliefs can have quantitative accuracy, not binary correctness

“So having a word “wiggin” for green-eyed black-haired people is more useful than just saying “green-eyed black-haired person” precisely when: Green-eyed people are more likely than average to be black-haired (and vice versa), meaning that we can probabilistically infer green eyes from black hair or vice versa; or Wiggins share other properties that can be inferred at greater-than-default probability. In this case we have to separately observe the green eyes and black hair; but then, after observing both these properties independently, we can probabilistically infer other properties (like a taste for ketchup). One may even consider the act of defining a word as a promise to this effect. Telling someone, “I define the word ‘wiggin’ to mean a person with green eyes and black hair,” by Gricean implication, asserts that the word “wiggin” will somehow help you make inferences / shorten your messages.” (Eliezer Yudkowsky, Rationality)

^6f364d

Naming a phenomenon should imply some inference

“Why are there schools of martial arts, but not rationality dojos? (This was the first question I asked in my first blog post.) Is it more important to hit people than to think? No, but it’s easier to verify when you have hit someone. That’s part of it, a highly central part.” (Eliezer Yudkowsky, Rationality)

Rationality like a martial art, Easier to build an education system when the goals are clear and measurable

“Albert says that people have “free will.” Barry says that people don’t have “free will.” Well, that will certainly generate an apparent conflict. Most philosophers would advise Albert and Barry to try to define exactly what they mean by “free will,” on which topic they will certainly be able to discourse at great length. I would advise Albert and Barry to describe what it is that they think people do, or do not have, without using the phrase “free will” at all. (If you want to try this at home, you should also avoid the words “choose,” “act,” “decide,” “determined,” “responsible,” or any of their synonyms.) This is one of the nonstandard tools in my toolbox, and in my humble opinion, it works way way better than the standard one. It also requires more effort to use; you get what you pay for.” (Eliezer Yudkowsky, Rationality)

Use reduction to clarify arguments, mentions of Free Will

“The ecosystem would make much more sense if it wasn’t designed by a unitary Who, but, rather, created by a horde of deities—say from the Hindu or Shinto religions. This handily explains both the ubiquitous purposefulnesses, and the ubiquitous conflicts: More than one deity acted, often at cross-purposes. The fox and rabbit were both designed, but by distinct competing deities. I wonder if anyone ever remarked on the seemingly excellent evidence thus provided for Hinduism over Christianity. Probably not.” (Eliezer Yudkowsky, Rationality)

Fun point about conflicts in nature being arguments for multi-deity religions.

“If God did speak plainly, and answer prayers reliably, God would just become one more boringly real thing, no more worth believing in than the postman. If God were real, it would destroy the inner uncertainty that brings forth outward fervor in compensation. And if everyone else believed God were real, it would destroy the specialness of being one of the elect.” (Eliezer Yudkowsky, Rationality)

Why belief in God is special because of uncertainty and not everyone believing.

“Do not ask which beliefs to profess, but which experiences to anticipate.” (Eliezer Yudkowsky, Rationality)

Knowledge is testable prediction

“Brains evolved from non-brainy matter by natural selection; they were not justified into existence by arguing with an ideal philosophy student of perfect emptiness. This does not make our judgments meaningless. A brain-engine can work correctly, producing accurate beliefs, even if it was merely built—by human hands or cumulative stochastic selection pressures—rather than argued into existence. But to be satisfied by this answer, one must see rationality in terms of engines, rather than arguments.” (Eliezer Yudkowsky, Rationality)

See rationality in terms of engines rather than arguments.

“Perhaps someone will see an opportunity to be clever, and say: “Okay. I believe in free will because I have free will. There, I’m done.” Of course it’s not that easy. My perception of socks on my feet is an event in the visual cortex. The workings of the visual cortex can be investigated by cognitive science, should they be confusing. My retina receiving light is not a mystical sensing procedure, a magical sock detector that lights in the presence of socks for no explicable reason; there are mechanisms that can be understood in terms of biology. The photons entering the retina can be understood in terms of optics. The shoe’s surface reflectance can be understood in terms of electromagnetism and chemistry. My feet getting cold can be understood in terms of thermodynamics. So it’s not as easy as saying, “I believe I have free will because I have it—there, I’m done!” You have to be able to break the causal chain into smaller steps, and explain the steps in terms of elements not themselves confusing.” (Eliezer Yudkowsky, Rationality)

Use reduction to clarify arguments

“John Kenneth Galbraith said: “Faced with the choice of changing one’s mind and proving that there is no need to do so, almost everyone gets busy on the proof.”1 And the greater the inconvenience of changing one’s mind, the more effort people will expend on the proof.” (Eliezer Yudkowsky, Rationality)

^5761af

John Kenneth Galbraith, Turn to face your beliefs, then throw rocks at them

“The original lie is only the beginning of the problem. Then you have all the ill habits of thought that have evolved to defend it. Religion is a poisoned chalice, from which we had best not even sip. Spirituality is the same cup after the original pellet of poison has been taken out, and only the dissolved portion remains—a little less directly lethal, but still not good for you.” (Eliezer Yudkowsky, Rationality)

Interesting, wonder what the Ill habits of thought for religion are (though I can imagine some perhaps)

“If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction. If you’re very confident in your theory, and therefore anticipate seeing an outcome that matches your hypothesis, this can only provide a very small increment to your belief (it is already close to 1); but the unexpected failure of your prediction would (and must) deal your confidence a huge blow. On average, you must expect to be exactly as confident as when you started out. Equivalently, the mere expectation of encountering evidence—before you’ve actually seen it—should not shift your prior beliefs. (Again, if this is not intuitively obvious, see An Intuitive Explanation of Bayesian Reasoning.)” (Eliezer Yudkowsky, Rationality)

^b650a6

Bayes’ Theorem and Bayesian updating
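
A numeric illustration with my own made-up numbers: a confident theory expects weak confirmation almost every time, balanced by a small chance of a devastating blow, so the expected posterior still equals the prior.

```python
prior = 0.95
p_pass_given_h, p_pass_given_not_h = 0.99, 0.2   # the test "usually passes"

p_pass = p_pass_given_h * prior + p_pass_given_not_h * (1 - prior)
post_pass = p_pass_given_h * prior / p_pass                 # ~0.989: tiny nudge up
post_fail = (1 - p_pass_given_h) * prior / (1 - p_pass)     # ~0.192: huge blow down

print(round(post_pass, 3), round(post_fail, 3))
print(post_pass * p_pass + post_fail * (1 - p_pass))        # 0.95: back to the prior
```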

“But that’s for one event. When it comes to multiplying by quantities and probabilities, complication is to be avoided—at least if you care more about the destination than the journey. When you’ve reflected on enough intuitions, and corrected enough absurdities, you start to see a common denominator, a meta-principle at work, which one might phrase as “Shut up and multiply.”” (Eliezer Yudkowsky, Rationality)

Shut up and multiply

“For it is only the action that matters, and not the reasons for doing anything. If you build the gun and load the gun and put the gun to your head and pull the trigger, even with the cleverest of arguments for carrying out every step—then, bang.” (Eliezer Yudkowsky, Rationality)

Action matters more than motivation or reasoning, and Use outcomes to cut through fog

“The first—that I drew on multiple sources to create my Art. I read many different authors, many different experiments, used analogies from many different fields. You will need to draw on multiple sources to create your portion of the Art. You should not be getting all your rationality from one author—though there might be, perhaps, a certain centralized website, where you went to post the links and papers that struck you as really important. And a maturing Art will need to draw from multiple sources. To the best of my knowledge there is no true science that draws its strength from only one person. To the best of my knowledge that is strictly an idiom of cults. A true science may have its heroes, it may even have its lonely defiant heroes, but it will have more than one.” (Eliezer Yudkowsky, Rationality)

Have multiple guiding sources for a discipline

“The more directly your arguments bear on a question, without intermediate inferences—the closer the observed nodes are to the queried node, in the Great Web of Causality—the more powerful the evidence. It’s a theorem of these causal graphs that you can never get more information from distant nodes, than from strictly closer nodes that screen off the distant ones. Jerry Cleaver said: “What does you in is not failure to apply some high-level, intricate, complicated technique. It’s overlooking the basics. Not keeping your eye on the ball.”1” (Eliezer Yudkowsky, Rationality)

^b8d732

Look for closer nodes when searching for causal chains

“Now there is an important sense in which we can legitimately move from evident characteristics to not-so-evident ones. You can, legitimately, see that Socrates is human-shaped, and predict his vulnerability to hemlock. But this probabilistic inference does not rely on dictionary definitions or common usage; it relies on the universe containing empirical clusters of similar things.” (Eliezer Yudkowsky, Rationality)

Definitions are clouds in concept-space

“The reason that educated religious people stay religious, I suspect, is that when they doubt, they are subconsciously very careful to attack their own beliefs only at the strongest points—places where they know they can defend. Moreover, places where rehearsing the standard defense will feel strengthening.” (Eliezer Yudkowsky, Rationality)

Important to note that this is speculation, but if true it is an example of backfiring, and it also explains other examples of people Separating your professional rationality from your personal rationality

“But altruism isn’t the warm fuzzy feeling you get from being altruistic. If you’re doing it for the spiritual benefit, that is nothing but selfishness. The primary thing is to help others, whatever the means. So shut up and multiply!” (Eliezer Yudkowsky, Rationality)

Do good for others, not for yourself

“Regarding Science as a mere approximation to some probability-theoretic ideal of rationality . . . would certainly seem to be rational. There seems to be an extremely reasonable-sounding argument that Bayes’s Theorem is the hidden structure that explains why Science works. But to subordinate Science to the grand schema of Bayesianism, and let Bayesianism come in and override Science’s verdict when that seems appropriate, is not a trivial step! Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science . . . right? So, are you going to believe in faster-than-light quantum “collapse” fairies after all? Or do you think you’re smarter than that?” (Eliezer Yudkowsky, Rationality)

^9281a1

A potential reduction of science

“If a chain of reasoning doesn’t make me nervous, in advance, about waking up with a tentacle, then that reasoning would be a poor explanation if the event did happen, because the combination of prior probability and likelihood was too low to make me allocate any significant real-world probability mass to that outcome.” (Eliezer Yudkowsky, Rationality)

Our feelings should follow our logic

“When the unenlightened ones try to be profound, they draw endless verbal comparisons between this topic, and that topic, which is like this, which is like that; until their graph is fully connected and also totally useless. The remedy is specific knowledge and in-depth study. When you understand things in detail, you can see how they are not alike, and start enthusiastically subtracting edges off your graph.” (Eliezer Yudkowsky, Rationality)

Enthusiastically subtract edges off your graph

“In P(A|X), A is the thing we want to know about. X is how we’re observing it; X is the evidence we’re using to make inferences about A. Remember that for every expression P(Q|P), we want to know about the probability for Q given P, the degree to which P implies Q—a more sensible notation, which it is now too late to adopt, would be P(Q ← P).” (Eliezer Yudkowsky, Rationality)

Notation ideas

“But what about when our conscious motives for the search—the criteria we can admit to ourselves—don’t square with subconscious influences? When we are carrying out an allegedly altruistic search, a search for an altruistic policy, and we find a strategy that benefits others but disadvantages ourselves—well, we don’t stop looking there; we go on looking. Telling ourselves that we’re looking for a strategy that brings greater altruistic benefit, of course. But suppose we find a policy that has some defensible benefit, and also just happens to be personally convenient? Then we stop the search at once! In fact, we’ll probably resist any suggestion that we start looking again—pleading lack of time, perhaps. (And yet somehow, we always have cognitive resources for coming up with justifications for our current policy.)” (Eliezer Yudkowsky, Rationality)

Stopping bias

“But come on . . . doesn’t it seem a little . . . amazing . . . that hundreds of millions of years worth of evolution’s death tournament could cough up mothers and fathers, sisters and brothers, husbands and wives, steadfast friends and honorable enemies, true altruists and guardians of causes, police officers and loyal defenders, even artists sacrificing themselves for their art, all practicing so many kinds of love? For so many things other than genes? Doing their part to make their world less ugly, something besides a sea of blood and violence and mindless replication? “Are you claiming to be surprised by this? If so, question your underlying model, for it has led you to be surprised by the true state of affairs. Since the beginning, not one unusual thing has ever happened.”” (Eliezer Yudkowsky, Rationality)

Since the beginning, not one unusual thing has happened

“The eleventh virtue is scholarship. Study many sciences and absorb their power as your own. Each field that you consume makes you larger. If you swallow enough sciences the gaps between them will diminish and your knowledge will become a unified whole.” (Eliezer Yudkowsky, Rationality)

Swallow enough sciences and the gaps between them will diminish

“The way a belief feels from inside is that you seem to be looking straight at reality. When it actually seems that you’re looking at a belief, as such, you are really experiencing a belief about belief.” (Eliezer Yudkowsky, Rationality)

Beliefs feel like reality

“For it is written: If you can lighten your burden you must do so. There is no straw that lacks the power to break your back.” (Eliezer Yudkowsky, Rationality)

^f56eca

Focus

“But the deeper failure is supposing that an answer can be mysterious. If a phenomenon feels mysterious, that is a fact about our state of knowledge, not a fact about the phenomenon itself. The vitalists saw a mysterious gap in their knowledge, and postulated a mysterious stuff that plugged the gap. In doing so, they mixed up the map with the territory. All confusion and bewilderment exist in the mind, not in encapsulated substances.” (Eliezer Yudkowsky, Rationality)

Another good explanation of the Mysterious Answers pattern

“The conjunction fallacy is when humans rate the probability P(A,B) higher than the probability P(B), even though it is a theorem that P(A,B) ≤ P(B). For example, in one experiment in 1981, 68% of the subjects ranked it more likely that “Reagan will provide federal support for unwed mothers and cut federal support to local governments” than that “Reagan will provide federal support for unwed mothers.”” (Eliezer Yudkowsky, Rationality)

Conjunction fallacy
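A quick simulation sketch of the theorem (the event probabilities are made up; only the inequality matters):

```python
import random

# Empirical check of the conjunction rule P(A,B) <= P(B): adding a
# detail can only remove probability mass, never add it.
random.seed(0)
trials = 100_000
count_b = count_ab = 0
for _ in range(trials):
    a = random.random() < 0.7  # some event A
    b = random.random() < 0.3  # some event B, independent here
    count_b += b
    count_ab += a and b
print(f"P(B)   ~ {count_b / trials:.3f}")   # ~0.300
print(f"P(A,B) ~ {count_ab / trials:.3f}")  # ~0.210, never above P(B)
```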

“It would concede far too much (indeed, concede the whole argument) to agree with the premise that you need absolute knowledge of absolutely good options and absolutely evil options in order to be moral. You can have uncertain knowledge of relatively better and relatively worse options, and still choose. It should be routine, in fact, not something to get all dramatic about.” (Eliezer Yudkowsky, Rationality)

Moral choices are based on relative value and imperfect data
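A minimal sketch of that routine choosing, assuming invented utilities and probabilities: pick the relatively better option by expected value, no certainty required.

```python
# Choosing under uncertainty: compare imperfectly known options by
# expected value. Outcome values and probabilities are invented.
options = {
    "option_1": [(0.6, 10), (0.4, -2)],  # (probability, outcome value)
    "option_2": [(0.9, 3), (0.1, 0)],
}
expected = {name: sum(p * v for p, v in dist)
            for name, dist in options.items()}
best = max(expected, key=expected.get)
print(expected, "->", best)  # {'option_1': 5.2, 'option_2': 2.7} -> option_1
```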

“When this woman was in high school, she thought she was an atheist. But she decided, at that time, that she should act as if she believed in God. And then—she told me earnestly—over time, she came to really believe in God. So far as I can tell, she is completely wrong about that. Always throughout our conversation, she said, over and over, “I believe in God,” never once, “There is a God.” When I asked her why she was religious, she never once talked about the consequences of God existing, only about the consequences of believing in God. Never, “God will help me,” always, “my belief in God helps me.” When I put to her, “Someone who just wanted the truth and looked at our universe would not even invent God as a hypothesis,” she agreed outright. She hasn’t actually deceived herself into believing that God exists or that the Jewish religion is true. Not even close, so far as I can tell.” (Eliezer Yudkowsky, Rationality)

Belief in God vs existence of God

“(I strongly suspect that a major part of science’s PR problem in the population at large is people who instinctively believe that if knowledge is given away for free, it cannot be important. If you had to undergo a fearsome initiation ritual to be told the truth about evolution, maybe people would be more satisfied with the answer.)” (Eliezer Yudkowsky, Rationality)

^7135a0

Scarcity and value; undervaluing science because of its ease of access

“And so I wouldn’t say that a well-designed Friendly AI must necessarily refuse to push that one person off the ledge to stop the train. Obviously, I would expect any decent superintelligence to come up with a superior third alternative. But if those are the only two alternatives, and the FAI judges that it is wiser to push the one person off the ledge—even after taking into account knock-on effects on any humans who see it happen and spread the story, etc.—then I don’t call it an alarm light, if an AI says that the right thing to do is sacrifice one to save five. Again, I don’t go around pushing people into the paths of trains myself, nor stealing from banks to fund my altruistic projects. I happen to be a human. But for a Friendly AI to be corrupted by power would be like it starting to bleed red blood. The tendency to be corrupted by power is a specific biological adaptation, supported by specific cognitive circuits, built into us by our genes for a clear evolutionary reason. It wouldn’t spontaneously appear in the code of a Friendly AI any more than its transistors would start to bleed.” (Eliezer Yudkowsky, Rationality)
^747282

Friendly Artificial Intelligence: good points about an AI likely finding previously unseen third alternatives, and about how being corrupted by power is a specific biological adaptation.

“The Second Law of Thermodynamics is a consequence of a theorem which can be proven in the standard model of physics: If you take a volume of phase space, and develop it forward in time using standard physics, the total volume of the phase space is conserved. For example, let there be two systems, X and Y, where X has 8 possible states, Y has 4 possible states, and the joint system (X,Y) has 32 possible states. The development of the joint system over time can be described as a rule that maps initial points onto future points. For example, the system could start out in X7Y2, then develop (under some set of physical laws) into the state X3Y3 a minute later. Which is to say: if X started in state X7, and Y started in state Y2, and we watched it for 1 minute, we would see X go to X3 and Y go to Y3. Such are the laws of physics.” (Eliezer Yudkowsky, Rationality)

Second law of thermodynamics
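A toy sketch of the volume-conservation claim, using the quote's 8 × 4 state space (the particular permutation is invented; any reversible dynamics would do):

```python
import itertools
import random

# Toy model of phase-space volume conservation: deterministic,
# reversible dynamics form a bijection on states, so any set of
# initial states maps to an equally large set of future states.
random.seed(42)
states = list(itertools.product(range(1, 9), range(1, 5)))  # 32 (X,Y) states

future = states[:]
random.shuffle(future)
step = dict(zip(states, future))  # "one minute of physics": a permutation

# Distinct inputs map to distinct outputs, so volume is conserved.
assert len(set(step.values())) == len(states)

# A 3-state region of phase space is still a 3-state region later,
# e.g. X7Y2 might develop into an X3Y3-style state under these laws.
region = {(7, 2), (7, 3), (8, 1)}
print({step[s] for s in region})  # exactly 3 future states
```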

“A useful model isn’t just something you know, as you know that an airplane is made of atoms. A useful model is knowledge you can compute in reasonable time to predict real-world events you know how to observe.” (Eliezer Yudkowsky, Rationality)

Useful model
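A small sketch of the distinction, with an invented scenario: the kinematic formula below is a useful model because it yields a testable prediction in microseconds, unlike an (in-principle) atom-by-atom simulation.

```python
# A computable model: predict the fall time of a dropped object.
# "Knowing it's made of atoms" yields no prediction in reasonable
# time; this one-line kinematics model does.
def fall_time(height_m: float, g: float = 9.81) -> float:
    """Time for an object dropped from rest to fall height_m metres."""
    return (2 * height_m / g) ** 0.5

print(f"{fall_time(20.0):.2f} s")  # ~2.02 s: cheap and observable
```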

“We can also set up a line of retreat for those afraid to allow a causal role for evolution, in their account of how morality came to be. (Note that this is extremely distinct from granting evolution a justificational status in moral theories.) Love has to come into existence somehow—for if we cannot take joy in things that can come into existence, our lives will be empty indeed. Evolution may not be a particularly pleasant way for love to evolve, but judge the end product—not the source. Otherwise you would be committing what is known (appropriately) as The Genetic Fallacy: causation is not the same concept as justification. It’s not like you can step outside the brain evolution gave you; rebelling against nature is only possible from within nature.” (Eliezer Yudkowsky, Rationality)

Genetic fallacy

“[We] can intuitively visualize that a hand is made of fingers (and thumb and palm). To ask whether it’s really our hand that picks something up, or merely our fingers, thumb, and palm, is transparently a wrong question. But the gap between physics and cognition cannot be crossed by direct visualization. No one can visualize atoms making up a person, the way they can see fingers making up a hand. And so it requires constant vigilance to maintain your perception of yourself as an entity within physics.” (Eliezer Yudkowsky, Rationality)

^6d72d6

Layers of abstraction

“The genetic fallacy is formally a fallacy, because the original cause of a belief is not the same as its current justificational status, the sum of all the support and antisupport currently known.” (Eliezer Yudkowsky, Rationality)

Genetic fallacy

“You cannot rely on anyone else to argue you out of your mistakes; you cannot rely on anyone else to save you; you and only you are obligated to find the flaws in your positions; if you put that burden down, don’t expect anyone else to pick it up. And I wonder if that advice will turn out not to help most people, until they’ve personally blown off their own foot, saying to themselves all the while, correctly, “Clearly I’m winning this argument.”” (Eliezer Yudkowsky, Rationality)

Turn to face your beliefs, then throw rocks at them

“Also crucial was that my listeners could see immediately that my reply made sense. They might or might not have agreed with the thought, but it was not a complete non-sequitur unto them. I know transhumanists who are unable to seem deep because they are unable to appreciate what their listener does not already know. If you want to sound deep, you can never say anything that is more than a single step of inferential distance away from your listener’s current mental state. That’s just the way it is. To seem deep, study nonstandard philosophies. Seek out discussions on topics that will give you a chance to appear deep. Do your philosophical thinking in advance, so you can concentrate on explaining well. Above all, practice staying within the one-inferential-step bound.” (Eliezer Yudkowsky, Rationality)

One Inferential Step

“Ultimately, reductionism is just disbelief in fundamentally complicated things. If “fundamentally complicated” sounds like an oxymoron . . . well, that’s why I think that the doctrine of non-reductionism is a confusion, rather than a way that things could be, but aren’t. You would be wise to be wary, if you find yourself supposing such things.” (Eliezer Yudkowsky, Rationality)

^ff0d2d

Notes

Arguing about definitions instead of concepts: it is lazy to argue about definitions; what matters is arguing about actual outcomes (irrespective of how the definitions map onto those outcomes). Focus on outcomes, not definitions.

Forward flow vs. backward flow: use experimentation and curiosity to develop hypotheses, rather than starting from a pet hypothesis and designing an experiment to confirm it. The backward flow is more likely to succumb to biases.

A good discussion of “emergent phenomena” that highlights the term as one of the ways we trick ourselves into thinking we understand or know something.

Strongly argues for strict utilitarianism when it comes to saving lives.

Pushes a strategy of constant incremental self-improvement, where you always do your best to make your map better reflect the territory.

Defines intelligence as “squeezing the future into a constrained region,” which is a useful characterization.