Only Eliezer Yudkowsky could write an entire book about how great he is for buying some lightbulbs for his girlfriend. Just kidding...sort of.

Seriously, I think this book has a really interesting premise, but it isn't executed well. Yudkowsky starts from a generalization of the efficient markets principle: to paraphrase, if something can be done that is valuable and cost-effective, someone else will have already figured out how to do it. Taken to the extreme, however, this idea would lead to the conclusion that you shouldn't really bother doing anything. We know, based on unscientific observation, that people do achieve good things. So given these two points that are apparently in tension, how can we discern when there may be opportunities that are both worthwhile and feasible? This is a very important question for a person trying to plan their life.

A lot of this book is spent going over the Econ 201 concepts that often explain why a thing appears to work suboptimally: principal-agent problems, asymmetric information, and collective action problems. I don't think Yudkowsky adds much to the reams that have already been written on these topics, but it's a reasonable enough treatment. Basically in this section, Yudkowsky establishes that we can often easily identify "inadequate" situations that are nonetheless not exploitable--the inadequacy doesn't come from people being dumb or irrational, just from unfortunate information and/or incentive structures. In this part, Yudkowsky talks a lot about how he concluded that the Bank of Japan's monetary policy was flawed, which was later borne out by a policy change that improved things. I'm willing to believe he actually did reach a correct conclusion, but it's sort of odd to me that he spends so much time talking about it. Yudkowsky is a big proponent of "rationality-as-winning," and I can hardly think of something that matters less, in terms of winning, than a non-Japanese non-central-banker's view on Japanese monetary policy.

The other anecdote Yudkowsky talks a lot about, as I mentioned above, is his purchase of lightbulbs for his girlfriend. I'll spell it out a little more, since I was being a little cheeky before. As Yudkowsky recounts it, his girlfriend suffered severely from seasonal affective disorder, and the standard light-box treatments weren't working. He got the bright idea (ha) to just buy a ton more lightbulbs to increase the amount of light, and lo and behold, his girlfriend was cured. His conclusion is basically, I couldn't find any academic studies about this, but I'm not super surprised because academia is full of bad incentives--so when you get an idea like this that seems so obvious it must have been tried, don't be too quick to assume that.

I actually think this is good advice, although not necessarily reached for the right reasons. One thing Yudkowsky doesn't mention at all is the placebo effect, which could easily explain this "prescription" working in an individual case while not being provable in a scientific sense. (Heck, it could even be that his girlfriend was so touched by her boyfriend coming up with crazy schemes just to try to help her, that it helped to cure her.) Another is simply that sometimes things that don't work on average in statistically detectable ways can still work for individuals! Individual outcomes are determined by whole hosts of interacting factors that we can't understand fully. But these don't invalidate the point--they're just further evidence that, even if you don't think you're able to come up with generally applicable innovative ideas, it's still worth trying different stuff in individual cases.

A related and very topical discussion arose recently around face mask effectiveness, and whether public health bodies should recommend their use as protection from SARS-CoV-2. As you may recall, there were initially clear communications in the US that, in general, people should not wear masks; only later was this reversed to the current state where masks are generally recommended and indeed required in many public places. Scott Alexander had an extensive discussion of this issue on his blog (https://slatestarcodex.com/2020/03/23/face-masks-much-more-than-you-wanted-to-know/). His conclusion is basically that the initial non-recommendation of masks was based on the evidence for their efficacy not reaching a sufficiently scientific burden of proof--"not proven effective beyond a reasonable doubt."

I think these discussions add up to some good advice that Yudkowsky sort of communicates in the book, but that I think could have been outlined more clearly. When it comes to individual judgments, a rational person should be weighing evidence in a Bayesian way, such that our determination of the "right" course of action always comes down to a degree of belief. This is a significantly different standard than is used in most of our society's information-generating institutions. Science and medicine may impose stricter rules that filter out more false positives but also some true positives; academic publishing, conversely, may reward p-hacking, which results in a lot of non-replicable results being published; and journalism selects stories based on representativeness and availability rather than base rates. So it's not responsible for us to outsource our decisionmaking entirely to any of these information-generating institutions, although we should of course consider their outputs in our own Bayesian decision process. And the corollary is that there may be opportunities for achievement that have not been ratified by our society's information-generating structures. We can begin to identify these by thinking about the biases inherent in these structures.
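To make the Bayesian framing concrete, here is a toy calculation. All the numbers are invented for illustration (neither the book nor this review supplies any), but the arithmetic shows why a single personal data point can meaningfully shift a degree of belief even when the evidence falls far short of a scientific burden of proof:

```python
# Toy Bayesian update: one observation of "more lightbulbs seemed to help."
# All probabilities below are made-up illustrative values, not claims from the book.
prior = 0.30            # assumed prior belief that the intervention genuinely works
p_obs_if_works = 0.90   # assumed chance of seeing improvement if it works
p_obs_if_not = 0.40     # assumed chance of improvement anyway (placebo, remission, etc.)

# Bayes' rule: P(works | improvement observed)
posterior = (p_obs_if_works * prior) / (
    p_obs_if_works * prior + p_obs_if_not * (1 - prior)
)
print(round(posterior, 3))  # prints 0.491
```

Under these assumed numbers, one success moves the belief from 30% to about 49%: far from proof, but plenty to justify continuing a cheap, reversible intervention, which is exactly the cost-based filter discussed below.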

We should also impose a heavy filter on "what to try" based on cost--it's obviously not a good idea to start taking unproven medications based on the above reasoning, because they could have huge and/or irreversible drawbacks. However, something like "exposing myself to more lightbulbs" seems pretty low-cost and easily reversible. Yudkowsky does communicate this conclusion well--just try stuff if it's low-cost, and don't worry too much about whether you are second-guessing the (generalized) market.

The back end of the book seems like a lot of inside baseball relating to the specific circles Yudkowsky interacts with. The discussion focuses on the responsible use of "the outside view"--for those not familiar with the concept, taking the outside view on a problem means trying to abstract from your individual circumstances and reason by analogy with a related class of instances, whereas taking the inside view means trying to reason through the specifics of your circumstances. The relevant literature identifies that using the outside view can reduce bias, for example helping us to avoid the "planning fallacy" where we assume a project we are working on will go smoothly because we imagine the best possible outcome, rather than thinking of things that could go wrong--which will more naturally come to mind if we think about similar things that happened in the past.

Basically Yudkowsky argues that people use the outside view excessively, resulting in excessive modesty and suboptimal achievement. He cites, for example, conversations with startup founders who use the outside view to estimate their potential market rather than thinking about how it could be a thousand times bigger, and thus end up not shooting for anything more ambitious than others have already achieved. I think this must be a very specific problem with people in the circles Yudkowsky runs in, because I rarely if ever hear people reasoning based on the outside view. (To be fair, only a very specific subset of people are going to read an Eliezer Yudkowsky book either, and he points this out.) It's true that the outside view has a serious drawback, in that it's impossible to definitively identify the correct reference class. But I've never understood the outside view to be a standalone planning tool; rather, it's useful for getting a "second opinion" on whether you're being too optimistic about something. It also shouldn't prevent you from trying to achieve something greater than your reference class; it should just dissuade you from making plans that will have very bad outcomes if you don't (for example, overleveraging a company)--which I think remains good advice. I feel like the people Yudkowsky is talking about are using the outside view in an idiosyncratically bad way.

Ultimately, I would boil down the message of this book to: think for yourself, and be willing to try unorthodox stuff if the cost is pretty low. Both very sound pieces of advice, just reached in what felt like a roundabout way.

I'm just going to get it out there - Yudkowsky, along with Scott Alexander (and SSC, LessWrong-ers, rationalists, etc), irritates me on a personal level. Is my review biased based on this? Yeah, probably, so you can consider it with that in mind. That being said, there are at least snippets of wisdom in this book.

"...Usually, when things suck, it's because they suck in a way that's a Nash equilibrium."

"So far, every time I've asked you why someone is acting insane, you've claimed that it's secretly a sane response to someone else acting insane. Where does this process bottom out?"

The Gell-Mann Amnesia effect

"You will detect inadequacy every time you go looking for it, whether or not it's there. If you see the same vision wherever you look, that's the same as being blind."

"...You can say 'holy shit, everyone in the world is fucking insane. However, none of them seem to realize that they're insane. By extension, I am probably insane. I should take careful steps to minimize the damage I do.'"

"...When you previously just had a lot of prior reasoning, or you were previously trying to generalize from other people's not-quite-similar experiences, and then you collide directly with reality for the first time, one data point is huge."

"If you and a trusted peer don't converge on identical beliefs once you have a full understanding of one another's positions, at least one of you must be making some kind of mistake. If we were fully rational (and fully honest), then we would always eventually reach consensus on questions of fact."

"Hey! Guys! I found out how to take over the world using only the power of my mind and a toothpick." "You can't do that. Nobody's done that before." "Of course they didn't, they were completely irrational." "But they thought they were rational too." "The difference is that I'm right." "They thought that too!"

"If just anyone could find some easy sentences to say that let them get higher status than God, then your system for allocating status would be too easy to game."

"Try to make sure you'd arrive at different beliefs in different worlds. You don't want to think in such a way that you wouldn't believe in a conclusion in a world where it were true, just because a fallacious argument could support it. Emotionally appealing mistakes are not invincible cognitive traps that nobody can ever escape from. Sometimes they're not even that hard to escape."

"If you're trying to do something unusually well (a common enough goal for ambitious scientists, entrepreneurs, and effective altruists), then this will often mean that you need to seek out the most neglected problems. You'll have to make use of information that isn't widely known or accepted, and pass into relatively uncharted waters. And modesty is especially detrimental for that kind of work, because it discourages acting on private information, making less-than-certain bets, and breaking new ground."

Is This An Overview?
In an efficient market, in an efficient civilization, an individual cannot do better than the collective power of the many, who have far more information available to them. Even an individual with information that others lack cannot improve the system, profit from fixing a problem, or otherwise exploit it. In adequate systems, common problems are resolved by the community, because the good ideas have already been tried. The collective might not get the exact answer right, but no individual can predict the direction of its error.

Alternatively, there are inadequate systems, in which individuals can do better than the community because problems exist but do not get resolved. Civilization gets stuck with inadequate equilibria when they are systemically unfixable. There are various ways in which an inadequate system, an inadequate civilization, can develop.

Central decision makers can prevent others from fixing the problem. Decision makers are often not the beneficiaries of their decisions. Asymmetric information means decision makers cannot know what, or whose, information to trust. A system might be inadequate without being exploitable, because many competitors are already pursuing the available opportunities, producing a competitive equilibrium. And improving the system would require large-scale coordinated action, which is difficult to facilitate.
 
How To And Not To Think About Inadequate Systems?
Wrong guesses and false cynicism do exist. Different systems are dysfunctional in different ways. No individual is better at everything, but individuals can be better at some things and worse at others. There is a lot of variation in expert views.

Although inadequate systems exist, simply assuming inadequacy makes people see it everywhere and generate arguments to match; inferring inadequacy from the mere existence of a problem is not a reliable rule. On the other hand, a blanket distrust of inadequacy arguments does not get far either. Civilization cannot be beaten all the time, but it's good to be skeptical and check for inadequacy.
 
Caveats?
The explanations could be improved, and the organizational quality is mixed. The book combines practical examples with abstract reasoning, and the abstract reasoning and dialogues can become confusing. Some parts would be better understood with prerequisite knowledge.

This book builds on the dichotomy between perfect- and imperfect-information models, and aims to improve on them by relaxing the strictness of the perfect-information assumption.

Some very good ideas, at times not very well written and repetitive. Might change the rating to 4 stars after some reflection and noticing if reading has an impact on my thinking.

Having already read HPMOR and about half of The Sequences, I still hadn't made up my mind about whether EY had anything genuinely useful to say. Certainly his writing is interesting, and there are lots of interesting facts in it. But it's another question altogether whether the whole package of his writing is worthwhile. He is not at all shy about prescribing certain ways of thinking, and simply knowing lots of interesting trivia about evolutionary psychology and the history of science is no basis for dispensing life advice. His writing always bears a whiff of egotism-fodder. [Not an exact quote:] "Ah, my dear reader, because you have been initiated into the Bayesian Conspiracy, you too are far smarter than those so-called-academics, with their frequentist statistics and use of 'emergence' as a fake explanation."

This book convinced me that EY does actually have something genuinely enriching to say. Like Haidt's "The Righteous Mind", this book provides you with a shiny new tool for understanding the world. Whereas TRM allowed you to understand why other people have such perverse political opinions, this book allows you to understand why society can act so stupid sometimes, and how you should respond as an individual to such bewildering incompetence. In particular: When is it OK to think you're right and everyone else is wrong, and when can you expect to be able to do better than everyone else? Intellectual modesty is an over-correction for Dunning-Kruger. You can expect to know better than even experts (gasp!) if you 1) pay attention to predictive track records, and 2) pay attention to the dynamics of a system: whether you would expect a genuine improvement to actually be adopted.

Like most of EY's writing, could do with some trimming (I think the last couple of chapters could have been cut), but it's a huge improvement on the usual 1600 pages.