tombuoni's review of:
Science Fictions
by Stuart Ritchie
Excellent, eye-opening book about modern science’s “replication crisis” - the discovery that a large percentage of published studies cannot be replicated - and about the systematic factors that can undermine the scientific method and the search for truth. A great book for learning about the various flaws that can creep into science (and any discipline), including fraud, negligence, bias, and hype. Clearly organized, with tons of examples and references for further reading - including a guide on how to read a scientific paper, built around the following top-ten list of questions:
1. Is everything above board? (Trustworthy source)
2. How transparent is it?
3. Is the study well-designed?
4. How big is the sample?
5. How big is the effect?
6. Are the inferences appropriate?
7. Is there bias?
8. How plausible is it, really?
9. Has it been replicated?
10. What do other scientists think about it?
Here are a few other interesting selections:
“Discovering the serious problems with the way we do science will be disconcerting. How many intriguing results that you've read about in the news and popular science books, or seen in documentaries, discoveries you've been excited enough to share with friends, or that made you rethink how the world works - are based on weak research that can't be replicated? How many times has your doctor prescribed you a drug or other treatment that rests on flawed evidence? How many times have you changed your diet, your purchasing habits, or some other aspect of your lifestyle on the basis of a scientific study, only for the evidence to be completely overturned by a new study a few months later? How many times have politicians made laws or policies that directly impact people's lives, citing science that won't stand up to scrutiny? In each case, the answer is: a lot more than you'd like to think.”
“In general, though, the effect on psychology has been devastating. This wasn't just a case of fluffy, flashy research like priming and power posing being debunked: a great deal of far more 'serious' psychological research (like the Stanford Prison Experiment, and much else besides) was also thrown into doubt... Studies that failed to replicate continued to be routinely cited both by scientists and other writers: entire lines of research, and bestselling popular books, were being built on their foundation. 'Crisis' seems to be an apt description.”
“There are countless other examples: almost every case I'll describe in this book involves a scientific 'finding' that, upon closer scrutiny, turned out to be either less solid than it seemed or to be completely untrue. But more worryingly still, these examples are drawn just from the studies that have received that all-important scrutiny. These are just the ones we know about. How many other results, we must ask ourselves, would prove unreplicable if anyone happened to make the attempt?”
“For a scientific finding to be worth taking seriously, it can't be something that occurred because of random chance, or a glitch in the equipment, or because the scientist was cheating or dissembling. It has to have really happened. And if it did then in principle I should be able to go out and find broadly the same results as yours. In many ways, that's the essence of science, and something that sets it apart from other ways of knowing about the world: if it won't replicate, then it's hard to describe what you've done as scientific at all.”
“In 1942, Merton set out four scientific values, now known as the Mertonian Norms. None of them have snappy names, but all of them are good aspirations for scientists. First, universalism: scientific knowledge is scientific knowledge, no matter who comes up with it… Second, and relatedly, disinterestedness: scientists aren't in it for the money, for political or ideological reasons, or to enhance their own ego or reputation (or the reputation of their university, country, or anything else)… The third is communality: scientists should share knowledge with each other… Lastly, there's organised scepticism: nothing is sacred, and a scientific claim should never be accepted at face value.”
“The typical paper starts with an Introduction, where you summarise what's known on the topic and what your study adds. There follows a Method section, where you describe exactly what you did - in enough detail so that anyone could, in theory, run exactly the same experiment again. You'll then move on to a Results section, where you present the numbers, tables, graphs and statistical analyses that document your findings, and you'll end with a Discussion section where you speculate wildly - er, I mean, provide thoughtful, informed consideration - about what it all means. You'll top the whole thing with an Abstract: a brief statement, usually of around 150 words, that summarises the whole study and its results. The Abstract is always available for anyone to read, even if the full paper is behind the journal's subscription paywall, so you'll want to use it to make your results sound compelling. Papers come in all lengths and sizes, and sometimes mix up the above order, but in general your paper will end up along these lines.”
“There's one field of research that consistently generates more hype, inspires more media interest and suffers more from the deficiencies outlined in this book than any other. It is, of course, nutrition. The media has a ravenous appetite for its supposed findings: The 'Scary New Science That Shows Milk is Bad For You'; 'Killer Full English: Bacon Ups Cancer Risk'; 'New Study Finds Eggs Will Break Your Heart'. Given the sheer volume of coverage, and the number of conflicting assertions about how we should change our diets, little wonder the public are confused about what they should be eating. After years of exaggerated findings the public now lacks confidence and is sceptical of the field's research. Nutritional science, like psychology, has been going through its own replication crisis.”
“Rather like psychology, nutritional epidemiology is hard. An incredibly complex physiological and mental machinery is involved in the way we process food and decide what to eat; observational data are subject to enormous noise and the vagaries of human memory; randomised trials can be tripped up by the complexities of their own administration. Given that context, the sheer amount of media interest in nutritional research is particularly unfortunate. Perhaps the very scientific questions that the public wants to have answered the most - what to eat, how to educate children, how to talk to potential employers, and so on - are the ones where the science is the murkiest, most difficult, and most self-contradictory. All the more reason that scientists in those fields need to take more seriously the task of sensibly communicating their findings to the public.”
“It’s not just that the system fails to deal with all the kinds of malpractice we've discussed. In fact, the way academic research is currently set up incentivises these problems, encouraging researchers to obsess about prestige, fame, funding and reputation at the expense of rigorous, reliable results.”
“Goodhart's Law: 'when a measure becomes the target, it ceases to be a good measure.'… once you begin to chase the numbers themselves rather than the principles that they stand for - in this case, the principle of finding research that makes a big contribution to our knowledge - you've completely lost your way.”
“To paraphrase the biologist Ottoline Leyser, the point of breaking ground is to begin to build something; if all you do is groundbreaking, you end up with a lot of holes in the ground but no buildings. How do we reverse the prioritisation of novel results over solid ones? How do we combat publication bias, ensuring that all results get published, no matter whether they're groundbreaking or null?”
“Somewhat scandalously, the majority of science frames exploratory results as though they were confirmatory; as though they were the results of tests planned before the study started.”
“The usual reaction I received when I told my friends about this book was a broader concern regarding trust in science: 'Isn't it irresponsible to write something like that? Won't you encourage a free-for-all, where people use your arguments to justify their disbelief in evolution, or in the safety of vaccines, or in man-made global warming? After all, if mainstream science is so biased, and its results so hyped, why should the average person believe what scientists are telling them?'”
“It's with more science that we can discover where our research has gone wrong and work out how to fix it. The ideals of the scientific process aren’t the problem: the problem is the betrayal of those ideals by the way we do research in practice. If we can only begin to align the practice with the values, we can regain any wavering trust and stand back to marvel at all those wondrous discoveries with a clear conscience.”
“The fundamental lesson is to be humbler about what we do and do not know. At first this might appear to be antithetical to the idea of scientific research, which is surely about uncovering new facts about the world and always adding to our knowledge. But if you think about it for longer, it turns out to be the very essence of science itself.”