A review by inquiry_from_an_anti_library
Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie

adventurous challenging hopeful informative inspiring reflective tense fast-paced

5.0

Is This An Overview?
Science is a collaborative effort to correct errors and improve the knowledge that is available.  As a collaborative effort, as a social field, research needs to be shared and peers persuaded.  Scientists are human, with human biases.  They choose how to approach their research, how to interpret their own and competing research, whether to publish, and how to persuade others.  Each choice carries biases that can lead, and have led, to the spread of misinformation. 

Scientists are trusted by the public and trust one another, but the system allows those who exploit it to wield power.  The scientific community has perverse incentives: the untrustworthy, who are willing to compromise the research process, are more likely to be promoted than the trustworthy who seek to improve the knowledge base.  These incentives reduce the reliability of research. 

Research is shared through publication, but what journals want to publish is not necessarily what needs to be published.  What often gets published are the exciting results: exaggerated, misleading, and often wrong.  Research that challenges or replicates other work is unwelcome in journals, even though it is needed to establish the limitations and legitimacy of claims.  Failing to publish seemingly unimportant research distorts the scientific record and enables harmful outcomes.  Uninformative research costs time, effort, and money for both those who produce it and those who rely on it.

The practice of science has been corrupted.  Rather than correcting errors, it enables misinformation to spread.  Science needs to change how it is practiced to restore trust in the community.  This book explains how science has been exploited and offers methods to improve its practice.
 
Is Science An Ideal Field?
Science depends on a communal process of finding errors and faults to determine whether claims are reliable and important.  A communal process requires persuading peers.  But by focusing too much on persuasion, scientists lose track of the purpose of science, which is to get closer to the truth.  Persuading peers invites various human biases that reduce the validity of the scientific process. 

Skepticism is supposed to be the basic norm of science, yet incompetence, delusion, lies, and self-deception persist.  The very ideal scientists hold about science, that of an error-correcting system, has given cover to research shaped by human biases while claiming to be objective and unbiased. 
 
Which Research Is Published?
Scientific studies need to be replicated to show that the results did not arise from chance, fraud, or equipment error.  Replication is meant to weed out false findings, bad experiments, and inappropriate data.  But replication is not taken seriously: studies are rarely repeated, claims are accepted without checking whether they replicate, and there are barely any attempts to redo prior work.  This has created a replication crisis in various fields, and when replication is attempted, many results fail.  Unreplicated results are nonetheless used to make policy and health choices, with immediate negative consequences.

News outlets and journals focus on new and exciting research, which tends to mean positive results with only a few null results.  Positive results are those in which a discovery is made; null results are those in which no discovery is made.  Repeat studies are usually rejected by publications, even when they show a different or contradictory result from the original.  Scientists choose to publish positive results while shelving null results.  Because positive, flashy, novel, newsworthy results are rewarded far more, scientists are incentivized to produce them and to convince others that their research has those qualities, creating publication bias.  Failing to publish null results exaggerates the apparent size and importance of effects, creating misleading beliefs.  Publication bias distorts the information used to make decisions, so decisions rest on partial information and are liable to create problems. 
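As a toy illustration of that last point (my own sketch, not an example from the book, with made-up parameters): if many small studies of the same modest effect are run but only the "significant" ones get published, the published average ends up much larger than the true effect.

```python
# Minimal sketch (illustrative only): simulate many small studies of a true effect,
# "publish" only the statistically significant ones, and compare averages.
import random, statistics

random.seed(0)
TRUE_EFFECT = 0.2   # assumed true standardized effect size
N = 30              # participants per group in each study

def run_study():
    control = [random.gauss(0, 1) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    diff = statistics.mean(treated) - statistics.mean(control)
    # crude significance check: difference larger than ~2 standard errors
    se = (statistics.stdev(control) ** 2 / N + statistics.stdev(treated) ** 2 / N) ** 0.5
    return diff, abs(diff) > 2 * se

results = [run_study() for _ in range(10_000)]
all_effects = [d for d, _ in results]
published = [d for d, significant in results if significant]

print("true effect:                      ", TRUE_EFFECT)
print("average effect, all studies:      ", round(statistics.mean(all_effects), 3))
print("average effect, 'published' only: ", round(statistics.mean(published), 3))
```

The unfiltered average sits near the true effect, while the "published" average is inflated several-fold, which is the distortion the review describes.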

To get hired and promoted, scientists need papers published in the right journals.  Universities are ranked by the papers they produce, which breeds a publish-or-perish mentality.  With limited time for publishing alongside their other responsibilities, scientists bypass scientific standards; quantity matters more than quality.  They can split one piece of research into many papers, padding their CVs.  Readers who see only one or a few of those papers may think there is more evidence for a result than there actually is.  A low citation count can mark an underappreciated work, but scientists are also willing to publish useless work to secure jobs and grants rather than to advance science. 

Hype can be very harmful in science.  Many press releases recommend behavior changes that the underlying research cannot support.  Press releases matter because time-pressed journalists closely copy their language, a practice known as churnalism.  The problem with hyped science is that while the hyped research gets the attention, refutations barely catch up.  The scientific system rewards a lack of caution, restraint, and skepticism.

Peer review is not enough to prevent flawed ideas from being published, and peer reviewers can block alternative conclusions from being published.  The h-index summarizes a researcher's citations across their papers, but the measure can be gamed.  Reviewers have made acceptance conditional on papers citing the reviewer's own work, and researchers have even formed citation cartels, with editors colluding to cite one another. 
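For concreteness (my own sketch, not from the book): the h-index is the largest number h such that a researcher has h papers each cited at least h times, which is why padding citation counts across many papers, for example through coerced or cartel citations, can inflate it.

```python
# Minimal sketch (illustrative only): compute an h-index from per-paper citation counts.
def h_index(citations):
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:   # this paper is cited at least as often as its rank
            h = rank
        else:
            break
    return h

# One blockbuster paper barely moves the index; many moderately cited papers raise it.
print(h_index([100]))                               # 1
print(h_index([12, 11, 10, 9, 8, 7, 6, 5, 4, 3]))   # 6
```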

There are even problems with reproducibility: results cannot be reproduced from the same data, often because the methods were not reported clearly enough or steps were left out of the report. 

Papers that have been shown to be wrong are retracted.  They remain in the literature, marked as retracted to indicate that they are no longer considered legitimate. 
 
How Can Science Go Wrong?
Not even highly respected scientific institutions are above shielding the activities of fraudsters to protect their own reputations.  Fraud works by exploiting trust, and there will always be those who put fame and success above other concerns.  Fraud does disproportionate damage to science because investigating suspect findings takes researchers away from their own work.  It also wastes money: through outright theft, through funds spent chasing results that were never real, and through researchers spending their budgets trying to replicate fraudulent research.  Fraud damages the reputation of scientists. 

Relatively few papers are retracted, for various reasons that include fraud.  Yet in anonymous surveys, a sizeable portion of scientists admit to having committed fraud, and the portion grows when they are asked whether they know of other researchers committing fraud.  The actual numbers are likely higher still, because not everyone will admit to fraud even anonymously. 

Researchers can put fake numbers into their papers to make the work appear more attractive than it is, which means everyone who reads and uses the paper is working from wrong information.  Measurements are sometimes recorded incorrectly by accident, which is measurement error, and real numbers are expected to be noisy.  But made-up numbers do not have the properties of genuinely collected data. 
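A small illustration of that last point (my own sketch, not from the book): genuinely sampled counts bounce around as much as probability theory predicts, while invented numbers that all hover near the "expected" value are suspiciously smooth.

```python
# Minimal sketch (illustrative only): real coin-flip counts vs. too-tidy invented counts.
import random, statistics

random.seed(2)
# 20 genuine counts of heads in 100 fair coin flips (binomial noise predicts std dev ~5)
real = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(20)]
# 20 invented counts that someone "eyeballed" around 50
faked = [50, 49, 51, 50, 50, 49, 51, 50, 50, 51, 49, 50, 50, 51, 49, 50, 50, 49, 51, 50]

print("real  std dev:", round(statistics.stdev(real), 2))    # roughly 5
print("faked std dev:", round(statistics.stdev(faked), 2))   # under 1, far too tidy
```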

There is also sampling error, in which a sample leads to wrong conclusions about the population it was drawn from.  Different samples have different averages, and by chance alone those averages can differ substantially. 
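To see this concretely (my own sketch, with made-up numbers): samples drawn from the very same population still produce noticeably different averages.

```python
# Minimal sketch (illustrative only): five samples of 25 from one population (mean 100, sd 15).
import random, statistics

random.seed(3)
population_mean, population_sd = 100, 15
sample_means = [
    statistics.mean(random.gauss(population_mean, population_sd) for _ in range(25))
    for _ in range(5)
]
print([round(m, 1) for m in sample_means])  # five different averages, all from the same population
```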

A p-value indicates how likely a result at least as extreme as the one observed would be if the effect being tested did not exist.  It does not indicate whether the result is true or important.  Statistical significance is conventionally set at a p-value of 0.05, an arbitrary threshold, and significance alone does not make a result worthwhile.  Scientists can also p-hack: run a plethora of tests until one comes out statistically significant, or retroactively invent a hypothesis after finding a result they like.  Both versions of p-hacking invalidate the p-value, because running many tests increases the likelihood of getting a significant result by random chance.  When the non-significant results are not shared, people are convinced by what are effectively false results.  More opportunities mean more chances for false positives.  P-hacking is a way to make noise appear valuable. 
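A toy demonstration of the multiple-testing version of p-hacking (my own sketch, not the book's, with invented parameters): if a study measures 20 different outcomes and none of them has any real effect, there is still a good chance that at least one comes out "significant."

```python
# Minimal sketch (illustrative only): how often pure noise yields a "significant" result
# when 20 outcomes are tested per study at the p < 0.05 threshold.
import random

random.seed(1)

def fake_study(n_outcomes=20, n_per_group=30):
    """Return True if any outcome looks 'significant', using a crude z-test."""
    for _ in range(n_outcomes):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]  # same distribution: no true effect
        diff = sum(b) / n_per_group - sum(a) / n_per_group
        se = (2 / n_per_group) ** 0.5      # standard error, variance known to be 1 in both groups
        if abs(diff) > 1.96 * se:          # roughly p < 0.05
            return True
    return False

studies = 2_000
false_hits = sum(fake_study() for _ in range(studies))
print(f"Studies with at least one 'significant' result: {false_hits / studies:.0%}")
# Expect roughly 1 - 0.95**20, about 64%, even though no effect exists anywhere.
```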
 
How To Improve Science?
What gets measured gets focused on, creating conditions in which the metric becomes meaningless and overrides genuine scientific progress.  Removing arbitrary measures will not necessarily resolve bad research practices, for doing so might introduce other sources of subjectivity. 

Pre-registration holds researchers accountable to what they planned to do.  If a paper is guaranteed publication regardless of its results, as long as the researchers follow their pre-registered plan, many of the incentives for bias and fraud disappear. 
 
Caveats?
The problems listed here are common throughout life; the author traces them to scientists as their source.  This book is critical of how science operates, because only by knowing where science can go wrong can it be corrected.

The author points to the lack of publications of replications and of null results, those from which no discovery is made.  Both types are needed in science, but they too can be corrupted.