Welcome to another Still in the Storm post!
A Quick Announcement
To ensure that I’m putting the most relevant and valuable information in the most accessible format(s), I am adjusting the newsletter on the fly. I’ve come to realize that video hosting on Substack is perhaps not ready for prime time, and not the preferred option for video streaming either.
To that end, and since Substack was created for writers, I am going to try a few articles to see how that goes. If I decide to stick with that format I will still try to put out regular videos on my Rumble and Odysee channels. As well, keep an eye out for podcast episodes for paid subscribers.
To thank you for your support, I’m offering a special discount on new subscriptions for a limited time. This is a FOREVER discount, not just for one month or year. The offer is further down in the post, so keep reading.
Now let’s get on with it!
The Truth About the Current State of Scientific Research
The current state of science is dire. Over the last 40 years or so, we have effectively witnessed the death of science. It is a hard thing to say, but what is currently put forth in most labs bears little resemblance to true science.
In the next three articles, I am going to share the truth about the current state of science. There are three main issues that I am going to cover. They are:
The fact that most published research findings are more likely to be false than not.
The reproducibility crisis, where upwards of 90% of scientific papers cannot be replicated.
The broken peer review process.
Are Most Published Research Findings False?
Approximately 18 years ago, John Ioannidis published his now-landmark essay entitled “Why Most Published Research Findings Are False”. In it he outlined a number of factors that, when present, decrease the probability that a given finding is true. The reality is that in most cases many of these factors are present, hence Ioannidis’ conclusion that most findings are false.
This is where I will begin.
Buckle up! We are going for quite a ride…
Ioannidis Backed Up His Claims
There is increasing concern that most current published research findings are false.
Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias.
The above quotes are from the summary section of Ioannidis’ essay. He wasted no time in going right at the heart of the matter. According to him there are simulations that show it is more likely for a given research claim to be false than true. It is even possible that many are just measures of the prevailing bias.
In the essay, he clearly backed up these seemingly audacious claims. Many of the things he highlights I have observed myself over my 20-year career as a scientist.
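For readers who want to see the math, the heart of the essay is a simple formula for what Ioannidis calls the positive predictive value (PPV): the probability that a claimed finding is actually true, given the pre-study odds that it is true (R), the significance threshold (α), and the study’s power (1 − β). Here is a minimal sketch in Python; the formula is Ioannidis’, while the variable names and example numbers are my own.

```python
# A minimal sketch of the positive predictive value (PPV) formula from
# Ioannidis (2005). The formula is his; variable names and examples are mine.
#   R     : pre-study odds that a probed relationship is true
#   alpha : type I error rate (the significance threshold, typically 0.05)
#   beta  : type II error rate (power = 1 - beta)

def ppv(R: float, alpha: float = 0.05, beta: float = 0.2) -> float:
    """Probability that a 'statistically significant' finding is true."""
    return (1 - beta) * R / (R + alpha - beta * R)

# An exploratory field where 1 in 10 tested hypotheses is actually true:
print(ppv(R=1/9))            # ~0.64, and that is with a decent 80% power
# The same field with badly underpowered studies (20% power):
print(ppv(R=1/9, beta=0.8))  # ~0.31, i.e. most "findings" are false
```

Once the pre-study odds drop or the power shrinks, the PPV falls below 50%, which is all “most published research findings are false” really means.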
It’s All About the p-value or At Least That’s What the Scientists Think
The first thing that has a huge impact on the probability of a finding being true is statistical significance and the overuse of what is known as the p-value. Here is how the New Oxford American Dictionary defines p-value:
the probability that a particular statistical measure, such as the mean or standard deviation, of an assumed probability distribution will be greater than or equal to observed results.
Scientists commonly use a p-value of less than 0.05 to establish statistical significance. They claim that if a given set of data has such a p-value then the finding is real and likely to be true. Ioannidis had a different opinion on the matter.
Several methodologists have pointed out that the high rate of nonreplication of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05.
Here he starts by bringing up the notion of non-replication, which we will hit on in an upcoming post. The rest of the quote reinforces his view that relying on formal statistical significance as the sole basis for assessing a research finding is a poor strategy.
Many scientists have relied, and still rely, heavily on statistical significance, which leads to most of their findings being false. And it is not just Ioannidis who has made such a claim regarding the overuse of the p-value.
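To make the 0.05 threshold concrete, here is a small simulation of my own (it is not from any of the papers discussed here). Feed a t-test nothing but noise, over and over, and about 5% of the comparisons still come back “significant”.

```python
# My own illustration: run t-tests on pure noise and count how often
# p < 0.05. By construction there is no real effect anywhere, yet roughly
# 5% of comparisons come out "statistically significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments, n_per_group = 10_000, 10

false_positives = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_group)  # group A: noise only
    b = rng.normal(0, 1, n_per_group)  # group B: same distribution, no effect
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(false_positives / n_experiments)  # ~0.05, i.e. ~500 bogus "discoveries"
```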
Enter David Colquhoun
In 2014, David Colquhoun, a pharmacologist and statistician at UCL, published a paper in Royal Society Open Science. As you will see, he very clearly backs up Ioannidis’ claims, and he starts off with a doozy.
If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30% of the time. If, as is often the case, experiments are underpowered, you will be wrong most of the time.
For the first part to be true, you would need an ideal experimental scenario. The sad truth is that the great majority of the time that is just not the case; most experiments end up underpowered, which in turn means that if you rely on p = 0.05 you will be wrong most of the time.
I’m sure you are wondering what is meant by the power of an experiment, or how one can be underpowered. I will get to that shortly.
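In the meantime, here is the simple arithmetic behind that “at least 30%” figure, using illustrative numbers along the lines of Colquhoun’s own worked example: 1,000 tested hypotheses, only 10% of them true, a respectable 80% power, and the usual p < 0.05 cutoff.

```python
# The arithmetic behind Colquhoun's "at least 30%" figure, with
# illustrative numbers along the lines of his own worked example.
tests      = 1000
prevalence = 0.10   # fraction of tested hypotheses that are actually true
power      = 0.80   # probability of detecting a real effect
alpha      = 0.05   # significance threshold

true_positives  = tests * prevalence * power        # 80 real discoveries
false_positives = tests * (1 - prevalence) * alpha  # 45 bogus ones

fdr = false_positives / (true_positives + false_positives)
print(fdr)  # 0.36, i.e. 36% of "discoveries" are false even here
```

Even in that rather generous scenario, more than a third of “discoveries” are bogus. Shrink the power and it gets far worse, as we will see below.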
Before returning to Ioannidis I want to share one more quote from Colquhoun’s paper.
You make a fool of yourself if you declare that you have discovered something, when all you are observing is random chance. From this point of view, what matters is the probability that, when you find that a result is ‘statistically significant’, there is actually a real effect. If you find a ‘significant’ result when there is nothing but chance at play, your result is a false positive, and the chance of getting a false positive is often alarmingly high.
Hard not to agree with Colquhoun there. He really drives home the problem with leaning on the p-value when you have an underpowered experiment.
What’s this About Bias?
Next up, Ioannidis sets his sights on bias. Here is how he defines it:
First, let us define bias as the combination of various design, data, analysis, and presentation factors that tend to produce research findings when they should not be produced.
Bias can entail manipulation in the analysis or reporting of findings. Selective or distorted reporting is a typical form of such bias.
Note that he mentions manipulation of analysis or reporting as one way bias creeps in. This, in my opinion, is a huge problem. Many scientists will do what they refer to as “massaging” the data, eliminating any outliers that could compromise their desired conclusion.
He goes on to say that, “with increasing bias, the chances that a research finding is true diminish considerably.” Therefore, bias is a considerable factor in determining the veracity of research findings. As well, bias is pervasive in scientific research today and takes many forms.
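Ioannidis actually put a number on this. His essay extends the PPV formula from earlier with a bias term u: the proportion of analyses that would not have produced a “finding” but end up reported as one anyway. The sketch below uses his formula with example numbers of my own.

```python
# Extending the earlier PPV sketch with Ioannidis' bias term u, the
# proportion of analyses that would not have been "findings" but get
# reported as findings anyway. The formula is from the essay.

def ppv_with_bias(R: float, u: float,
                  alpha: float = 0.05, beta: float = 0.2) -> float:
    num = (1 - beta) * R + u * beta * R
    den = R + alpha - beta * R + u - u * alpha + u * beta * R
    return num / den

# The same exploratory field as before (R = 1/9, 80% power):
print(ppv_with_bias(R=1/9, u=0.0))  # ~0.64 with no bias at all
print(ppv_with_bias(R=1/9, u=0.3))  # ~0.22, i.e. modest bias flips the odds
```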
The Corollaries
Based on the preceding, Ioannidis goes on to deduce several interesting corollaries, each describing a condition that, when present, decreases the probability that a research finding is true. They are the following:
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
I want to make a few observations based on these corollaries. The first one refers to sample size, that is, the number of replicate samples tested.
One thing I’ve noted in a number of covid articles is that the groups of patients whose samples are analyzed are quite small. This is particularly surprising considering that it was supposed to be a global pandemic, meaning there should have been no shortage of patients from whom to acquire samples.
Sample size also happens to be one factor that plays into the power of the experiment. Another is the effect size. Corollary 2 highlights this.
If you have a small sample size and are observing a small effect, you have an underpowered experiment; if you then use p = 0.05 to establish whether you have a valid finding, Colquhoun would say you will be wrong most of the time.
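Here is a quick simulation of my own (the numbers are made up but typical) showing just how badly that combination performs: a genuine but small effect of 0.3 standard deviations, studied with only 10 subjects per group.

```python
# My own simulation of an underpowered design: a real effect of 0.3
# standard deviations, studied with just 10 subjects per group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_per_group, effect = 10_000, 10, 0.3

detected = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)
    treated = rng.normal(effect, 1.0, n_per_group)  # genuine, small effect
    _, p = stats.ttest_ind(control, treated)
    if p < 0.05:
        detected += 1

print(detected / n_experiments)  # ~0.10: the effect is missed ~90% of the time
```

Roughly 10% power. Plug that into the earlier false discovery arithmetic (10% of hypotheses true, p < 0.05) and over 80% of “significant” results are false positives. That is Colquhoun’s “wrong most of the time” in action.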
Over my career I have also observed a huge amount of flexibility afforded to scientists with respect to experimental design and analytical methods. More often than not, two lab mates conducting the same experiment will get different results. Findings that depend on who ran the experiment are not valid findings.
Corollary 5 should surprise no one. There is a massive and ever-growing number of conflicts of interest in scientific research today.
Another important aspect is funding. I’ve known scientists to appeal directly to those holding the money, asking to be told what to do, or even what to find, in their experiments. Needless to say, this isn’t science, and it clearly has an impact on whether a finding is true.
The last corollary is particularly applicable to the present situation with covid. Has there ever been a hotter field, with new data constantly being released?
We are now even seeing much of it come out as preprints, papers released without going through the peer review process at all. Not that there isn’t a lot wrong with peer review, but we will hit on that in an upcoming article.
The point is that the data is coming out so fast it isn’t even being reviewed, yet it is still accepted as true by many in the scientific community. I’m sure you can appreciate the problem with this.
Here’s a little more from Ioannidis that further drives home the problem with data coming out of a hot scientific field, like covid.
This may explain why we occasionally see major excitement followed rapidly by severe disappointments in fields that draw wide attention. With many teams working on the same field and with massive experimental data being produced, timing is of the essence in beating competition. Thus, each team may prioritize on pursuing and disseminating its most impressive “positive” results.
Ioannidis wraps up his essay by discussing how to fix these issues. Unfortunately, many have not been quick to heed his advice. There are a few more quotes that I’d be remiss if I didn’t mention.
A major problem is that it is impossible to know with 100% certainty what the truth is in any research question. In this regard, the pure “gold” standard is unattainable.
Despite what “the experts” would like you to believe, the fact is that science does not provide absolute truth. You can never know anything with 100% certainty. It is never, ever settled and in fact is always changing.
The Editor of the Lancet Drives it Home
Richard Horton is the editor-in-chief of The Lancet, a highly regarded medical journal. In 2015, ten years after Ioannidis published his essay, Horton put out a powerful commentary. It dealt with something he overheard at a symposium “on reproducibility and reliability of biomedical research” held by none other than the Wellcome Trust.
If you aren’t familiar with the Wellcome Trust please look them up. This group was particularly instrumental in pushing many of the tyrannical measures during the early days of covid.
Back to Horton’s piece. He starts by quoting the remark he overheard which I think very simply sums up the entire situation.
“A lot of what is published is incorrect.”
Short, but very much to the point. Let it not fall on deaf ears that this was likely said by someone with a lot of influence in the world of science. I take it as a frank admission, despite the fact that many scientists want to insist the system is not flawed.
Horton continues on by providing his own thoughts which clearly back up Ioannidis’ claims. He further seems to agree with the reasoning used as well.
The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.
There is not much to disagree with there, but Horton continues.
In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data.
This has become incredibly common, and I saw it a lot throughout my career. It is all about generating data to fit a preconceived hypothesis, or deciding on the story a paper will tell before all the data has been generated.
This clearly is not science, and operating in this manner will undoubtedly lead to invalid findings being put forth as true.
Horton finishes by asking whether these poor practices can be fixed, and then he hits on one of the bigger parts of the problem.
Part of the problem is that no-one is incentivised to be right. Instead, scientists are incentivised to be productive and innovative.
This is as true now as it was back then, and it will likely continue to be, since no one has any incentive to work toward a solution. The business of science has ruined true science. Science has become little more than a machine for innovation.
The CDC Said What?
I want to wrap this up with what I believe is a rather frank statement by a group from the CDC that published a response to Ioannidis’ essay in 2007.
Essentially, they claimed to demonstrate that replication of a study increases the likelihood that a given finding is true, and that this would, in effect, resolve Ioannidis’ issue with statistical significance.
They start out by admitting that what Ioannidis claimed was likely true.
He showed elegantly that most claimed research findings are false.
Just to be crystal clear here, this is a group from the CDC admitting that most published research findings are false. Read that a few times to make sure it sinks in.
Then they go on to talk about replication and how it could potentially solve this problem.
As part of the scientific enterprise, we know that replication—the performance of another study statistically confirming the same hypothesis—is the cornerstone of science and replication of findings is very important before any causal inference can be drawn.
Again, here is the CDC (the very same CDC that pushed draconian measures upon us during covid) stating that replication is crucial before a finding can be accepted as true.
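Their point is easy to see with a back-of-the-envelope Bayesian sketch. To be clear, this is my own simplification, not the exact model from their paper: each independent study that comes back positive multiplies the odds that the finding is real by (power / α).

```python
# A back-of-the-envelope Bayesian sketch (my simplification, not the exact
# model from the Moonesinghe et al. paper): each independent "significant"
# result multiplies the odds that a finding is true by (power / alpha).

def ppv_after_replications(R: float, k: int,
                           alpha: float = 0.05, power: float = 0.8) -> float:
    """PPV after k independent studies all come back 'significant'."""
    odds = R * (power / alpha) ** k
    return odds / (1 + odds)

R = 1/9  # the same 1-in-10 exploratory field as in the earlier sketches
for k in (1, 2, 3):
    print(k, round(ppv_after_replications(R, k), 3))
# 1 0.64  -- a single positive study is little better than a coin flip
# 2 0.966 -- one successful replication changes the picture dramatically
# 3 0.998 -- two replications leave very little room for doubt
```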
Well, as you will see in the next article, we have a slight problem with replication. So much so, in fact, that it has been dubbed a crisis: “the reproducibility crisis”.
Just out of curiosity, did the CDC use studies that had been replicated to justify any of the measures they put forward? I didn’t think so.
Discernment is Key
I hope you can appreciate from this why it is so important that we use a lot of discernment when evaluating any new scientific claims, especially those that have significant implications for our way of life.
Stay tuned for next time when we dig deep into The Replication Crisis. You don’t want to miss it.
I truly hope you find your still in this storm that is raging all around us. And, of course, don’t stop questioning the science! It is never settled.
Thank you and God Bless.
References:
Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLOS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124
Colquhoun, D. (2014). An investigation of the false discovery rate and the misinterpretation of p-values. Royal Society Open Science, 1(3), 140216. https://doi.org/10.1098/rsos.140216
Horton, R. (2015). Offline: What is medicine’s 5 sigma? The Lancet, 385(9976), 1380. https://doi.org/10.1016/S0140-6736(15)60696-1
Moonesinghe, R., Khoury, M. J., & Janssens, A. C. J. W. (2007). Most Published Research Findings Are False—But a Little Replication Goes a Long Way. PLOS Medicine, 4(2), e28. https://doi.org/10.1371/journal.pmed.0040028
Easter Special
For a limited time, get 20% off a subscription to Still in the Storm.
Note: this is a FOREVER discount.
Just click the button below👇