It is commonly believed that the line between knowledge and ignorance is clear and that scientific research systematically expands knowledge by reducing ignorance. However, what we think we know is a mixture of facts and beliefs. Here I use the term fact to describe something that is based on empirical evidence and has been convincingly shown to be true. A belief, on the other hand, may appear to be true but represents a subjective interpretation of something, an opinion. While the uncertainty of a fact is known or can be objectively calculated, the veracity or relevance of a belief is unknown. The uncertainty of a belief cannot be objectively quantified because it is not based on evidence but on intuition, tradition, religion or even superstition. One belief may be right, another one may be wrong. The problem is that without evidence, no one can tell which belief is right and which is wrong.

In mass media, the subjective opinions of experts are often presented as the truth. However, as stated by Richard Feynman, science is the belief in the ignorance of experts, and from a scientific viewpoint it would be ludicrous to consider consensus opinions among experts to be equivalent to objective evidence.

For example, the true treatment effect (or efficacy) of a drug may be reliably estimated, and the uncertainty of the efficacy estimate calculated, in a well-performed randomised clinical trial. However, not all treatments in practical use that are generally believed to be effective have been evaluated in clinical trials. When their efficacy eventually is properly evaluated, the results are often disappointing.

Empirical observations are, however, uncertain and can support different conclusions. It is therefore necessary to quantify the uncertainty in an objective and reproducible manner. This is why statistics and statisticians play a key role in medical research, but this is not always appreciated by the physicians engaged in medical research. Stephen Senn describes this, somewhat ironically, with the statement that “Statistics is a subject which most statisticians find difficult but in which nearly all physicians are expert”. The statement by Ronald Fisher that “To consult the statistician after an experiment is finished is often merely to ask him to conduct a post mortem examination. He can perhaps say what the experiment died of” further demonstrates the importance of insights into statistics.


In common language, the word statistics refers to numbers and tables describing a particular phenomenon, and this is also the meaning of the word statistics in the plural. However, statistics has another meaning in the singular. Like other -ics words (mathematics, economics, physics, etc.) it stands for a scientific discipline. Statistics or statistical science has also been described as the science of uncertainty. Broadly speaking, it can be divided into three major areas: description, inference, and prediction.

Statistical description focuses on principles of how to describe empirical phenomena graphically and numerically with as little loss of information as possible.
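As a minimal illustration (all numbers below are invented for the purpose), the numerical side of statistical description can be as simple as summarising central tendency and spread:

```python
import statistics

# Hypothetical measurements (e.g. systolic blood pressure in mmHg)
data = [118, 125, 132, 121, 140, 128, 119, 135]

# Numerical description: central tendency and spread
mean = statistics.mean(data)      # arithmetic mean
median = statistics.median(data)  # robust to outliers
sd = statistics.stdev(data)       # sample standard deviation

print(f"mean={mean:.1f}, median={median:.1f}, sd={sd:.1f}")
```

The choice of summary matters: the median loses less information than the mean when the data are skewed, which is part of what "as little loss of information as possible" entails.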

Statistical inference is used to clarify the relationship between observations in a sample and the population which these observations represent. In medical research, the sample may be the patients that have participated in a randomised trial of a specific drug and the population all future patients that could be treated with this drug. When the findings in the sample are interpreted, sampling variation needs to be evaluated because repeated samples from the same population are not necessarily identical. This implies that the findings in any single sample are uncertain, and this uncertainty can be quantified using statistical inference. The uncertainty is commonly presented in terms of p-values and confidence intervals.
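A sketch of how this uncertainty can be quantified: the simulation below (the population mean and spread are assumptions chosen for illustration) draws one sample, as a single trial would, and computes an approximate 95% confidence interval for the mean:

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical population of treatment effects: true mean 5.0, SD 2.0
population_mean, population_sd = 5.0, 2.0

# Draw one sample of 100 observations, as a single trial would
sample = [random.gauss(population_mean, population_sd) for _ in range(100)]

m = statistics.mean(sample)
se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error

# Approximate 95% confidence interval (normal approximation)
ci = (m - 1.96 * se, m + 1.96 * se)
print(f"sample mean {m:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

Repeating the sampling step would give a different mean and a different interval each time; that variation is exactly the sampling uncertainty the interval is meant to express.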

Statistical prediction, sometimes described as machine learning, can also be seen as a form of inference, but it is based on data (pattern recognition) instead of reasoning about cause and effect. The uncertainty of a prediction, the predictive accuracy, is usually presented in terms of sensitivity and specificity.
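With invented data for illustration, sensitivity and specificity follow directly from the counts of true and false positives and negatives:

```python
# Hypothetical predictions from a diagnostic classifier vs. true status
# 1 = diseased, 0 = healthy
truth     = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)

sensitivity = tp / (tp + fn)  # proportion of diseased correctly identified
specificity = tn / (tn + fp)  # proportion of healthy correctly identified

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Here the classifier misses one diseased case and falsely flags one healthy case, giving a sensitivity of 0.75 and a specificity of about 0.83.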

Conflicts of interest

This section presents random comments, suggestions, and tips on statistical reviewing.

Let us assume that science is a search for truth and that the questioning of beliefs is necessary. The work is difficult and mistakes are common, because new facts can only be developed from current facts and current beliefs, many of which are false or at least too simplistic. Distinguishing between facts and beliefs is therefore crucial. However, many scientists are biased, because scientific research is generally an activity with few facts and many hypotheses, and the confirmation of some of these hypotheses may be especially rewarding for the scientist personally. Such conflicts of interest should be acknowledged when research findings are reported. The important question is whether or not the findings’ strengths and weaknesses have been objectively and reliably evaluated and described.

Furthermore, evidence-based research, finding empirical support for or against medical hypotheses, is a complicated process. It usually takes time, it is expensive, and it is logistically problematic. Whether research projects are observational or experimental, several different kinds of expertise are required and teamwork is necessary. If the primary ambition is to develop personal fame, this is not a very attractive research environment. Authority-based research may be a more attractive alternative. Authority-based researchers are easy to recognise, as they usually do not distinguish between beliefs and facts, and their findings are never uncertain. All that is needed is authority, the power to persuade the reader. One of the tools in the authority-based researcher’s toolbox is (alleged) statistical expertise. This is typically demonstrated by the presentation of numerous p-values and references to statistical significance.

However, personal computers and user-friendly statistics software (such as SPSS) are now generally available. An analysis based on statistical methodology no longer needs to be performed in collaboration with a statistician. P-values can be computed quickly, en masse, and without statistical insight. This is problematic because, as explained again by Richard Feynman, “The first principle is that you must not fool yourself, and you are the easiest person to fool.”

It is also well known that statistical misconceptions have far too great an influence on how the importance of research findings is judged. One of the most obvious and frequent confusions is to consider statistical significance a token of practical importance and clinical relevance. The mistake includes a misunderstanding of statistical non-significance as evidence of equivalence. This problem is so ubiquitous that it cannot be explained by anything other than a systematically inadequate education. Instead of focusing on how to produce empirical evidence, medical researchers are obsessed with calculating p-values. Regardless of the research question, study design, sample size, data collection, and methodological approach, the outcomes are treated as equally important. This is a practice that replaces scientific reasoning with meaningless computation. It promotes results that are based on the researcher’s beliefs instead of rational reasoning.
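The confusion between significance and relevance can be illustrated with a small simulation (the group sizes and the 0.1-unit difference are assumptions chosen for the illustration): given a large enough sample, even a clinically negligible difference yields a vanishingly small p-value.

```python
import math
import random
import statistics

random.seed(0)

# Two hypothetical groups whose true means differ by a clinically
# negligible 0.1 units (on a scale with SD 1.0)
n = 100_000
a = [random.gauss(0.0, 1.0) for _ in range(n)]
b = [random.gauss(0.1, 1.0) for _ in range(n)]

# Two-sample z-test on the difference in means
diff = statistics.mean(b) - statistics.mean(a)
se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
z = diff / se

# Two-sided p-value from the normal distribution
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"difference={diff:.3f}, p={p:.2e}")
```

The p-value is essentially zero, yet the estimated difference is still a trivial 0.1 units; the significance test answers a question about sampling variation, not about clinical relevance.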

Towards a big mess

These and similar problems are, however, not restricted to medical research. They appear everywhere important decisions are made. Treating patients is, for example, fundamentally different from performing medical research, not least with regard to the roles of evidence and uncertainty. A tentative diagnosis is not always based on unambiguous evidence, and the difference between a successful and a failed treatment may well depend on the physician’s subjective beliefs. This is at least the central theme in the popular TV series House MD.

The Prussian general Carl von Clausewitz describes in his book “On War” a similar problem: military leaders must make the right decisions “in the fog of war”, i.e. with incomplete information and under time pressure. Authorities developing a strategy in response to an imminent pandemic face the same problem. Developing evidence takes too long; decisions have to be made now.

Clausewitz considered the necessary skill of making the right decisions to be a sort of intuition enabling military commanders to outmanoeuvre their opponents. Good decisions are undoubtedly easier to arrive at when they can be based on a reasonable balance of facts and beliefs. Facts, however, take more time to develop than beliefs, and beliefs can change quickly. The technological development of internet media, in social, scientific, and professional contexts, tends to favour interesting beliefs over boring facts, and this leads to a growing imbalance between the two. It also makes it increasingly difficult to distinguish between facts and beliefs, and scientific reasoning is increasingly entangled with belief.

Statistical reviewers therefore play an increasingly important role in the publishing of scientific reports. The task is to ensure that the reader can distinguish between the author’s subjective opinion and objective empirical evidence. In order to achieve this, it is necessary to make certain that the limitations of the described investigation’s aim, study design, data collection, statistical analysis, and results interpretation are clearly presented to the reader.
