Experimental vs observational

The statistical analysis of a dataset always depends on how the data were collected. An experimental study, e.g. a randomised clinical trial, can be designed so that validity problems (selection bias, misclassification bias, and confounding) are prevented, for example with concealed treatment allocation, randomisation of patients to treatment, and masking of treatment. The statistical work can then focus entirely on precision issues such as estimating sample size and statistical power, testing null hypotheses and estimating effect sizes. Such design safeguards are, however, impossible in an observational study. Validity issues need instead to be addressed in the statistical analysis, usually by bias adjustment, but this is successful only when the analyst knows what to adjust for and when the necessary data are available in the database. This is not generally the case.
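
As a hedged illustration of such precision work (not taken from the post itself), the sketch below uses Python's statsmodels to compute the sample size per arm needed to detect an assumed standardised effect, and the power achievable with a fixed recruitment target. The effect size and the alpha and power targets are illustrative assumptions only.

```python
# A minimal sketch of sample-size and power calculations for a two-arm trial.
# The effect size (Cohen's d = 0.5), alpha = 0.05 and target power = 0.80 are
# illustrative assumptions, not values from the post.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per arm to detect a standardised effect of 0.5
# with 80% power at a two-sided 5% significance level.
n_per_arm = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"Required sample size per arm: {n_per_arm:.0f}")

# Achieved power if only 50 patients per arm can be recruited.
power = analysis.solve_power(effect_size=0.5, nobs1=50, alpha=0.05)
print(f"Power with 50 patients per arm: {power:.2f}")
```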

One consequence of these differences in study design is that the results from experimental studies are considered more accurate and reliable than results from observational studies. Another consequence is that, even with the same statistical methods, different analysis strategies may be necessary. For example, regression analysis is often used in an experimental study to account for randomisation stratification factors and for the baseline value when estimating change from baseline, whereas the purpose of using regression analysis in an observational study is usually to adjust estimated effect sizes for confounding by competing risk factors.
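
To make the contrast concrete, here is a minimal Python sketch (an assumed illustration, not the post's own analysis) of both strategies using statsmodels formulas on simulated data: a trial-style model adjusting for baseline and stratification factors, and an observational model adjusting for measured confounders. All variable names (baseline, treatment, stratum, exposed, age, smoker) are hypothetical.

```python
# A hedged sketch of the two regression strategies described above,
# fitted with statsmodels OLS on simulated (hypothetical) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200

# --- Trial-style analysis: adjust for randomisation strata and baseline ---
trial = pd.DataFrame({
    "treatment": rng.integers(0, 2, n),   # randomised allocation
    "stratum": rng.integers(0, 3, n),     # stratification factor
    "baseline": rng.normal(50, 10, n),
})
trial["change"] = -2 * trial["treatment"] - 0.3 * trial["baseline"] + rng.normal(0, 5, n)
trial_fit = smf.ols("change ~ C(treatment) + C(stratum) + baseline", data=trial).fit()
print(trial_fit.params["C(treatment)[T.1]"])   # treatment effect on change

# --- Observational-style analysis: adjust for measured confounders ---
obs = pd.DataFrame({
    "age": rng.normal(60, 8, n),
    "smoker": rng.integers(0, 2, n),
})
# Exposure depends on the confounders, so a crude comparison would be biased.
obs["exposed"] = (0.05 * obs["age"] + obs["smoker"] + rng.normal(0, 1, n) > 4).astype(int)
obs["outcome"] = 1.5 * obs["exposed"] + 0.1 * obs["age"] + 2 * obs["smoker"] + rng.normal(0, 1, n)
obs_fit = smf.ols("outcome ~ exposed + age + smoker", data=obs).fit()
print(obs_fit.params["exposed"])               # confounder-adjusted effect
```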

Common mistakes in manuscripts include the use of trial terminology (e.g. primary and secondary outcomes, intention-to-treat) in observational studies, where these terms have no meaningful definition, and the use of observational analysis strategies (confounding adjustments) in randomised trials, where confounding has already been prevented by randomisation.
