
Publication bias



Publication bias is a type of bias concerning which academic research is likely to be published, among all the research available to be published. Publication bias is of interest because literature reviews of claims about support for a hypothesis, or of values for a parameter, will themselves be biased if the underlying literature is contaminated by publication bias.[1] While some selective preferences are desirable (for instance, a bias against publishing flawed studies), a tendency of researchers and journal editors to prefer some outcomes over others, e.g. results showing a significant finding, leads to a problematic bias in the published literature.[2]

Studies with significant results often do not appear to be superior to studies with a null result with respect to quality of design.[3] However, statistically significant results have been shown to be three times more likely to be published than papers with null results.[4] Multiple factors contribute to publication bias.[1] For instance, once a result is well established, it may become newsworthy to publish papers affirming the null result.[5] The most common reason for non-publication has been found to be investigators declining to submit their results for publication. Factors cited as underlying this effect include investigators assuming they must have made a mistake in failing to reproduce a known finding, loss of interest in the topic, or anticipation that others will be uninterested in null results.[3]
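The distortion this selectivity produces can be illustrated with a small simulation (a hypothetical sketch, not data from the cited studies): when only statistically significant results reach the literature, the average published effect overstates the true effect.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2   # small true effect, in standard-deviation units
N = 30              # participants per study arm
STUDIES = 2000

def run_study():
    """Simulate one two-arm study; return the observed effect and its z-score."""
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    effect = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.pvariance(control) / N + statistics.pvariance(treated) / N) ** 0.5
    return effect, effect / se

results = [run_study() for _ in range(STUDIES)]
all_effects = [e for e, _ in results]
published = [e for e, z in results if z > 1.96]   # only "significant" studies published

print(f"mean effect, all studies:        {statistics.mean(all_effects):.2f}")
print(f"mean effect, 'published' subset: {statistics.mean(published):.2f}")
```

With small samples and a true effect of 0.2, only the studies that happened to overestimate the effect clear the significance threshold, so the "published" mean lands well above the true value.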

Attempts to identify unpublished studies often prove difficult or unsatisfactory.[1] One effort to reduce this problem is reflected in the move by some journals to require that submitted studies be pre-registered (registered before data collection and analysis begin). Several such registries exist, for instance at the Center for Open Science.

Strategies are being developed to detect and control for publication bias,[1] for instance down-weighting small and non-randomised studies because of their demonstrated high susceptibility to error and bias,[3] and p-curve analysis.[6]
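The idea behind p-curve analysis can be sketched with a toy simulation (an illustrative example, not the published method's full procedure): among statistically significant results, a genuine effect produces a right-skewed distribution of p-values, with an excess of very small ones, while a true null produces a flat distribution.

```python
import random
import statistics
from math import erf, sqrt

random.seed(2)

def one_sided_p(true_effect, n=30):
    """One-sided p-value of a z-test on n draws from N(true_effect, 1)."""
    sample_mean = statistics.mean(random.gauss(true_effect, 1.0) for _ in range(n))
    z = sample_mean * sqrt(n)
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))  # P(Z > z)

shares = {}
for label, effect in [("null effect", 0.0), ("real effect", 0.5)]:
    significant = [p for p in (one_sided_p(effect) for _ in range(5000)) if p < 0.05]
    shares[label] = sum(p < 0.025 for p in significant) / len(significant)
    print(f"{label}: share of significant p-values below .025 = {shares[label]:.2f}")
```

Under the null, roughly half of the significant p-values fall below .025 (a flat curve); under a real effect, nearly all of them do (a right-skewed curve). P-curve analysis exploits this contrast to ask whether a significant literature contains evidential value.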


  • Definition
  • Evidence
  • Effects on meta-analyses
  • Examples
  • Risks
  • Remedies
  • Study registration
  • External links
  • References


Definition

Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested and the significance and direction of the effects detected.[7] The term "publication bias" appears to have been first used in 1959 by the statistician Theodore Sterling to refer to fields in which successful research is more likely to be published. As a result, "the literature of such a field consists in substantial part of false conclusions resulting from [type-I errors]".[8]

Publication bias is sometimes called the "file drawer effect", or "file drawer problem". The origin of this term is that results not supporting the hypotheses of researchers often go no further than the researchers' file drawers, leading to a bias in published research.[9] The term "file drawer problem" was coined by the psychologist Robert Rosenthal in 1979.[10]
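Rosenthal's paper also proposed a "fail-safe N": an estimate of how many unpublished null results would have to be sitting in file drawers before a combined significant finding became non-significant. A minimal sketch, based on the Stouffer combined z-score that underlies the method:

```python
from math import sqrt

def fail_safe_n(z_scores, z_alpha=1.645):
    """Rosenthal's fail-safe N: how many unpublished studies averaging z = 0
    would drag the Stouffer combined z, sum(z) / sqrt(k), below z_alpha."""
    total_z = sum(z_scores)
    k = len(z_scores)
    return max(0.0, (total_z / z_alpha) ** 2 - k)

# e.g. ten published studies, each just clearing significance (z = 2.0):
print(round(fail_safe_n([2.0] * 10)))   # 138
```

Ten barely significant studies would need roughly 138 unreported null studies to overturn them; a small fail-safe N suggests a finding could easily be an artifact of the file drawer.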

Positive-results bias, a type of publication bias, occurs when authors are more likely to submit, or editors to accept, positive results than negative or inconclusive ones.[11] Outcome-reporting bias occurs when multiple outcomes are measured and analyzed, but the reporting of these outcomes depends on the strength and direction of the result for each outcome. A generic term coined to describe such post-hoc choices is HARKing ("Hypothesizing After the Results are Known").[12]


Evidence

Meta-analysis of stereotype threat on girls' math scores, showing the asymmetry typical of publication bias. From Flore, P. C., & Wicherts, J. M. (2015)[13]

The presence of publication bias in the literature has been most extensively studied in biomedical research. Investigators following clinical trials from the submission of their protocols to ethics committees or regulatory authorities until the publication of their results observed that those with positive results are more likely to be published.[14][15][16] In addition, studies often fail to report negative results when published, as demonstrated by research comparing study protocols with published articles.[17][18]

The presence of publication bias has also been investigated in meta-analyses. The largest study of publication bias in meta-analyses to date investigated its presence in systematic reviews of medical treatments from the Cochrane Library.[19] The study showed that positive, statistically significant findings are more likely to be included in meta-analyses of efficacy than other findings, and that results showing no evidence of adverse effects have a greater probability of entering meta-analyses of safety than statistically significant results showing that adverse effects exist. Evidence of publication bias has also been found in meta-analyses published in prominent medical journals.[20]

Effects on meta-analyses

Where publication bias is present, published studies are not representative of the valid studies undertaken. Unless controlled for, this bias distorts the results of meta-analyses and systematic reviews. This is a severe problem for cumulative science: evidence-based medicine, for example, relies increasingly on meta-analysis to assess evidence. The problem is compounded because research is often conducted by entities (people, research groups, government and corporate sponsors) with a financial or ideological interest in achieving favorable results.

Those undertaking meta-analyses and systematic reviews need to take account of publication bias by performing a thorough search for unpublished studies. In addition, a number of statistical methods for detecting and correcting publication bias have been developed, including selection models[19][21][22] and methods based on the funnel plot, such as Begg's test,[23] Egger's test,[24] and the trim-and-fill method.[25] However, since all of these methods have relatively low power and rest on strong, unverifiable assumptions, their use does not guarantee the validity of conclusions from a meta-analysis.[26][27]


Examples

Two meta-analyses of the efficacy of reboxetine as an antidepressant provide an example of attempts to detect publication bias in clinical trials. Based on positive trial data, reboxetine was originally approved as a treatment for depression in many European countries and in the UK in 2001 (though in practice it is rarely used for this indication). A 2010 meta-analysis concluded that reboxetine was ineffective and that the preponderance of positive-outcome trials reflected publication bias, mostly due to trials published by the drug's manufacturer, Pfizer. A subsequent meta-analysis published in 2011, based on the same original data, found flaws in the 2010 analyses and suggested that the data indicated reboxetine was effective in severe depression (see Reboxetine - Efficacy). Further examples of publication bias are given by Ben Goldacre[28] and Peter Wilmshurst.[29]

In the social sciences, a study of published papers on the relationship between Corporate Social and Financial Performance found that

"In economics, finance, and accounting journals, the average correlations were only about half the magnitude of the findings published in Social Issues Management, Business Ethics, or Business and Society journals".[30]

One example cited as an instance of publication bias is the refusal of the Journal of Personality and Social Psychology (which published Daryl Bem's original paper claiming evidence for precognition) to accept attempted replications of that work for publication.[31]

A study[32] comparing studies of gene-disease associations originating in China to those originating outside China found that "Chinese studies in general reported a stronger gene-disease association and more frequently a statistically significant result".[33] One interpretation of this result is selective publication (publication bias).


Risks

John Ioannidis argues that "claimed research findings may often be simply accurate measures of the prevailing bias".[34] Factors that he argues make it more likely for positive papers to enter the literature and for negative papers to be suppressed include:

  1. the studies conducted in a field are smaller;
  2. effect sizes are smaller;
  3. there is a greater number and lesser preselection of tested relationships;
  4. there is greater flexibility in designs, definitions, outcomes, and analytical modes;
  5. there is greater financial and other interest and prejudice;
  6. more teams are involved in a scientific field in chase of statistical significance.

Other factors include experimenter bias, and white hat bias.


Remedies

Ioannidis' remedies include:

  1. Better powered studies
    • Low-bias meta-analysis
    • Large studies where they can be expected to give very definitive results or test major, general concepts
  2. Enhanced research standards including
    • Pre-registration of protocols (as for randomized trials)
    • Registration or networking of data collections within fields (as in fields where researchers are expected to generate hypotheses after collecting data)
    • Adopting from randomized controlled trials the principles of developing and adhering to a protocol.
  3. Considering, before running an experiment, what they believe the chances are that they are testing a true or non-true relationship.
    • Properly assessing the false positive report probability based on the statistical power of the test[35]
    • Reconfirming (whenever ethically acceptable) established findings of "classic" studies, using large studies designed with minimal bias
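The third remedy can be made concrete with the false-positive report probability of Wacholder et al.:[35] the chance that a claimed positive finding is actually false depends on the prior probability that the tested relationship is true, the significance level, and the statistical power. A minimal sketch:

```python
def false_positive_report_probability(prior, alpha=0.05, power=0.8):
    """Probability that a 'significant' finding is a false positive, given the
    prior probability that the tested relationship is true (the framework of
    Wacholder et al. and Ioannidis)."""
    true_positives = power * prior          # true relationships detected
    false_positives = alpha * (1.0 - prior) # null relationships passing the test
    return false_positives / (false_positives + true_positives)

# A long-shot hypothesis (1-in-100 prior) tested at alpha = .05 with 80% power:
print(f"{false_positive_report_probability(0.01):.2f}")   # 0.86
```

Even with 80% power, a long-shot hypothesis yields false positives about 86% of the time; at 20% power the figure exceeds 95%, which is why better-powered studies and honest priors appear together in this list.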

Study registration

In September 2004, editors of several prominent medical journals (including the New England Journal of Medicine, The Lancet, Annals of Internal Medicine, and JAMA) announced that they would no longer publish the results of drug research sponsored by pharmaceutical companies unless that research had been registered in a public clinical trials registry from the outset.

External links

  • The Truth Wears Off: Is there something wrong with the scientific method? -- Jonah Lehrer
  • Register of clinical trials conducted in the US and around the world, maintained by the National Library of Medicine, Bethesda
  • Skeptic's Dictionary: positive outcome bias
  • Skeptic's Dictionary: file-drawer effect
  • Journal of Negative Results in Biomedicine
  • The All Results Journals
  • Journal of Articles in Support of the Null Hypothesis
  • Article on 'the decline effect' and the role of publication bias in that
  • Archive for replication attempts in experimental psychology

References

  1. ^ a b c d H. Rothstein, A. J. Sutton and M. Borenstein. (2005). Publication bias in meta-analysis: prevention, assessment and adjustments. Wiley. Chichester, England ; Hoboken, NJ.
  2. ^ Song, F.; Parekh, S.; Hooper, L.; Loke, Y. K.; Ryder, J.; Sutton, A. J.; Hing, C.; Kwok, C. S.; Pang, C.; Harvey, I. (2010). "Dissemination and publication of research findings: An updated review of related biases". Health technology assessment (Winchester, England) 14 (8): iii, ix–xi, 1–193.
  3. ^ a b c Easterbrook, P. J.; Berlin, J. A.; Gopalan, R.; Matthews, D. R. (1991). "Publication bias in clinical research".  
  4. ^ Dickersin, K.; Chan, S.; Chalmers, T. C.; et al. (1987). "Publication bias and clinical trials".  
  5. ^ Luijendijk, HJ; Koolman, X (May 2012). "The incentive to publish negative studies: how beta-blockers and depression got stuck in the publication cycle.". J Clin Epidemiol 65 (5): 488–92.  
  6. ^ "". 
  7. ^ K. Dickersin (March 1990). "The existence of publication bias and risk factors for its occurrence".  
  8. ^ Sterling, Theodore D. (March 1959). "Publication decisions and their possible effects on inferences drawn from tests of significance—or vice versa". Journal of the American Statistical Association 54 (285): 30–34.  
  9. ^ Jeffrey D. Scargle (2000). "Publication bias: the "file-drawer problem" in scientific inference" (PDF).  
  10. ^ Rosenthal R. File drawer problem and tolerance for null results. Psychol Bull 1979;86:638-41.
  11. ^ D.L. Sackett (1979). "Bias in analytic research".  
  12. ^ N.L. Kerr (1998). "HARKing: Hypothesizing After the Results are Known".  
  13. ^ P. C. Flore and J. M. Wicherts. (2015). Does stereotype threat influence performance of girls in stereotyped domains? A meta-analysis. J Sch Psychol, 53, 25-44. doi
  14. ^ Dickersin, K.; Min, Y.I. (1993). "NIH clinical trials and publication bias". Online J Curr Clin Trials.  
  15. ^ Decullier E, Lheritier V, Chapuis F. Fate of biomedical research protocols and publication bias in France: retrospective cohort study. BMJ 2005;331:19-22
  16. ^ Song F, Parekh-Bhurke S, Hooper L, Loke Y, Ryder J, Sutton A, et al. Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies. BMC Med Res Methodol 2009;9:79
  17. ^ Chan AW, Altman DG. Identifying outcome reporting bias in randomised trials on PubMed: review of publications and survey of authors. BMJ 2005;330:753.
  18. ^ Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at and published in journals. PLoS Med 2013;10:e1001566.
  19. ^ a b c Kicinski, M; Springate, D. A.; Kontopantelis, E (2015). "Publication bias in meta-analyses from the Cochrane Database of Systematic Reviews". Statistics in Medicine: n/a.  
  20. ^ Kicinski M. Publication bias in recent meta-analyses. PLoS ONE 2013;8:e81823
  21. ^ Silliman N. Hierarchical selection models with applications in meta-analysis. Journal of American Statistical Association 1997; 92(439):926-936. DOI: 10.1080/01621459.1997.10474047.
  22. ^ Hedges L, Vevea J. Estimating effect size under publication bias: small sample properties and robustness of a random effects selection model. Journal of Educational and Behavioral Statistics 1996; 21(4):299-332. DOI: 10.3102/10769986021004299
  23. ^ Begg C, Mazumdar M. Operating characteristics of a rank correlation test for publication bias. Biometrics 1994; 50(4):1088-1101.
  24. ^ Egger M, Smith G, Schneider M, Minder C. Bias in meta-analysis detected by a simple, graphical test. British Medical Journal 1997; 315:629-634. DOI: 10.1136/bmj.315.7109.629
  25. ^ Duval S, Tweedie R. Trim and fill: A simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics 2000; 56(2):455-463. DOI: 10.1111/j.0006-341X.2000.00455.x.
  26. ^ Sutton AJ, Song F, Gilbody SM, Abrams KR (2000) Modelling publication bias in meta-analysis: a review. Stat Methods Med Res 9:421-445
  27. ^ Kicinski, M (2014). "How does under-reporting of negative and inconclusive results affect the false-positive rate in meta-analysis? A simulation study". BMJ Open 4 (8): e004831.  
  28. ^ Ben Goldacre What doctors don't know about the drugs they prescribe
  29. ^ Wilmshurst, Peter. "Dishonesty in Medical Research" (PDF). 
  30. ^ Marc Orlitzky Institutional Logics in the Study of Organizations: The Social Construction of the Relationship between Corporate Social and Financial Performance
  31. ^ Ben Goldacre Backwards step on looking into the future The Guardian, Saturday 23 April 2011
  32. ^ Zhenglun Pan, Thomas A. Trikalinos, Fotini K. Kavvoura, Joseph Lau, John P.A. Ioannidis, "Local literature bias in genetic epidemiology: An empirical evaluation of the Chinese literature". PLoS Medicine, 2(12):e334, 2005 December.
  33. ^ Jin Ling Tang, "Selection Bias in Meta-Analyses of Gene-Disease Associations", PLoS Medicine, 2(12):e409, 2005 December.
  34. ^  
  35. ^ Wacholder, S.; Chanock, S; Garcia-Closas, M; El Ghormli, L; Rothman, N (March 2004). "Assessing the Probability That a Positive Report is False: An Approach for Molecular Epidemiology Studies".  
  36. ^ ( 
  37. ^ "Instructions for Trials authors — Study protocol". 2009-02-15. 
  38. ^ Dickersin, K.; Chalmers, I. (2011). "Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: from Francis Bacon to the WHO". J R Soc Med 104 (12): 532–538.  


A recent study showed that publication bias is smaller in meta-analyses of more recent studies,[19] supporting the effectiveness of the measures used to reduce publication bias in clinical trials.[38]
