From discovery to justification: Outline of an ideal research program in empirical psychology

Research output: Contribution to journal › Article


The gold standard for an empirical science is the replicability of its research results. But the estimated average replicability rate of key effects reported in top-tier psychology journals falls between 36 and 39% (objective vs. subjective rate; Open Science Collaboration, 2015). The standard mode of applying null-hypothesis significance testing (NHST) thus fails to adequately separate stable from random effects, and NHST does not fully convince as a statistical inference strategy. We argue that the replicability crisis is "home-made": more sophisticated strategies can deliver results whose successful replication is sufficiently probable. We can therefore overcome the replicability crisis by integrating empirical results into genuine research programs. Instead of continuing to narrowly evaluate only the stability of data against random fluctuations (discovery context), such programs evaluate rival hypotheses against stable data (justification context).
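A short simulation can make the abstract's claim concrete: when underpowered studies are filtered through a significance threshold before publication, the expected replication rate of the "discovered" effects is low even without any questionable research practices. The sketch below is illustrative only and is not from the paper; the base rate of true hypotheses (30%), the effect size (d = 0.3), and the sample size (n = 30) are assumptions chosen to yield power in the range typical of the replicated studies.

```python
import random
import math

random.seed(1)

def study(effect, n):
    """One-sample z-test of the mean against 0 with known sd = 1.
    Returns True if the result is 'significant' in the positive
    direction (z > 1.96, i.e. the two-sided alpha = .05 criterion)."""
    xs = [random.gauss(effect, 1.0) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)
    return z > 1.96

# Assumed scenario: 30% of tested hypotheses are true, with a modest
# standardized effect of d = 0.3; originals and replications both use n = 30.
TRUE_RATE, EFFECT, N = 0.3, 0.3, 30

# "Publish" only the significant original studies.
published_effects = []
for _ in range(20000):
    is_true = random.random() < TRUE_RATE
    eff = EFFECT if is_true else 0.0
    if study(eff, N):
        published_effects.append(eff)

# Attempt one direct replication per published effect.
replicated = sum(study(eff, N) for eff in published_effects)
rate = replicated / len(published_effects)
print(f"replication rate of published effects: {rate:.2f}")
```

Under these assumptions the simulated replication rate lands close to the 36–39% estimate cited above, because low power both limits which true effects reach significance and limits how often they replicate.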


External organisations
  • University of Hamburg
  • University of Konstanz
  • Sun Yat-sen University
  • Institute of Philosophy, SAS
Research areas and keywords

Subject classification (UKÄ)

  • Psychology (excluding Applied Psychology)


Keywords

  • Confirmation, Knowledge accumulation, Meta-analysis, Psi-hypothesis, Replicability crisis, Research programs, Significance-test, Test-power
Original language: English
Article number: 1847
Journal: Frontiers in Psychology
Issue number: OCT
Publication status: Published - 2017 Oct 27
Publication category: Research