From discovery to justification: Outline of an ideal research program in empirical psychology

Erich H. Witte, Frank Zenker

Research output: Contribution to journal › Article › peer-review



The gold standard for an empirical science is the replicability of its research results. But the estimated average replicability rate of key effects reported in top-tier psychology journals falls between 36% and 39% (objective vs. subjective rate; Open Science Collaboration, 2015). The standard mode of applying null-hypothesis significance testing (NHST) thus fails to adequately separate stable from random effects, and so NHST does not fully convince as a statistical inference strategy. We argue that the replicability crisis is "home-made": more sophisticated strategies can deliver results whose successful replication is sufficiently probable. We can therefore overcome the replicability crisis by integrating empirical results into genuine research programs. Instead of continuing to narrowly evaluate only the stability of data against random fluctuations (context of discovery), such programs evaluate rival hypotheses against stable data (context of justification).
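How a replication rate in this range can arise from standard NHST practice can be illustrated with a small simulation (our own sketch, not from the paper): when a share of tested hypotheses is null and true effects are studied with modest power, only a minority of initially significant results come out significant again in an identically powered replication. The effect size, sample size, and base rate of true effects below are illustrative assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_replication(n_studies=100_000, n=30, true_effect=0.3,
                         share_true=0.5, alpha_crit=1.96):
    """Estimate the replication rate of significant results under NHST.

    Each study is a one-sample z-test of a mean against 0 with n
    unit-variance observations; a fraction `share_true` of the tested
    hypotheses carries a real effect of size `true_effect`.
    """
    effects = np.where(rng.random(n_studies) < share_true, true_effect, 0.0)

    def run_once(eff):
        # Sample mean of n observations -> z statistic; two-sided test.
        means = rng.normal(eff, 1 / np.sqrt(n))
        return np.abs(means * np.sqrt(n)) > alpha_crit

    original = run_once(effects)     # the published "key effect"
    replication = run_once(effects)  # an identically powered replication

    # Fraction of originally significant results that replicate.
    return replication[original].mean()

rate = simulate_replication()
print(f"estimated replication rate: {rate:.2f}")
```

With these assumed parameters the power for a true effect is roughly 0.38, and the simulated replication rate lands near the mid-30% range the abstract cites, which is the paper's point: under the standard mode of NHST, a low replication rate is the expected outcome, not an anomaly.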

Original language: English
Article number: 1847
Journal: Frontiers in Psychology
Issue number: OCT
Publication status: Published - 2017 Oct 27

Subject classification (UKÄ)

  • Psychology (excluding Applied Psychology)


Keywords
  • Confirmation
  • Knowledge accumulation
  • Meta-analysis
  • Psi-hypothesis
  • Replicability crisis
  • Research programs
  • Significance-test
  • Test-power


