I was rereading the paper 'The New Statistics: Why and How', published in Psychological Science, the other day. It's a great paper, and I highly recommend reading it. If you are busy (and I guess you are), make sure to at least read the 25 guidelines for improving psychological research (in Table 1). Here are the guidelines:
- Promote research integrity: (a) a public research literature that is complete and trustworthy and (b) ethical practice, including full and accurate reporting of research.
- Understand, discuss, and help other researchers appreciate the challenges of (a) complete reporting, (b) avoiding selection and bias in data analysis, and (c) replicating studies.
- Make sure that any study worth doing properly is reported, with full details.
- Make clear the status of any result—whether it deserves the confidence that arises from a fully prespecified study or is to some extent speculative.
- Carry out replication studies that can improve precision and test robustness, and studies that provide converging perspectives and investigate alternative explanations.
- Build a cumulative quantitative discipline.
- Whenever possible, adopt estimation thinking and avoid dichotomous thinking.
- Remember that obtained results are one possibility from an infinite sequence.
- Do not trust any p value.
- Whenever possible, avoid using statistical significance or p values; simply omit any mention of null-hypothesis significance testing (NHST).
- Move beyond NHST and use the most appropriate methods, whether estimation or other approaches.
- Use knowledgeable judgment in context to interpret observed effect sizes (ESs).
- Interpret your single confidence interval (CI), but bear in mind the dance. Your 95% CI just might be one of the 5% that miss.
- Prefer 95% CIs to SE bars. Routinely report 95% CIs, and use error bars to depict them in figures.
- If your ES of interest is a difference, use the CI on that difference for interpretation. Only in the case of independence can the separate CIs inform interpretation.
- Consider interpreting ESs and CIs for preselected comparisons as an effective way to analyze results from randomized control trials and other multiway designs.
- When appropriate, use the CIs on correlations and proportions, and their differences, for interpretation.
- Use small- or large-scale meta-analysis whenever that helps build a cumulative discipline.
- Use a random-effects model for meta-analysis and, when possible, investigate potential moderators.
- Publish results so as to facilitate their inclusion in future meta-analyses.
- Make every effort to increase the informativeness of planned research.
- If using NHST, consider and perhaps calculate power to guide planning.
- Beware of any power statement that does not state an ES; do not use post hoc power.
- Use a precision-for-planning analysis whenever that may be helpful.
- Adopt an estimation perspective when considering issues of research integrity.
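The "dance of the CIs" mentioned above is easy to see in a quick simulation. The sketch below (my own illustration, not from the paper; all numbers are made up) repeatedly samples from a population with a known mean and checks how often the 95% CI for the sample mean captures the true value — roughly 95% of the time, meaning any single CI just might be one of the 5% that miss.

```python
import random
import statistics

# Illustrative simulation of the "dance of the confidence intervals":
# draw many samples from a known population and count how often the
# 95% CI around the sample mean captures the true population mean.
# Population mean/SD, sample size, and replication count are arbitrary.

random.seed(1)

TRUE_MEAN, TRUE_SD = 50.0, 10.0
N, REPLICATIONS = 30, 2000
T_CRIT = 2.045  # two-tailed t critical value for df = 29, alpha = .05

hits = 0
for _ in range(REPLICATIONS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = m - T_CRIT * se, m + T_CRIT * se
    hits += lo <= TRUE_MEAN <= hi

print(f"coverage: {hits / REPLICATIONS:.3f}")  # close to 0.95
```

Running this shows the coverage hovering near .95 — but in any one study you never know whether your interval is one of the lucky ones.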
I do not agree with all of the recommendations (e.g., number 10), but there are a lot of great points in the paper.
Lastly, the paper also formulates an eight-step strategy for how to conduct research with integrity:
1. Formulate research questions in estimation terms.
2. Identify the ESs that will best answer the research questions.
3. Declare full details of the intended procedure and data analysis.
4. After running the study, calculate point estimates and CIs for the chosen ESs.
5. Make one or more figures, including CIs.
6. Interpret the ESs and CIs.
7. Use meta-analytic thinking throughout.
8. Report.
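To make the estimation part of this strategy concrete, here is a minimal sketch (my own, not code from the paper) of what calculating a point estimate and 95% CI for a chosen effect size might look like — here, a raw mean difference between two simulated, purely illustrative groups:

```python
import random
import statistics

# Hedged sketch of the "calculate point estimates and CIs" step:
# estimate the mean difference between two illustrative groups and
# report a 95% CI instead of a p value. All data are simulated.

random.seed(2)
control = [random.gauss(100, 15) for _ in range(40)]
treatment = [random.gauss(108, 15) for _ in range(40)]

diff = statistics.mean(treatment) - statistics.mean(control)
# Standard error of the difference for two independent groups of n = 40
se = ((statistics.variance(control) + statistics.variance(treatment)) / 40) ** 0.5
t_crit = 1.991  # two-tailed t critical value for df = 78, alpha = .05
ci = (diff - t_crit * se, diff + t_crit * se)

print(f"difference: {diff:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The point is the reporting style: an effect size in original units with an interval conveying its precision, which is exactly what the strategy asks for and what later meta-analyses can build on.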