
Selection bias, vote counting, money-priming effects : a comment on Rohrer, Pashler, and Harris (2015) and Vohs (2015) / Miguel A. Vadillo, Tom E. Hardwicke, David R. Shanks

By: Vadillo, Miguel A.
Contributor(s): Hardwicke, Tom E. | Shanks, David R.
Series: Journal of Experimental Psychology: General, 145(5), pages 655-663
Publication details: May 2016
Content type:
  • txt
Media type:
  • unmediated
Carrier type:
  • volumes
Subject(s): Psychology
Summary: When a series of studies fails to replicate a well-documented effect, researchers might be tempted to use a "vote counting" approach to decide whether the effect is reliable; that is, simply comparing the number of successful and unsuccessful replications. Vohs's (2015) response to the absence of money-priming effects reported by Rohrer, Pashler, and Harris (2015) provides an example of this approach. Unfortunately, vote counting is a poor strategy to assess the reliability of psychological findings because it neglects the impact of selection bias and questionable research practices. In the present comment, we show that a range of meta-analytic tools indicate irregularities in the money-priming literature discussed by Rohrer et al. and Vohs, which all point to the conclusion that these effects are distorted by selection bias, reporting biases, or p-hacking. This could help to explain why money-priming effects have proven unreliable in a number of direct replication attempts in which biases have been minimized through preregistration or transparent reporting. Our major conclusion is that the simple proportion of significant findings is a poor guide to the reliability of research and that preregistered replications are an essential means to assess the reliability of money-priming effects.
Item type: Articles


