A comparison of conventional and resampled personal reliability in detecting careless responding.

Philippe Goldammer, Peter Lucas Stöckli, Hubert Annen, Annika Schmitz-Wilhelmy
Published in: Behavior Research Methods (2024)
Detecting careless responding in survey data is important to ensure the credibility of study findings. Of several available detection methods, personal reliability (PR) is one of the best-performing indices. Curran (2016, Journal of Experimental Social Psychology, 66, 4-19) proposed a resampled version of personal reliability (RPR). Compared to the conventional PR, or even-odd consistency, in which just one set of scale halves is used, RPR is based on repeated calculation of PR across several randomly rearranged sets of scale halves. RPR should therefore be less affected than PR by random errors that may occur when a specific set of scale-half pairings is used for the PR calculation. In theory, RPR should outperform PR, but it remains unclear whether it in fact does, and under what conditions the potential gain in detection accuracy is most pronounced. We conducted two studies: a simulation study examined the performance of the conventional PR and RPR in detecting simulated careless responding, and a real-data study analyzed their performance in detecting human-generated careless responding. In both studies, RPR turned out to be a significantly better careless response indicator than PR. The results also revealed that using 25 resamples for the RPR computation is sufficient to obtain the expected gain in detection accuracy over the conventional PR. We therefore recommend using RPR instead of the conventional PR when screening questionnaire data for careless responding.
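To make the abstract's distinction between PR and RPR concrete, the sketch below illustrates one common way the two indices can be computed. It assumes PR is operationalized as the within-person correlation between half-scale mean scores across a set of multi-item scales (without a Spearman-Brown correction), and that RPR averages this index over repeated random splits of each scale into halves, with 25 resamples as mentioned in the abstract. The function names and data layout are illustrative, not taken from the paper or from any published package.

```python
import numpy as np

def personal_reliability(responses, scale_items, rng=None):
    """Within-person correlation between two half-scale scores.

    responses   : (n_respondents, n_items) array of item responses
    scale_items : list of lists; each inner list holds the column indices
                  of the items belonging to one multi-item scale
    rng         : optional numpy Generator; if None, a fixed even-odd split
                  is used (conventional PR), otherwise items are shuffled
                  before splitting (one resample of RPR)
    """
    half_a, half_b = [], []
    for items in scale_items:
        items = list(items)
        if rng is not None:
            items = list(rng.permutation(items))
        # split each scale's items into two halves and average them
        half_a.append(responses[:, items[0::2]].mean(axis=1))
        half_b.append(responses[:, items[1::2]].mean(axis=1))
    half_a = np.column_stack(half_a)   # (n_respondents, n_scales)
    half_b = np.column_stack(half_b)
    # per-respondent correlation between the two sets of half-scale scores
    a_c = half_a - half_a.mean(axis=1, keepdims=True)
    b_c = half_b - half_b.mean(axis=1, keepdims=True)
    num = (a_c * b_c).sum(axis=1)
    den = np.sqrt((a_c ** 2).sum(axis=1) * (b_c ** 2).sum(axis=1))
    return np.where(den > 0, num / den, np.nan)

def resampled_personal_reliability(responses, scale_items, n_resamples=25, seed=0):
    """Average PR over repeated random splits of each scale into halves (RPR)."""
    rng = np.random.default_rng(seed)
    prs = [personal_reliability(responses, scale_items, rng)
           for _ in range(n_resamples)]
    return np.nanmean(np.vstack(prs), axis=0)
```

In a screening application, respondents with low RPR values would be flagged as potentially careless; the appropriate cutoff and any correction applied to the half-scale correlations are design choices not specified here.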