A quasi-experimental study examining the efficacy of multimodal bot screening tools and recommendations to preserve data integrity in online psychological research.
Melissa Simone, Cory J. Cascalheira, Benjamin G. Pierce
Published in: American Psychologist (2023)
Bots are automated software programs that pose an ongoing, and increasingly sophisticated, threat to psychological research by infiltrating online research studies. Despite this growing concern, research in this area has been limited to bot detection in existing data sets following an unexpected encounter with bots. The present three-condition, quasi-experimental study aimed to address this gap in the literature by examining the efficacy of three types of bot screening tools across three incentive conditions ($0, $1, and $5). Data were collected from 444 respondents via Twitter advertisements between July and September 2021. The efficacy of task-based (i.e., anagrams, visual search), question-based (i.e., attention checks, reCAPTCHA), and data-based (i.e., consistency, metadata) tools was examined with Bonferroni-adjusted univariate and multivariate logistic regression analyses. In general, the results suggest that bot screening tools function similarly across incentive conditions. Moreover, the analyses revealed heterogeneity in the efficacy of bot screening tool subtypes; notably, some of the least effective tools (e.g., reCAPTCHA) are among the most commonly used in the existing literature. In sum, the findings identified both highly effective and highly ineffective bot screening tools. Study design and data integrity recommendations for researchers are provided.
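The abstract does not include analysis code, but the Bonferroni-adjusted univariate logistic regressions it describes can be illustrated with a minimal sketch. Everything below is a hypothetical reconstruction, not the authors' materials: the file name, the incentive-condition column, and the per-tool flag columns (one binary indicator per screening tool, 1 = respondent flagged as a likely bot) are assumed placeholders.

```python
# Hypothetical sketch: Bonferroni-adjusted univariate logistic regressions
# testing whether incentive condition ($0 / $1 / $5) predicts being flagged
# by each bot screening tool. Column and file names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("responses.csv")  # assumed data file, one row per respondent

tools = [
    "anagram_fail",          # task-based
    "visual_search_fail",    # task-based
    "attention_check_fail",  # question-based
    "recaptcha_fail",        # question-based
    "consistency_flag",      # data-based
    "metadata_flag",         # data-based
]
# Bonferroni adjustment: divide the familywise alpha by the number of tests.
alpha = 0.05 / len(tools)

for tool in tools:
    # Univariate logistic regression of the tool's bot flag on condition.
    model = smf.logit(f"{tool} ~ C(incentive)", data=df).fit(disp=False)
    condition_p = model.pvalues.drop("Intercept")
    significant = (condition_p < alpha).any()
    print(f"{tool}: condition effect significant at adjusted alpha = {significant}")
```

Under these assumptions, a non-significant condition effect for a tool is consistent with the abstract's conclusion that the screening tools functioned similarly across incentive conditions.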