A systematic review and meta-analysis of test-retest reliability and stability of delay and probability discounting.

Brett W. Gelino, Rebekah D. Schlitzer, Derek D. Reed, Justin C. Strickland
Published in: Journal of the Experimental Analysis of Behavior (2024)
In this meta-analysis, we describe a benchmark value of delay and probability discounting reliability and stability that might be used to (a) evaluate the meaningfulness of clinically achieved changes in discounting and (b) support the role of discounting as a valid and enduring measure of intertemporal choice. We examined test-retest reliability, stability effect sizes (d_z; Cohen, 1992), and relevant moderators across 30 publications comprising 39 independent samples and 262 measures of discounting, identified via a systematic review of the PsycINFO, PubMed, and Google Scholar databases. We calculated omnibus effect-size estimates and evaluated the role of proposed moderators using a robust variance estimation meta-regression method. The meta-regression output reflected modest test-retest reliability, r = .670, p < .001, 95% CI [.618, .716]. Discounting was most reliable when measured under temporal constraints, in adult respondents, with money as the medium, and when reassessed within 1 month. Testing also suggested acceptable stability, with nonsignificant and small changes in effect magnitude over time, d_z = 0.048, p = .31, 95% CI [-0.051, 0.146]. Clinicians and researchers seeking to measure discounting should consider the contexts in which reliability is maximized for specific use cases.
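
As a quick illustration of the two statistics pooled in this review, the sketch below (Python, using made-up discounting values rather than any data from the paper) computes test-retest reliability as a Pearson r between two sessions and stability as Cohen's d_z, that is, the mean paired difference divided by the standard deviation of the differences.

    # Illustration only: hypothetical ln(k) discounting rates from two
    # sessions. The meta-analysis pooled published statistics, not raw scores.
    import numpy as np
    from scipy import stats

    test   = np.array([-3.9, -3.4, -4.6, -2.9, -4.0, -3.2])
    retest = np.array([-3.7, -3.5, -4.4, -3.1, -3.9, -3.3])

    # Test-retest reliability: Pearson correlation across the two sessions
    r, p = stats.pearsonr(test, retest)

    # Stability: Cohen's d_z = mean paired difference / SD of the differences
    diffs = retest - test
    d_z = diffs.mean() / diffs.std(ddof=1)

    print(f"r = {r:.3f} (p = {p:.3f}), d_z = {d_z:.3f}")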
Keyphrases
  • systematic review
  • mental health
  • meta-analysis