
Reliability and Validity of Indirect Assessment Outcomes: Experts versus Caregivers.

Joseph D. Dracobly, Claudia L. Dozier, Adam M. Briggs, Jessica F. Juanico
Published in: Learning and Motivation (2017)
Clinicians often conduct indirect assessments (IAs; e.g., Durand & Crimmins, 1988; Iwata, DeLeon, & Roscoe, 2013; Matson & Vollmer, 1995) such as questionnaires and interviews with caregivers to gain information about the variables influencing problem behavior. However, researchers have found poor reliability and validity of IAs with respect to determining functional variables. Numerous variables might influence the efficacy of IAs as an assessment tool, one of which is the skill set of the person completing the IA. For example, it may be possible to increase the validity and reliability of IAs by having individuals with certain skill sets, such as a background in behavior analysis and functional behavior assessment (FBA; "experts"), complete them. Thus, the purpose of this study was to compare the reliability (i.e., agreement with respect to function and specific IA questions) and validity (i.e., agreement between the outcome of IAs and a functional analysis [FA]) of IAs completed by caregivers and experts for each of eight children who emitted problem behavior. We found that experts were more likely than caregivers to agree on IA outcomes with respect to (a) overall interrater agreement, (b) item-by-item agreement, and (c) the highest-rated function(s) of problem behavior. Experts were also more likely to correctly identify the function(s), based on comparisons of the results of the IAs and FAs. In addition, caregivers were more likely to (a) disagree on hypothesized functions and (b) identify multiple incorrect functions. The use of experts for completing IAs could have a significant impact on their utility and provide a novel method for more rapidly completing the FBA process and developing a function-based treatment.
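
As a rough illustration only (the abstract does not report the exact calculation), item-by-item agreement between two raters' IA responses is commonly summarized as percent agreement. The minimal Python sketch below assumes that convention; the question count and ratings are hypothetical.

  # Minimal sketch, assuming item-by-item agreement is reported as percent
  # agreement across corresponding IA questions. Ratings below are hypothetical.
  def item_by_item_agreement(rater_a, rater_b):
      """Return the percentage of IA items on which two raters gave the same rating."""
      if len(rater_a) != len(rater_b):
          raise ValueError("Both raters must rate the same set of items.")
      matches = sum(a == b for a, b in zip(rater_a, rater_b))
      return 100.0 * matches / len(rater_a)

  # Hypothetical ratings for a five-item questionnaire (e.g., 0-6 Likert scores).
  expert_1 = [5, 2, 0, 6, 4]
  expert_2 = [5, 3, 0, 6, 4]
  print(item_by_item_agreement(expert_1, expert_2))  # 80.0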