
Modeling VI and VDRL feedback functions: Searching normative rules through computational simulation.

Paulo Sergio Panse Silveira, Jose Oliveira Siqueira, João Lucas Bernardy, Jéssica B. Santiago, Thiago Cersosimo Meneses, Bianca Sanches Portela, Marcelo Frota Lobato Benvenuti
Published in: Journal of the Experimental Analysis of Behavior (2023)
We present the mathematical description of feedback functions of variable interval and variable differential reinforcement of low rates as functions of schedule size only. These results were obtained using an R script named Beak, which was built to simulate rates of behavior interacting with simple schedules of reinforcement. Using Beak, we have simulated data that allow an assessment of different reinforcement feedback functions. This was done with unparalleled precision, as simulations provide huge samples of data and, more importantly, simulated behavior is not changed by the reinforcement it produces; therefore, we can vary response rates systematically. We compared different reinforcement feedback functions for random interval schedules using the following criteria: meaning, precision, parsimony, and generality. Our results indicate that the best feedback function for the random interval schedule was published by Baum (1981). We also propose that the model used by Killeen (1975) is a viable feedback function for the random differential reinforcement of low rates schedule. We argue that Beak paves the way for a greater understanding of schedules of reinforcement, addressing still-open questions about quantitative features of simple schedules. Beak could also guide future experiments that use schedules as theoretical and methodological tools.
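The kind of simulation described above can be illustrated with a minimal sketch (in Python rather than R; all function names, default horizons, and seeds here are hypothetical, not from Beak). Responses are generated as a Poisson process at a fixed rate, so the simulated behavior is unaffected by the reinforcement it produces, and the obtained reinforcement rate can be compared against a candidate feedback function. For the random-interval (RI) schedule, a standard textbook candidate is r = 1/(T + 1/B); for a fixed-criterion DRL with Poisson responding, r = B·e^(−Bd). These are illustrative forms only and are not necessarily the models from Baum (1981) or Killeen (1975) that the paper evaluates.

```python
import math
import random

def simulate_ri(mean_interval, response_rate, horizon=200_000.0, seed=1):
    """Random-interval schedule: a reinforcer is armed after an exponential
    wait (mean = mean_interval); the first response afterwards collects it
    and restarts the timer. Responding is a Poisson process, so behavior is
    not changed by reinforcement. Returns obtained reinforcers per unit time."""
    rng = random.Random(seed)
    t, reinforcers = 0.0, 0
    armed_at = rng.expovariate(1.0 / mean_interval)
    while t < horizon:
        t += rng.expovariate(response_rate)       # wait for the next response
        if t >= armed_at:                         # reinforcer was available
            reinforcers += 1
            armed_at = t + rng.expovariate(1.0 / mean_interval)
    return reinforcers / horizon

def simulate_drl(criterion, response_rate, horizon=200_000.0, seed=2):
    """Fixed-criterion DRL: a response is reinforced only when the
    inter-response time (IRT) exceeds the criterion."""
    rng = random.Random(seed)
    t, reinforcers = 0.0, 0
    while t < horizon:
        irt = rng.expovariate(response_rate)
        t += irt
        if irt > criterion:
            reinforcers += 1
    return reinforcers / horizon

# Candidate feedback functions (textbook forms, for illustration only):
ri_pred = 1.0 / (10.0 + 1.0 / 1.0)       # r = 1/(T + 1/B), T=10, B=1
drl_pred = 1.0 * math.exp(-1.0 * 2.0)    # r = B*exp(-B*d), B=1, d=2

print(simulate_ri(10.0, 1.0), ri_pred)   # simulated vs. predicted rate
print(simulate_drl(2.0, 1.0), drl_pred)
```

Because responding is memoryless here, the RI prediction is exact in expectation (each reinforcer cycle is one exponential setup plus one exponential wait for a response), so long simulated runs converge tightly on the candidate function, which is what makes this style of simulation useful for discriminating among feedback-function models.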