A quantitative approach to the choice of number of samples for percentile estimation in bootstrap and visual predictive check analyses.
E. Niclas Jonsson, Joakim Nyberg
Published in: CPT: Pharmacometrics & Systems Pharmacology (2022)
Understanding the uncertainty in parameter estimates or in derived secondary variables is important in all data analysis activities. In pharmacometrics, this is often done based on the standard errors from the variance-covariance matrix of the estimates. Confidence intervals derived in this way are by definition symmetrical, which may lead to implausible outcomes, and they require translation to generate uncertainties in derived variables. An often-used alternative that circumvents these issues is numerical percentile estimation by, for example, nonparametric bootstraps. Visual predictive checks (VPCs), a commonly used model diagnostic tool in pharmacometric analyses, also rely on the estimation of percentiles through numerical approaches. Given the cost of these methods in run times and processing times, it is important to consider the trade-off between the number of bootstrap samples or simulated data sets and the increase in precision gained from using a larger number of them. The objective of this tutorial is to provide a quantitative framework for assessing the precision of estimated percentile limits in bootstrap and VPC analyses, to facilitate an informed choice of confidence interval width, number of bootstrap samples/simulated data sets, and required level of precision.
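As a rough illustration of the trade-off the tutorial addresses, the sketch below runs a small Monte Carlo experiment showing how the noise in an estimated 5th percentile shrinks as the number of bootstrap samples or simulated data sets grows. It is a minimal sketch only: the standard-normal stand-in for the bootstrap distribution, the function name, and the sample sizes are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def percentile_precision(n_samples, q=5.0, n_repeats=500):
    """Monte Carlo estimate of the variability (SD) of the q-th
    percentile when it is computed from n_samples replicates.
    A standard normal is used as a stand-in for the distribution
    of bootstrap estimates or VPC simulation summaries."""
    estimates = [
        np.percentile(rng.standard_normal(n_samples), q)
        for _ in range(n_repeats)
    ]
    return np.std(estimates, ddof=1)

# Precision improves only slowly (roughly with the square root of the
# number of samples), which is why the choice of sample number matters.
for n in (100, 500, 1000, 5000):
    sd = percentile_precision(n)
    print(f"n = {n:5d}: SD of 5th-percentile estimate = {sd:.3f}")
```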