Simulation Tests of Methods in Evolution, Ecology, and Systematics: Pitfalls, Progress, and Principles.

Katie E. Lotterhos, Matthew C. Fitzpatrick, Heath Blackmon
Published in: Annual Review of Ecology, Evolution, and Systematics (2022)
Complex statistical methods are continuously developed across the fields of ecology, evolution, and systematics (EES). These fields, however, lack standardized principles for evaluating methods, which has led to high variability in the rigor with which methods are tested, a lack of clarity regarding their limitations, and the potential for misapplication. In this review, we illustrate the common pitfalls of method evaluations in EES, the advantages of testing methods with simulated data, and best practices for method evaluations. We highlight the difference between method evaluation and validation and review how simulations, when appropriately designed, can refine the domain in which a method can be reliably applied. We also discuss the strengths and limitations of different evaluation metrics. The potential for misapplication of methods would be greatly reduced if funding agencies, reviewers, and journals required principled method evaluation.
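To make the idea of testing a method with simulated data concrete, here is a minimal sketch (not from the paper; all names and parameter values are illustrative) of one common evaluation: simulating data under a known null hypothesis and checking whether a test's false positive rate matches its nominal significance level.

```python
import random
import statistics

def estimate_false_positive_rate(n_reps=2000, n=30, seed=1):
    """Simulate data where the null is TRUE and count how often a
    two-sample t-style test rejects at the nominal 5% level."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_reps):
        # Both groups drawn from the SAME distribution: any rejection
        # is, by construction, a false positive.
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        mean_a, mean_b = statistics.mean(a), statistics.mean(b)
        var_a, var_b = statistics.variance(a), statistics.variance(b)
        # Welch-style t statistic with a rough normal-approximation
        # cutoff (1.96 for a two-sided 5% test)
        t = (mean_a - mean_b) / ((var_a / n + var_b / n) ** 0.5)
        if abs(t) > 1.96:
            false_positives += 1
    return false_positives / n_reps

rate = estimate_false_positive_rate()
```

If the estimated rate is close to 0.05, the method is well calibrated under these simulation conditions; a large discrepancy would signal a limitation of the method's domain of reliable application, which is exactly the kind of boundary the review argues simulations can delineate.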