Meta-analysis of (single-cell method) benchmarks reveals the need for extensibility and interoperability.

Anthony Sonrel, Almut Luetge, Charlotte Soneson, Izaskun Mallona, Pierre-Luc Germain, Sergey Knyazev, Jeroen Gilis, Reto Gerber, Ruth Seurinck, Dominique Paul, Emanuel Sonder, Helena L Crowell, Imran Fanaswala, Ahmad Al-Ajami, Elyas Heidari, Stephan Schmeing, Stefan Milosavljevic, Yvan Saeys, Serghei Mangul, Mark D Robinson
Published in: Genome Biology (2023)
Computational methods represent the lifeblood of modern molecular biology. Benchmarking is important for all methods; for computational methods in particular, it is critical to dissect important steps of analysis pipelines, formally assess performance across common situations as well as edge cases, and ultimately guide users on which tools to use. Benchmarking can also be important for community building and advancing methods in a principled way. We conducted a meta-analysis of recent single-cell benchmarks to summarize their scope, extensibility, and neutrality, as well as technical features and whether best practices in open data and reproducible research were followed. The results highlight that while benchmarks often make code available and are in principle reproducible, they remain difficult to extend, for example, as new methods and new ways to assess methods emerge. In addition, embracing containerization and workflow systems would enhance reusability of intermediate benchmarking results, thus also driving wider adoption.
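The abstract's point about workflow systems enhancing "reusability of intermediate benchmarking results" can be illustrated with a minimal sketch. The snippet below is a hypothetical, simplified stand-in for what engines like Snakemake or Nextflow do at scale: each pipeline step is keyed by a hash of its name and parameters, so rerunning an extended benchmark (e.g., with one new method added) reuses previously computed intermediates instead of recomputing everything. All names here (`cached_step`, the cache layout) are illustrative, not from the paper.

```python
import hashlib
import json
import pathlib
import tempfile

# Hypothetical on-disk cache of intermediate benchmark results.
CACHE_DIR = pathlib.Path(tempfile.mkdtemp(prefix="bench-cache-"))

def cached_step(name, params, compute):
    """Run one pipeline step, reusing its stored output when the same
    step name and parameters were already computed.

    Returns (result, cache_hit). This mimics, in miniature, how
    workflow systems avoid recomputing unchanged intermediates.
    """
    # Deterministic key: same step + same parameters -> same cache file.
    key = hashlib.sha256(
        json.dumps([name, params], sort_keys=True).encode()
    ).hexdigest()
    out = CACHE_DIR / f"{name}-{key[:12]}.json"

    if out.exists():
        # Cache hit: reuse the intermediate result as-is.
        return json.loads(out.read_text()), True

    # Cache miss: compute, persist, and return the fresh result.
    result = compute(params)
    out.write_text(json.dumps(result))
    return result, False
```

For example, scoring method "A" twice triggers one computation and one reuse, while adding method "B" later computes only the new entry; real workflow engines add dependency tracking and container isolation on top of this idea.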
Keyphrases
  • single cell
  • systematic review
  • data analysis