
A closer look at cross-validation for assessing the accuracy of gene regulatory networks and models.

Shayan Tabe-Bordbar, Amin Emad, Sihai Dave Zhao, Saurabh Sinha
Published in: Scientific Reports (2018)
Cross-validation (CV) is a technique for assessing how well a model generalizes to unseen data. It relies on assumptions that may not be satisfied in genomics datasets. For example, random CV (RCV) assumes that a randomly selected set of samples, the test set, adequately represents unseen data. This assumption does not hold when samples are obtained from different experimental conditions and the goal is to learn regulatory relationships among genes that generalize beyond the observed conditions. In this study, we investigated how the choice of CV procedure affects the assessment of supervised learning methods used to learn gene regulatory networks (and in other applications). We compared the performance of a regression-based method for gene expression prediction as estimated by RCV with that estimated by a clustering-based CV (CCV) procedure. Our analysis shows that RCV can produce over-optimistic estimates of a model's generalizability relative to CCV. Next, we defined the 'distinctness' of the test set from the training set and showed that this measure is predictive of the regression method's performance. Finally, we introduced a simulated annealing method that constructs partitions of gradually increasing distinctness and showed that it enables a more informative evaluation of different gene expression prediction methods.
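The contrast between RCV and CCV can be made concrete with a short sketch. The snippet below is illustrative only and not the authors' pipeline: it assumes scikit-learn is available, uses synthetic data, a ridge regression as a stand-in for the gene expression prediction model, and k-means clusters as the held-out groups for the clustering-based splits.

```python
# Illustrative sketch (not the paper's exact method): random CV vs. a
# clustering-based CV in which whole sample clusters are held out together.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                                # e.g. regulator expression features (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)    # target gene expression (synthetic)

model = Ridge(alpha=1.0)                                      # stand-in for the regression-based predictor

# Random CV (RCV): test folds are drawn uniformly at random across all samples.
rcv_scores = cross_val_score(
    model, X, y, scoring="r2",
    cv=KFold(n_splits=5, shuffle=True, random_state=0),
)

# Clustering-based CV (CCV): cluster the samples first, then hold out entire
# clusters, so each test set is more 'distinct' from its training set.
clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)
ccv_scores = cross_val_score(
    model, X, y, scoring="r2",
    cv=GroupKFold(n_splits=5), groups=clusters,
)

print("RCV mean R^2:", rcv_scores.mean())
print("CCV mean R^2:", ccv_scores.mean())
```

Because CCV withholds whole clusters, its test samples tend to be more distinct from the training samples, which is why its performance estimates are typically lower, and less optimistic, than those from RCV.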
Keyphrases
  • gene expression
  • dna methylation
  • electronic health record
  • single cell
  • minimally invasive
  • rna seq
  • big data
  • genome wide
  • bioinformatics analysis
  • deep learning
  • atomic force microscopy
  • single molecule