Maximizing the information learned from finite data selects a simple model.

Henry H. Mattingly, Mark K. Transtrum, Michael C. Abbott, Benjamin B. Machta
Published in: Proceedings of the National Academy of Sciences of the United States of America (2018)
We use the language of uninformative Bayesian prior choice to study the selection of appropriately simple effective models. We advocate for the prior which maximizes the mutual information between parameters and predictions, learning as much as possible from limited data. When many parameters are poorly constrained by the available data, we find that this prior puts weight only on boundaries of the parameter space. Thus, it selects a lower-dimensional effective theory in a principled way, ignoring irrelevant parameter directions. In the limit where there are sufficient data to tightly constrain any number of parameters, this reduces to the Jeffreys prior. However, we argue that this limit is pathological when applied to the hyperribbon parameter manifolds generic in science, because it leads to dramatic dependence on effects invisible to experiment.
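The abstract does not spell out the numerical procedure, but one standard way to compute the prior that maximizes the mutual information between parameters and predictions on a discretized parameter space is the Blahut-Arimoto algorithm from channel-capacity theory. The sketch below applies it to a toy one-parameter model (a single Gaussian measurement of a parameter confined to [0, 1]); the model, grid sizes, and noise levels are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Sketch: Blahut-Arimoto iteration for the prior p(theta) that maximizes
# the mutual information I(theta; y) in a toy model y ~ N(theta, sigma^2),
# with theta restricted to [0, 1].  Grid sizes and sigma are illustrative.

def optimal_prior(sigma, n_theta=101, n_y=401, n_iter=2000, tol=1e-10):
    theta = np.linspace(0.0, 1.0, n_theta)            # discretized parameter space
    y = np.linspace(-4 * sigma, 1 + 4 * sigma, n_y)   # discretized data space

    # Likelihood matrix p(y | theta); rows = theta, columns = y, normalized over y.
    lik = np.exp(-0.5 * ((y[None, :] - theta[:, None]) / sigma) ** 2)
    lik /= lik.sum(axis=1, keepdims=True)

    p = np.full(n_theta, 1.0 / n_theta)               # start from a uniform prior
    for _ in range(n_iter):
        q = p @ lik                                   # predictive marginal p(y)
        # KL divergence D(p(y|theta) || p(y)) for each theta; the small constant
        # keeps the log finite where the likelihood underflows to zero.
        kl = np.sum(lik * np.log(lik / q[None, :] + 1e-300), axis=1)
        p_new = p * np.exp(kl)
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    return theta, p

# With noisy data (sigma comparable to the parameter range) the optimal prior
# collapses onto a few atoms, dominated by the boundaries theta = 0 and 1;
# with precise data (small sigma) it spreads out toward the flat Jeffreys prior.
for sigma in (0.5, 0.02):
    theta, p = optimal_prior(sigma)
    n_atoms = np.sum(p > 1e-3)
    print(f"sigma={sigma}: {n_atoms} grid points carry weight; "
          f"mass on boundaries = {p[0] + p[-1]:.2f}")
```

Under these assumptions the output illustrates the abstract's claim: when the measurement is too noisy to constrain the parameter well, the information-maximizing prior puts essentially all of its weight on the boundary of the parameter interval, whereas in the data-rich limit it spreads over the interior as the Jeffreys prior does for this model.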