Near-optimal matrix recovery from random linear measurements.

Elad Romanov, Matan Gavish
Published in: Proceedings of the National Academy of Sciences of the United States of America (2018)
In matrix recovery from random linear measurements, one is interested in recovering an unknown M-by-N matrix $X_0$ from $n < MN$ measurements $y_i = \operatorname{Tr}(A_i^\top X_0)$, where each $A_i$ is an M-by-N measurement matrix with i.i.d. random entries, $i = 1, \ldots, n$. We present a matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular-value shrinker, a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for nuclear norm minimization (NNM). It is well known that there is a recovery tradeoff between the information content of the object $X_0$ to be recovered (specifically, its matrix rank $r$) and the number of linear measurements $n$ from which recovery is to be attempted. The precise tradeoff between $r$ and $n$, beyond which recovery by a given algorithm becomes possible, traces the so-called phase transition curve of that algorithm in the $(r, n)$ plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound for the minimal number of measurements needed for matrix recovery, making it not only state of the art in terms of convergence rate, but also near optimal in terms of the matrices it successfully recovers.
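The sketch below is only meant to illustrate the measurement model $y_i = \operatorname{Tr}(A_i^\top X_0)$ and the general idea of iterative recovery by singular-value shrinkage, assuming NumPy. It uses a simple projected-gradient loop with hard rank-r truncation, not the paper's AMP iteration with the optimal shrinker; all function names, parameter values, and the step-size choice are illustrative assumptions.

```python
import numpy as np

def make_problem(M=30, N=30, r=2, n=700, seed=0):
    """Rank-r matrix X0 and n random linear measurements y_i = Tr(A_i^T X0)."""
    rng = np.random.default_rng(seed)
    X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))
    # i.i.d. Gaussian measurement matrices, scaled so E[A* A] is the identity
    A = rng.standard_normal((n, M, N)) / np.sqrt(n)
    y = np.einsum("kij,ij->k", A, X0)           # y_k = Tr(A_k^T X0)
    return X0, A, y

def rank_r_projection(X, r):
    """Illustrative shrinker: keep the top-r singular values, zero the rest.
    (The paper's algorithm applies an optimal, data-adaptive singular-value
    shrinker inside an AMP iteration; this hard projection is only a stand-in.)"""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def recover(A, y, r, iters=300, step=1.0):
    """Projected-gradient sketch: gradient step on 1/2 ||y - A(X)||^2,
    then projection onto rank-r matrices."""
    _, M, N = A.shape
    X = np.zeros((M, N))
    for _ in range(iters):
        resid = y - np.einsum("kij,ij->k", A, X)   # measurement residual
        grad = np.einsum("k,kij->ij", resid, A)    # adjoint A* applied to residual
        X = rank_r_projection(X + step * grad, r)
    return X

if __name__ == "__main__":
    X0, A, y = make_problem()
    X_hat = recover(A, y, r=2)
    print("relative error:", np.linalg.norm(X_hat - X0) / np.linalg.norm(X0))
```

With these (comfortably many) measurements the loop should drive the relative error to near machine precision; near the phase transition curve discussed in the abstract, a naive iteration like this fails where the paper's AMP-based algorithm still succeeds.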