Accelerate Training of Restricted Boltzmann Machines via Iterative Conditional Maximum Likelihood Estimation.

Mingqi Wu, Ye Luo, Faming Liang
Published in: Statistics and Its Interface (2019)
Restricted Boltzmann machines (RBMs) have become a popular tool for feature coding and extraction in unsupervised learning in recent years. However, an efficient algorithm for training the RBM is still lacking, because its likelihood function contains an intractable normalizing constant. Existing algorithms, such as contrastive divergence and its variants, approximate the gradient of the likelihood function using Markov chain Monte Carlo. The approximation is time consuming and, moreover, the approximation error often impedes the convergence of the training algorithm. This paper proposes a fast algorithm for training RBMs by treating the hidden states as missing data and then estimating the parameters of the RBM via an iterative conditional maximum likelihood estimation approach, which avoids the issue of intractable normalizing constants. The numerical results indicate that the proposed algorithm provides a drastic improvement over the contrastive divergence algorithm in RBM training. The paper also presents an extension of the proposed algorithm that copes with missing data in RBM training and illustrates its application with an example of drug-target interaction prediction.
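
The abstract only outlines the idea, but one plausible reading of the iterative conditional approach can be sketched as follows: treat the hidden states as missing data, impute them from p(h | v, theta), and then update the parameters by gradient ascent on the conditional log-likelihoods p(v | h, theta) and p(h | v, theta), each of which factorizes over units and so involves no intractable normalizing constant. The NumPy sketch below illustrates this reading only; the function name icmle_step, the learning rate, and the single imputation per iteration are assumptions for illustration, not the authors' exact algorithm.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def icmle_step(V, W, a, b, lr=0.01, rng=None):
    # One imputation + conditional-likelihood update on a 0/1 data batch V (N x m).
    # Illustrative sketch only, not the algorithm from the paper.
    rng = np.random.default_rng() if rng is None else rng

    # Step 1: impute the "missing" hidden states from p(h | v, theta).
    p_h = sigmoid(V @ W + b)                      # (N, n): p(h_j = 1 | v)
    H = (rng.random(p_h.shape) < p_h).astype(float)

    # Step 2: gradient of log p(v | h, theta); each visible unit is an
    # independent logistic model of h, so no normalizing constant appears.
    p_v = sigmoid(H @ W.T + a)                    # (N, m): p(v_i = 1 | h)
    grad_W = (V - p_v).T @ H                      # (m, n)
    grad_a = (V - p_v).sum(axis=0)

    # Step 3: gradient of log p(h | v, theta); each hidden unit is an
    # independent logistic model of v.
    grad_W += V.T @ (H - p_h)
    grad_b = (H - p_h).sum(axis=0)

    # Step 4: ascend the summed conditional log-likelihoods.
    N = V.shape[0]
    W += lr * grad_W / N
    a += lr * grad_a / N
    b += lr * grad_b / N
    return W, a, b

# Illustrative usage on random binary data (shapes only, not real training data).
rng = np.random.default_rng(0)
V = (rng.random((64, 20)) < 0.5).astype(float)    # 64 samples, 20 visible units
W = 0.01 * rng.standard_normal((20, 10))          # 10 hidden units
a, b = np.zeros(20), np.zeros(10)
for _ in range(100):
    W, a, b = icmle_step(V, W, a, b, rng=rng)

Because both conditionals factorize into independent Bernoulli terms, each update reduces to logistic-regression-style gradients rather than an MCMC approximation of the likelihood gradient, which is the kind of saving the abstract attributes to avoiding the normalizing constant.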
Keyphrases
  • machine learning
  • deep learning
  • virtual reality
  • big data
  • artificial intelligence
  • neural network
  • gene expression
  • computed tomography
  • drug induced
  • data analysis