Efficient Learning of Restricted Boltzmann Machines Using Covariance Estimates

25 Oct 2018 · Vidyadhar Upadhya, P. S. Sastry

Learning RBMs using standard algorithms such as CD(k) involves gradient descent on the negative log-likelihood. One of the terms in the gradient, which involves an expectation w.r.t. the model distribution, is intractable and is obtained through an MCMC estimate. In this work we show that the Hessian of the log-likelihood can be written in terms of covariances of hidden and visible units, and hence all elements of the Hessian can be estimated using the same MCMC samples at small extra computational cost. Since inverting the full Hessian may be computationally expensive, we propose an algorithm that instead uses the inverse of a diagonal approximation of the Hessian. This essentially results in parameter-specific adaptive learning rates for the gradient descent process and improves the efficiency of learning RBMs compared to the standard methods. Specifically, we show that using the inverse of the diagonal approximation of the Hessian in the stochastic DC (difference of convex functions) programming approach results in very efficient learning of RBMs.
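
The covariance form of the Hessian is easiest to see for a binary RBM: the log-likelihood ℓ(θ) = log Σ_h exp(−E(v,h)) − log Z is a difference of two log-partition functions, so its Hessian is a difference of covariance matrices of the sufficient statistics (v_i h_j, v_i, h_j) under the conditional p(h|v) and under the model distribution. The sketch below is an illustration of that idea, not the authors' code: it reuses the same CD(k) Gibbs samples to estimate the diagonal Hessian entries for the weights and rescales each gradient component by the inverse. The damping constant `eps` and the use of the absolute value to preserve an ascent direction are assumptions, not details from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd_step_adaptive(W, b, c, v_data, k=1, eta=0.1, eps=1e-2, rng=None):
    """One CD(k)-style weight update with parameter-specific learning rates.

    W : (nv, nh) weights, b : (nv,) visible biases, c : (nh,) hidden biases,
    v_data : (N, nv) binary data batch. The Gibbs samples used for the
    gradient estimate are reused to estimate the Hessian diagonal via
    variances of v_i*h_j. Bias updates are omitted to keep the sketch short.
    """
    if rng is None:
        rng = np.random.default_rng()
    N = v_data.shape[0]

    # Positive phase: exact conditional expectations given the data.
    ph_pos = sigmoid(v_data @ W + c)          # E[h | v] for each data point
    grad_pos = v_data.T @ ph_pos / N          # estimate of E_data[v h^T]

    # Negative phase: k steps of block Gibbs sampling started at the data.
    v = v_data
    for _ in range(k):
        h = (rng.random((N, W.shape[1])) < sigmoid(v @ W + c)).astype(float)
        v = (rng.random((N, W.shape[0])) < sigmoid(h @ W.T + b)).astype(float)
    ph_neg = sigmoid(v @ W + c)
    grad_neg = v.T @ ph_neg / N               # MCMC estimate of E_model[v h^T]

    # Log-likelihood gradient w.r.t. W.
    g = grad_pos - grad_neg

    # Hessian diagonal for w_ij:
    #   E_data[ Var_{p(h|v)}(v_i h_j) ] - Var_model(v_i h_j).
    # For binary units, Var(v_i h_j | v) = v_i * ph_j * (1 - ph_j), and
    # v_i h_j is itself Bernoulli under the model, so Var = m(1 - m) with
    # m estimated by grad_neg (a plug-in estimate from the same samples).
    var_pos = v_data.T @ (ph_pos * (1.0 - ph_pos)) / N
    var_neg = grad_neg * (1.0 - grad_neg)
    h_diag = var_pos - var_neg

    # Parameter-specific rates: divide by damped absolute curvature
    # (abs + eps keeps the step an ascent direction; an assumed safeguard).
    return W + eta * g / (np.abs(h_diag) + eps)
```

As a usage note, repeatedly calling `cd_step_adaptive` on mini-batches plays the role of the standard CD(k) loop, with the per-parameter division replacing the single global learning rate.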
