Deep Learning -- Boltzmann Machine

The BM can be seen as a special case of the Hopfield network, and the RBM in turn as a special case of the BM. It took me a long time staring at the code below before it clicked; fortunately I had already read some theory articles on the BM.
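As a quick refresher on why this hierarchy holds (the notation here is mine, not from the original code): Hopfield networks and BMs share the same energy function over binary units,

    E(\mathbf{s}) = -\sum_{i<j} w_{ij} s_i s_j - \sum_i b_i s_i,

the difference being that Hopfield units update deterministically while BM units flip stochastically. The RBM then restricts the connections to a bipartite graph between a visible layer \mathbf{v} and a hidden layer \mathbf{h}, with no intra-layer weights:

    E(\mathbf{v}, \mathbf{h}) = -\mathbf{b}^{\top}\mathbf{v} - \mathbf{c}^{\top}\mathbf{h} - \mathbf{v}^{\top} W \mathbf{h}.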

def sample_h_given_v(self, v0_sample):
    ''' This function infers state of hidden units given visible units '''
    # compute the activation of the hidden units given a sample of the visibles
    pre_sigmoid_h1, h1_mean = self.propup(v0_sample)
    # get a sample of the hiddens given their activation
    # Note that theano_rng.binomial returns a symbolic sample of dtype
    # int64 by default. If we want to keep our computations in floatX
    # for the GPU we need to specify to return the dtype floatX
    h1_sample = self.theano_rng.binomial(size=h1_mean.shape, n=1, p=h1_mean,
                                         dtype=theano.config.floatX)
    return [pre_sigmoid_h1, h1_mean, h1_sample]

 

Here h1_sample is drawn from a binomial distribution (with n = 1, i.e. a Bernoulli draw per unit), sampling each hidden unit according to its probability p; this stochastic activation incorporates the spirit of simulated annealing.

You can use a sigmoid to squash an activation into an approximate probability in [0, 1] and then run a quick experiment with binomial, as sketched below.
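A minimal sketch of that experiment (assuming Theano with its shared RandomStreams; all variable names here are mine):

import numpy
import theano
import theano.tensor as T
from theano.tensor.shared_randomstreams import RandomStreams

rng = RandomStreams(seed=1234)

x = T.vector('x')          # raw activations, e.g. the result of Wx + b
p = T.nnet.sigmoid(x)      # squash into an approximate probability in [0, 1]
# each unit fires (value 1) with its own probability p, like h1_sample above
sample = rng.binomial(size=p.shape, n=1, p=p,
                      dtype=theano.config.floatX)

f = theano.function([x], [p, sample])

probs, draws = f(numpy.asarray([-2.0, 0.0, 2.0], dtype=theano.config.floatX))
print(probs)   # roughly [0.12, 0.5, 0.88]
print(draws)   # e.g. [0., 1., 1.] -- a fresh draw on every call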

def propup(self, vis):
    ''' This function propagates the visible units activation upwards to
    the hidden units

    Note that we return also the pre-sigmoid activation of the layer. As
    it will turn out later, due to how Theano deals with optimizations,
    this symbolic variable will be needed to write down a more
    stable computational graph (see details in the reconstruction cost
    function)
    '''
    pre_sigmoid_activation = T.dot(vis, self.W) + self.hbias
    return [pre_sigmoid_activation, T.nnet.sigmoid(pre_sigmoid_activation)]
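For context on that docstring note: the tutorial's reconstruction cost feeds the pre-sigmoid activation into the cross-entropy and applies T.nnet.sigmoid inside it, so that Theano's optimizer can recognize the log(sigmoid(x)) pattern and rewrite it as the numerically stable -softplus(-x). A rough sketch along those lines (the exact signature in the tutorial may differ slightly):

def get_reconstruction_cost(self, pre_sigmoid_nv):
    '''Cross-entropy between the input and its reconstruction.

    We apply the sigmoid here, rather than taking an already-squashed
    probability, so the optimizer can stabilize log(sigmoid(x)).
    '''
    cross_entropy = T.mean(
        T.sum(self.input * T.log(T.nnet.sigmoid(pre_sigmoid_nv)) +
              (1 - self.input) * T.log(1 - T.nnet.sigmoid(pre_sigmoid_nv)),
              axis=1))
    return cross_entropy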

 

propup and sample_h_given_v together implement one half of Gibbs sampling. In the paper, however, the components of a sample are not all sampled at once, whereas here Wx + b computes every hidden activation in one shot; this block update is valid for an RBM because the hidden units are conditionally independent given the visibles. I still need to go back to the paper.
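For completeness, the downward pass and one full Gibbs step in the same tutorial look roughly like this (sketched from the deeplearning.net RBM code; details may differ slightly):

def propdown(self, hid):
    '''Propagates the hidden units activation downwards to the visible units'''
    pre_sigmoid_activation = T.dot(hid, self.W.T) + self.vbias
    return [pre_sigmoid_activation, T.nnet.sigmoid(pre_sigmoid_activation)]

def sample_v_given_h(self, h0_sample):
    '''Infers state of visible units given hidden units'''
    pre_sigmoid_v1, v1_mean = self.propdown(h0_sample)
    v1_sample = self.theano_rng.binomial(size=v1_mean.shape, n=1, p=v1_mean,
                                         dtype=theano.config.floatX)
    return [pre_sigmoid_v1, v1_mean, v1_sample]

def gibbs_hvh(self, h0_sample):
    '''One full step of Gibbs sampling, starting from the hidden state'''
    pre_sigmoid_v1, v1_mean, v1_sample = self.sample_v_given_h(h0_sample)
    pre_sigmoid_h1, h1_mean, h1_sample = self.sample_h_given_v(v1_sample)
    return [pre_sigmoid_v1, v1_mean, v1_sample,
            pre_sigmoid_h1, h1_mean, h1_sample]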

 
