Implementation of the Dual Form of the Perceptron Algorithm

  • Learning strategy of the dual-form perceptron algorithm

        See Li Hang's Statistical Learning Methods (《统计学习方法》) for the detailed derivation.
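
        In brief, as presented in the book, the dual form learns one coefficient α_i per training sample together with the bias b; the model and the update triggered by a misclassified point (x_i, y_i) are:

$$
f(x) = \mathrm{sign}\Big(\sum_{j=1}^{N} \alpha_j y_j (x_j \cdot x) + b\Big),
\qquad
\alpha_i \leftarrow \alpha_i + \eta,\quad b \leftarrow b + \eta\, y_i
\ \ \text{whenever}\ \ y_i\Big(\sum_{j=1}^{N} \alpha_j y_j (x_j \cdot x_i) + b\Big) \le 0
$$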


  • Difference between the primal form and the dual form

        Both forms learn by stochastic gradient descent: a misclassified point is picked and the parameters are updated by its gradient; the code below simply scans the training set and updates on every misclassified point it meets.

        Primal form: the weight vector w and the bias b are updated directly.

        Dual form: w is kept implicitly as a linear combination of the training samples, w = Σ_j α_j·y_j·x_j, so only the coefficient α_i of the misclassified point and b are updated; the data then enter training only through inner products x_j·x_i, which can be precomputed once as the Gram matrix (a minimal sketch of the two update rules follows this list).
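
        To make the contrast concrete, here is a minimal sketch of the two update rules applied to a single misclassified point (x_i, y_i); the helper names primal_update and dual_update are illustrative and not part of the implementation below.

# Primal form: update the weight vector and bias directly.
def primal_update(w, b, x_i, y_i, eta):
    w = [w_k + eta * y_i * x_k for w_k, x_k in zip(w, x_i)]
    b = b + eta * y_i
    return w, b

# Dual form: the weight vector stays implicit (w = sum_j alpha[j] * y[j] * x[j]),
# so only the coefficient of the misclassified sample and the bias change.
def dual_update(alpha, b, i, y_i, eta):
    alpha[i] = alpha[i] + eta
    b = b + eta * y_i
    return alpha, b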


  • The code is as follows:
class dualPerceptron(object):

    def __init__(self):
        self.learning_rate = 1
        self.epoch = 10

    def train(self, features, labels):
        n = len(features)
        # alpha[i] records how many times sample i triggered an update,
        # scaled by the learning rate; implicitly w = sum_j alpha[j] * y_j * x_j.
        self.alpha = [0.0] * n
        self.bias = 0.0

        # Precompute the Gram matrix of pairwise inner products x_i . x_j.
        print('calc gram')
        self.gram = [[0.0] * n for _ in range(n)]
        for i in range(n):
            for j in range(n):
                dot = 0.0
                for k in range(len(features[0])):
                    dot += features[i][k] * features[j][k]
                self.gram[i][j] = dot
        print('gram over')
        print(self.gram)

        for idx in range(1, self.epoch + 1):
            print('epoch: {}'.format(idx))
            print(self.alpha)
            print(self.bias)
            for i in range(n):
                yi = labels[i]

                # Functional margin of sample i under the current alpha and bias.
                margin = 0.0
                for j in range(n):
                    margin += self.alpha[j] * labels[j] * self.gram[j][i]

                if yi * (margin + self.bias) <= 0:
                    # Misclassified: strengthen this sample's coefficient.
                    self.alpha[i] += self.learning_rate
                    self.bias += self.learning_rate * yi

        print(self.alpha)
        print(self.bias)


if __name__ == '__main__':

    p = dualPerceptron()
    data = [[3, 3], [4, 3], [1, 1]]
    label = [1, 1, -1]
    p.train(data, label)
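
After training, the primal weights can be recovered from alpha via w = Σ_i α_i·y_i·x_i, which makes it easy to check the result against the primal-form perceptron. Below is a minimal sketch; the helpers recover_weights and predict are illustrative additions, not part of the class above.

def recover_weights(alpha, labels, features):
    # w[k] = sum_i alpha[i] * y_i * x_i[k]
    dim = len(features[0])
    w = [0.0] * dim
    for a, y, x in zip(alpha, labels, features):
        for k in range(dim):
            w[k] += a * y * x[k]
    return w

def predict(w, b, x):
    # Sign of the functional margin w . x + b
    s = sum(w_k * x_k for w_k, x_k in zip(w, x))
    return 1 if s + b > 0 else -1

# Usage (inside the __main__ block above); for the three sample points this
# should recover w = [1.0, 1.0] and b = -3.0, matching the textbook example:
# w = recover_weights(p.alpha, label, data)
# print(w, p.bias)
# print([predict(w, p.bias, x) for x in data])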

