Hand-Written Logistic Regression (Preliminary Implementation)


import numpy as np


def sig_mod(inx):
    # Sigmoid (logistic) function
    return 1/(1 + np.exp(-inx))


def j():
    # Cross-entropy cost over the whole training set (uses the globals data, x, y, weights)
    return 1/len(data)*(-y.T@np.log(sig_mod(x@weights)) - (1 - y).T@np.log(1 - sig_mod(x@weights)))


alpha = float(input())  # learning rate, read from stdin
data = np.loadtxt(r'D:\data\machine-learning-ex2\machine-learning-ex2\ex2\ex2data2.txt', delimiter=',')
x = np.hstack((np.ones((len(data), 1)), data[:, [0, 1]]))  # prepend a bias column to the two feature columns
y = data[:, [2]]  # labels as an m x 1 column vector
weights = np.ones((data.shape[1], 1))  # 3 x 1: bias weight plus one weight per feature
print(weights)
print(j())  # cost before training
for _ in range(1500):
    h = sig_mod(x@weights)  # predicted probabilities
    error = y - h  # residuals
    weights = weights + alpha*(x.T@error)  # gradient step; the 1/m factor is folded into alpha
print(weights)
print(j())  # cost after training
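
For reference, the update inside the loop is (up to scaling) gradient descent on the cross-entropy cost computed by j(). With h = σ(Xθ), the cost and its gradient are

J(\theta) = -\frac{1}{m}\left[ y^\top \log h + (1 - y)^\top \log(1 - h) \right], \qquad \nabla_\theta J(\theta) = \frac{1}{m} X^\top (h - y)

so weights + alpha*(x.T@(y - h)) steps against the gradient; the missing 1/m factor is simply absorbed into the learning rate alpha.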

C:\Users\G3\Anaconda3\python.exe D:/test/word/logic.py
0.01
[[1.]
 [1.]
 [1.]]
[[0.93903696]]
[[-0.01418412]
 [-0.30352113]
 [-0.01813178]]
[[0.69024112]]

Process finished with exit code 0

Test case 1: the learned parameters change noticeably, and the cost function drops from 0.94 to about 0.7. I haven't run any other checks, so I don't know how good the fit actually is. (Feels mediocre.)
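
One quick sanity check that doesn't require plotting a decision boundary is training-set accuracy. A minimal sketch, reusing the x, y, weights and sig_mod names from the script above (the 0.5 threshold is just the usual convention, not anything the original code enforces):

# Predict class 1 whenever the estimated probability exceeds 0.5
preds = (sig_mod(x @ weights) >= 0.5).astype(int)
accuracy = np.mean(preds == y)  # fraction of training examples classified correctly
print(f'training accuracy: {accuracy:.3f}')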

0.01
[[1.]
 [1.]
 [1.]]
[[nan]]
[[-114.26013395]
 [  36.59101548]
 [ -11.95281657]]
[[nan]]
D:/test/word/logic.py:9: RuntimeWarning: divide by zero encountered in log
  return 1/len(data)*(-y.T@np.log(sig_mod(x@weights)) - (1 - y).T@np.log(1 - sig_mod(x@weights)))
D:/test/word/logic.py:9: RuntimeWarning: invalid value encountered in matmul
  return 1/len(data)*(-y.T@np.log(sig_mod(x@weights)) - (1 - y).T@np.log(1 - sig_mod(x@weights)))

Process finished with exit code 0

Test case 2: the parameters change drastically, and the cost function breaks down with log(0) warnings and evaluates to nan. Fit quality unknown. (Feels unsatisfactory, or maybe it isn't fitting at all?) There is still room for optimization.
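
The log(0) comes from sigmoid saturation: when x@weights is large in magnitude, sig_mod returns exactly 0 or 1 in float64, so np.log(1 - h) or np.log(h) hits log(0) and the 0 * inf products in the matmul produce nan. Two common remedies are scaling the features before training and clipping the probabilities inside the cost. A minimal sketch under the same data/x/y layout as above (the normalization step and the j_safe helper are my additions, not part of the original script); the scaling would go right after loading data and before the training loop:

# Feature scaling: zero mean, unit variance per column (bias column added afterwards)
feats = data[:, [0, 1]]
feats = (feats - feats.mean(axis=0)) / feats.std(axis=0)
x = np.hstack((np.ones((len(data), 1)), feats))


def j_safe():
    # Clip probabilities away from exactly 0 and 1 so np.log never sees a zero
    h = np.clip(sig_mod(x @ weights), 1e-12, 1 - 1e-12)
    return 1/len(data)*(-y.T@np.log(h) - (1 - y).T@np.log(1 - h))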

Reinventing the wheel really is a hassle, as expected...

An expert's code is attached below for reference.
