The loss function of the Softmax linear classifier is

$$L_i = -\log\left(\frac{e^{s_{y_i}}}{\sum_{j=1}^{C} e^{s_j}}\right)$$

where $s = W^T x$ is the score function, $y_i$ is the label of the correct class for sample $i$, and $C$ is the total number of classes. This loss is also known as the cross-entropy loss.
In practice, because of the exponentials, we usually shift the scores $s$ before exponentiating to avoid numerical overflow. The trick relies on the fact that subtracting the same constant from every score leaves the probabilities unchanged:

$$\frac{e^{s_{y_i}}}{\sum_j e^{s_j}} = \frac{e^{s_{y_i} - m}}{\sum_j e^{s_j - m}}, \qquad m = \max_j s_j$$

Subtracting the maximum score makes the largest exponent 0, so nothing overflows. The corresponding Python example code is:
scores = np.array([123, 456, 789]) # example with 3 classes and each having large scores
scores -= np.max(scores) # scores becomes [-666, -333, 0]
p = np.exp(scores) / np.sum(np.exp(scores))
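Continuing this snippet, if the index of the true class were, say, 2 (a made-up label, purely for illustration), the per-example cross-entropy loss would simply be the negative log of the probability assigned to that class:

loss_i = -np.log(p[2])  # -log of the probability of the (hypothetical) correct class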
The Softmax classifier assigns a probability to every class, and its loss reflects the predicted probability of the true label: the closer that probability is to 1, the closer the loss is to 0. When a regularization term is added, a larger hyperparameter $\lambda$ penalizes the weights $W$ more strongly, which shrinks $W$ and makes its entries more uniform, so the probabilities over the different classes also become more uniform. The short sketch below illustrates this.
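A minimal numpy sketch of this effect. The input, the weights, and the factor of 0.5 below are all made-up numbers: halving $W$ simply mimics what strong regularization does by shrinking the weights.

import numpy as np

x = np.array([1.0, 2.0, 3.0])                  # a made-up input (D = 3)
W = np.array([[1.0, 0.5, 2.0],
              [-2.0, 1.0, 0.0],
              [0.5, -1.0, 1.0]])               # made-up weights of shape (D, C), C = 3

for scale in [1.0, 0.5]:                       # 0.5 mimics a strongly regularized, shrunken W
    s = x.dot(scale * W)                       # class scores
    p = np.exp(s - np.max(s)) / np.sum(np.exp(s - np.max(s)))
    print(scale, np.round(p, 3))
# With scale 0.5 the probabilities are closer to uniform,
# but the ordering of the classes is unchanged.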
As this example shows, strong regularization drives the probability distribution toward uniform. Note, however, that the relative ordering of the probabilities does not change.
The main difference between the linear SVM classifier and the Softmax linear classifier is the loss function. The SVM only cares whether the correct class outscores the incorrect classes by the margin $\Delta$ (here $\Delta = 1$); once the margin is met, it ignores by how much, discarding the details. In the Softmax classifier, by contrast, every class score affects the loss. For example, with $C = 3$ classes, suppose two samples have scores $[10, -10, -10]$ and $[10, 9, 9]$ and the true label is class 0. For the SVM both losses $L_i$ are 0, but for the Softmax classifier the two losses $L_i$ are 0.00 and 0.55 respectively, a big difference. A quick numerical check of these numbers follows.
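Here is a small standalone numpy check of those two values. The hinge-loss and softmax-loss helpers are written inline for this example only; they are not part of the classifier code below.

import numpy as np

def svm_loss_i(s, y, delta=1.0):
    # multiclass hinge loss for a single example
    margins = np.maximum(0, s - s[y] + delta)
    margins[y] = 0
    return np.sum(margins)

def softmax_loss_i(s, y):
    # cross-entropy loss for a single example, with the max-subtraction trick
    s = s - np.max(s)
    return -s[y] + np.log(np.sum(np.exp(s)))

for s in [np.array([10., -10., -10.]), np.array([10., 9., 9.])]:
    print(svm_loss_i(s, 0), round(softmax_loss_i(s, 0), 2))
# prints 0.0 0.0 and 0.0 0.55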
Below is example code for the Softmax linear classifier; the complete code for this post is available on my:
Github
Gitee (码云)
# Load the raw CIFAR-10 data.
cifar10_dir = 'CIFAR10/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
Training data shape: (50000, 32, 32, 3)
Training labels shape: (50000,)
Test data shape: (10000, 32, 32, 3)
Test labels shape: (10000,)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
num_each_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, num_each_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + (y + 1)
plt.subplot(num_each_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data into train, val, test sets and dev sets
num_train = 49000
num_val = 1000
num_test = 1000
num_dev = 500
# Validation set
mask = range(num_train, num_train + num_val)
X_val = X_train[mask]
y_val = y_train[mask]
# Train set
mask = range(num_train)
X_train = X_train[mask]
y_train = y_train[mask]
# Test set
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Development set
mask = np.random.choice(num_train, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('Development data shape: ', X_dev.shape)
print('Development labels shape: ', y_dev.shape)
Train data shape: (49000, 32, 32, 3)
Train labels shape: (49000,)
Validation data shape: (1000, 32, 32, 3)
Validation labels shape (1000,)
Test data shape: (1000, 32, 32, 3)
Test labels shape: (1000,)
Development data shape: (500, 32, 32, 3)
Development labels shape: (500,)
# Preprocessing: reshape the images data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
print('Train data shape: ', X_train.shape)
print('Validation data shape: ', X_val.shape)
print('Test data shape: ', X_test.shape)
print('Development data shape: ', X_dev.shape)
Train data shape: (49000, 3072)
Validation data shape: (1000, 3072)
Test data shape: (1000, 3072)
Development data shape: (500, 3072)
# Preprocessing: subtract the mean image
mean_image = np.mean(X_train, axis=0)
plt.figure(figsize=(4,4))
plt.imshow(mean_image.reshape((32,32,3)).astype('uint8'))
plt.show()
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# append the bias dimension of ones (i.e. bias trick)
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print('Train data shape: ', X_train.shape)
print('Validation data shape: ', X_val.shape)
print('Test data shape: ', X_test.shape)
print('Development data shape: ', X_dev.shape)
Train data shape: (49000, 3073)
Validation data shape: (1000, 3073)
Test data shape: (1000, 3073)
Development data shape: (500, 3073)
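As a quick sanity check on the bias trick (a standalone sketch with made-up shapes, not part of the assignment code): appending a constant 1 to every input means the last row of an augmented weight matrix plays the role of the bias, so the augmented product reproduces X.dot(W) + b.

import numpy as np

X = np.random.randn(4, 3)                          # 4 made-up samples, D = 3
W = np.random.randn(3, 10)                         # made-up weights, C = 10
b = np.random.randn(10)                            # made-up biases

X_aug = np.hstack([X, np.ones((X.shape[0], 1))])   # same hstack as above
W_aug = np.vstack([W, b])                          # bias folded in as the last row
print(np.allclose(X_aug.dot(W_aug), X.dot(W) + b)) # True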
class Softmax(object):
def __init__(self):
self.W = None
def loss_naive(self, X, y, reg):
"""
Softmax loss function, naive implementation (with loops).
Inputs:
- X: A numpy array of shape (num_train, D) contain the training data
consisting of num_train samples each of dimension D
- y: A numpy array of shape (num_train,) contain the training labels,
where y[i] is the label of X[i]
- reg: float, regularization strength
Return:
- loss: the loss value between predict value and ground truth
- dW: gradient of W
"""
# Initialize loss and dW
loss = 0.0
dW = np.zeros(self.W.shape)
# Compute the loss and dW
num_train = X.shape[0]
num_classes = self.W.shape[1]
for i in range(num_train):
scores = np.dot(X[i], self.W)
scores -= np.max(scores)
correct_class = y[i]
correct_score = scores[correct_class]
loss_i = -correct_score + np.log(np.sum(np.exp(scores)))
loss += loss_i
for j in range(num_classes):
softmax_output = np.exp(scores[j]) / np.sum(np.exp(scores))
if j == correct_class:
dW[:,j] += (-1 + softmax_output) * X[i,:]
else:
dW[:,j] += softmax_output * X[i,:]
loss /= num_train
loss += 0.5 * reg * np.sum(self.W * self.W)
dW /= num_train
dW += reg * self.W
return loss, dW
def loss_vectorized(self, X, y, reg):
"""
Softmax loss function, vectorized implementation (without loops).
Inputs:
- X: A numpy array of shape (num_train, D) contain the training data
consisting of num_train samples each of dimension D
- y: A numpy array of shape (num_train,) contain the training labels,
where y[i] is the label of X[i]
- reg: float, regularization strength
Return:
- loss: the loss value between predict value and ground truth
- dW: gradient of W
"""
# Initialize loss and dW
loss = 0.0
dW = np.zeros(self.W.shape)
# Compute the loss and dW
num_train = X.shape[0]
num_classes = self.W.shape[1]
# loss
scores = np.dot(X, self.W)
scores -= np.max(scores, axis=1).reshape(-1, 1)
softmax_output = np.exp(scores) / np.sum(np.exp(scores), axis=1).reshape(-1, 1)
loss = np.sum(-np.log(softmax_output[range(softmax_output.shape[0]), list(y)]))
loss /= num_train
loss += 0.5 * reg * np.sum(self.W * self.W)
# dW
dS = softmax_output
dS[range(dS.shape[0]), list(y)] += -1
dW = np.dot(X.T, dS)
dW /= num_train
dW += reg * self.W
return loss, dW
def train(self, X, y, learning_rate = 1e-3, reg = 1e-5, num_iters = 100,
batch_size = 200, print_flag = False):
"""
Train Softmax classifier using SGD
Inputs:
- X: A numpy array of shape (num_train, D) contain the training data
consisting of num_train samples each of dimension D
- y: A numpy array of shape (num_train,) contain the training labels,
where y[i] is the label of X[i]; y[i] = c means X[i] has label c, with 0 <= c < C
- learning_rate: (float) learning rate for optimization
- reg: (float) regularization strength
- num_iters: (integer) number of steps to take during optimization
- batch_size: (integer) number of training examples to use at each step
- print_flag: (boolean) If true, print the progress during optimization
Outputs:
- loss_history: A list containing the loss at each training iteration
"""
loss_history = []
num_train = X.shape[0]
dim = X.shape[1]
num_classes = np.max(y) + 1
# Initialize W
if self.W is None:
self.W = 0.001 * np.random.randn(dim, num_classes)
# iteration and optimization
for t in range(num_iters):
idx_batch = np.random.choice(num_train, batch_size, replace=True)
X_batch = X[idx_batch]
y_batch = y[idx_batch]
loss, dW = self.loss_vectorized(X_batch, y_batch, reg)
loss_history.append(loss)
self.W += -learning_rate * dW
if print_flag and t%100 == 0:
print('iteration %d / %d: loss %f' % (t, num_iters, loss))
return loss_history
def predict(self, X):
"""
Use the trained weights of Softmax to predict data labels
Inputs:
- X: A numpy array of shape (N, D) containing the data to be classified
Outputs:
- y_pred: A numpy array, predicted labels for the data in X
"""
y_pred = np.zeros(X.shape[0])
scores = np.dot(X, self.W)
y_pred = np.argmax(scores, axis=1)
return y_pred
def loss_naive1(X, y, W, reg):
"""
Softmax loss function, naive implementation (with loops).
Inputs:
- X: A numpy array of shape (num_train, D) contain the training data
consisting of num_train samples each of dimension D
- y: A numpy array of shape (num_train,) contain the training labels,
where y[i] is the label of X[i]
- W: A numpy array of shape (D, C) contain the weights
- reg: float, regularization strength
Return:
- loss: the loss value between predict value and ground truth
- dW: gradient of W
"""
# Initialize loss and dW
loss = 0.0
dW = np.zeros(W.shape)
# Compute the loss and dW
num_train = X.shape[0]
num_classes = W.shape[1]
for i in range(num_train):
scores = np.dot(X[i], W)
scores -= np.max(scores)
correct_class = y[i]
correct_score = scores[correct_class]
loss_i = -correct_score + np.log(np.sum(np.exp(scores)))
loss += loss_i
for j in range(num_classes):
softmax_output = np.exp(scores[j]) / np.sum(np.exp(scores))
if j == correct_class:
dW[:,j] += (-1 + softmax_output) * X[i,:]
else:
dW[:,j] += softmax_output * X[i,:]
loss /= num_train
loss += 0.5 * reg * np.sum(W * W)
dW /= num_train
dW += reg * W
return loss, dW
def loss_vectorized1(X, y, W, reg):
"""
Softmax loss function, vectorized implementation (without loops).
Inputs:
- X: A numpy array of shape (num_train, D) contain the training data
consisting of num_train samples each of dimension D
- y: A numpy array of shape (num_train,) contain the training labels,
where y[i] is the label of X[i]
- W: A numpy array of shape (D, C) contain the weights
- reg: float, regularization strength
Return:
- loss: the loss value between predict value and ground truth
- dW: gradient of W
"""
# Initialize loss and dW
loss = 0.0
dW = np.zeros(W.shape)
# Compute the loss and dW
num_train = X.shape[0]
num_classes = W.shape[1]
# loss
scores = np.dot(X, W)
scores -= np.max(scores, axis=1).reshape(-1, 1)
softmax_output = np.exp(scores) / np.sum(np.exp(scores), axis=1).reshape(-1, 1)
loss = np.sum(-np.log(softmax_output[range(softmax_output.shape[0]), list(y)]))
loss /= num_train
loss += 0.5 * reg * np.sum(W * W)
# dW
dS = softmax_output
dS[range(dS.shape[0]), list(y)] += -1
dW = np.dot(X.T, dS)
dW /= num_train
dW += reg * W
return loss, dW
from gradient_check import grad_check_sparse
import time
# generate a random Softmax weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
# Without regularization
loss, dW = loss_naive1(X_dev, y_dev, W, 0)
f = lambda W: loss_naive1(X_dev, y_dev, W, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, dW)
# With regularization
loss, dW = loss_naive1(X_dev, y_dev, W, 5e1)
f = lambda W: loss_naive1(X_dev, y_dev, W, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, dW)
numerical: 1.382074 analytic: 1.382074, relative error: 2.603780e-08
numerical: 0.587997 analytic: 0.587997, relative error: 2.764543e-08
numerical: 2.466843 analytic: 2.466843, relative error: 8.029571e-09
numerical: -1.840196 analytic: -1.840196, relative error: 1.781980e-09
numerical: 1.444645 analytic: 1.444645, relative error: 6.200972e-08
numerical: -1.381959 analytic: -1.381959, relative error: 1.643225e-08
numerical: 1.122692 analytic: 1.122692, relative error: 1.600617e-08
numerical: 1.249459 analytic: 1.249459, relative error: 2.936177e-09
numerical: 1.556929 analytic: 1.556929, relative error: 1.452262e-08
numerical: 1.976238 analytic: 1.976238, relative error: 1.619212e-08
numerical: 2.308430 analytic: 2.308430, relative error: 7.769452e-10
numerical: -2.698441 analytic: -2.698440, relative error: 2.672068e-08
numerical: 1.991475 analytic: 1.991475, relative error: 3.035301e-08
numerical: -1.891048 analytic: -1.891048, relative error: 1.407403e-08
numerical: 1.409085 analytic: 1.409085, relative error: 1.916174e-08
numerical: 1.688600 analytic: 1.688600, relative error: 6.298778e-10
numerical: -0.140043 analytic: -0.140043, relative error: 7.654000e-08
numerical: -0.563577 analytic: -0.563577, relative error: 5.109196e-08
numerical: 0.224879 analytic: 0.224879, relative error: 1.218421e-07
numerical: -5.497099 analytic: -5.497099, relative error: 1.992705e-08
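For reference, grad_check_sparse is the cs231n-style gradient-check utility. A minimal sketch of what it does (the exact signature and defaults here are assumptions): sample a few random entries of W, estimate the gradient at each with a centered difference, and compare it against the analytic gradient.

import numpy as np

def grad_check_sparse_sketch(f, x, analytic_grad, num_checks=10, h=1e-5):
    # sample random coordinates and compare numerical vs. analytic gradients
    for _ in range(num_checks):
        ix = tuple(np.random.randint(d) for d in x.shape)
        old_value = x[ix]
        x[ix] = old_value + h
        fxph = f(x)                          # f(x + h)
        x[ix] = old_value - h
        fxmh = f(x)                          # f(x - h)
        x[ix] = old_value                    # restore original value
        grad_numerical = (fxph - fxmh) / (2 * h)
        grad_analytic = analytic_grad[ix]
        rel_error = abs(grad_numerical - grad_analytic) / (abs(grad_numerical) + abs(grad_analytic))
        print('numerical: %f analytic: %f, relative error: %e' % (grad_numerical, grad_analytic, rel_error))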
softmax = Softmax()
loss_history = softmax.train(X_train, y_train, learning_rate = 1e-7, reg = 2.5e4, num_iters = 1500,
batch_size = 200, print_flag = True)
iteration 0 / 1500: loss 389.013148
iteration 100 / 1500: loss 235.704700
iteration 200 / 1500: loss 142.948192
iteration 300 / 1500: loss 87.236112
iteration 400 / 1500: loss 53.494956
iteration 500 / 1500: loss 33.153764
iteration 600 / 1500: loss 20.907861
iteration 700 / 1500: loss 13.442687
iteration 800 / 1500: loss 8.929345
iteration 900 / 1500: loss 6.238832
iteration 1000 / 1500: loss 4.559590
iteration 1100 / 1500: loss 3.501153
iteration 1200 / 1500: loss 2.924789
iteration 1300 / 1500: loss 2.552109
iteration 1400 / 1500: loss 2.370926
# Plot the loss_history
plt.plot(loss_history)
plt.xlabel('Iteration number')
plt.ylabel('loss value')
plt.show()
# Use softmax classifier to predict
# Training set
y_pred = softmax.predict(X_train)
num_correct = np.sum(y_pred == y_train)
accuracy = np.mean(y_pred == y_train)
print('Training correct %d/%d: The accuracy is %f' % (num_correct, X_train.shape[0], accuracy))
# Test set
y_pred = softmax.predict(X_test)
num_correct = np.sum(y_pred == y_test)
accuracy = np.mean(y_pred == y_test)
print('Test correct %d/%d: The accuracy is %f' % (num_correct, X_test.shape[0], accuracy))
Training correct 17023/49000: The accuracy is 0.347408
Test correct 359/1000: The accuracy is 0.359000
learning_rates = [1.4e-7, 1.5e-7, 1.6e-7]
regularization_strengths = [8000.0, 9000.0, 10000.0, 11000.0, 18000.0, 19000.0, 20000.0, 21000.0]
results = {}
best_lr = None
best_reg = None
best_val = -1 # The highest validation accuracy that we have seen so far.
best_softmax = None # The Softmax object that achieved the highest validation accuracy.
for lr in learning_rates:
for reg in regularization_strengths:
softmax = Softmax()
loss_history = softmax.train(X_train, y_train, learning_rate = lr, reg = reg, num_iters = 3000)
y_train_pred = softmax.predict(X_train)
accuracy_train = np.mean(y_train_pred == y_train)
y_val_pred = softmax.predict(X_val)
accuracy_val = np.mean(y_val_pred == y_val)
results[(lr, reg)] = accuracy_train, accuracy_val
if accuracy_val > best_val:
best_lr = lr
best_reg = reg
best_val = accuracy_val
best_softmax = softmax
print('lr: %e reg: %e train accuracy: %f val accuracy: %f' %
(lr, reg, results[(lr, reg)][0], results[(lr, reg)][1]))
print('Best validation accuracy during cross-validation:\nlr = %e, reg = %e, best_val = %f' %
(best_lr, best_reg, best_val))
lr: 1.400000e-07 reg: 8.000000e+03 train accuracy: 0.376388 val accuracy: 0.381000
lr: 1.400000e-07 reg: 9.000000e+03 train accuracy: 0.378061 val accuracy: 0.393000
lr: 1.400000e-07 reg: 1.000000e+04 train accuracy: 0.375061 val accuracy: 0.394000
lr: 1.400000e-07 reg: 1.100000e+04 train accuracy: 0.370918 val accuracy: 0.389000
lr: 1.400000e-07 reg: 1.800000e+04 train accuracy: 0.361857 val accuracy: 0.378000
lr: 1.400000e-07 reg: 1.900000e+04 train accuracy: 0.354327 val accuracy: 0.373000
lr: 1.400000e-07 reg: 2.000000e+04 train accuracy: 0.357531 val accuracy: 0.370000
lr: 1.400000e-07 reg: 2.100000e+04 train accuracy: 0.351837 val accuracy: 0.374000
lr: 1.500000e-07 reg: 8.000000e+03 train accuracy: 0.380429 val accuracy: 0.387000
lr: 1.500000e-07 reg: 9.000000e+03 train accuracy: 0.375959 val accuracy: 0.393000
lr: 1.500000e-07 reg: 1.000000e+04 train accuracy: 0.373857 val accuracy: 0.397000
lr: 1.500000e-07 reg: 1.100000e+04 train accuracy: 0.371918 val accuracy: 0.386000
lr: 1.500000e-07 reg: 1.800000e+04 train accuracy: 0.359735 val accuracy: 0.379000
lr: 1.500000e-07 reg: 1.900000e+04 train accuracy: 0.359796 val accuracy: 0.373000
lr: 1.500000e-07 reg: 2.000000e+04 train accuracy: 0.352041 val accuracy: 0.365000
lr: 1.500000e-07 reg: 2.100000e+04 train accuracy: 0.356531 val accuracy: 0.372000
lr: 1.600000e-07 reg: 8.000000e+03 train accuracy: 0.378265 val accuracy: 0.394000
lr: 1.600000e-07 reg: 9.000000e+03 train accuracy: 0.377980 val accuracy: 0.391000
lr: 1.600000e-07 reg: 1.000000e+04 train accuracy: 0.371429 val accuracy: 0.389000
lr: 1.600000e-07 reg: 1.100000e+04 train accuracy: 0.374224 val accuracy: 0.391000
lr: 1.600000e-07 reg: 1.800000e+04 train accuracy: 0.360796 val accuracy: 0.386000
lr: 1.600000e-07 reg: 1.900000e+04 train accuracy: 0.355592 val accuracy: 0.371000
lr: 1.600000e-07 reg: 2.000000e+04 train accuracy: 0.356122 val accuracy: 0.368000
lr: 1.600000e-07 reg: 2.100000e+04 train accuracy: 0.354143 val accuracy: 0.367000
Best validation accuracy during cross-validation:
lr = 1.500000e-07, reg = 1.000000e+04, best_val = 0.397000
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
marker_size = 100
plt.figure(figsize=(10,10))
# training accuracy
plt.subplot(2, 1, 1)
colors = [results[x][0] for x in results]
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
# validation accuracy
plt.subplot(2, 1, 2)
colors = [results[x][1] for x in results]
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.show()
y_pred = best_softmax.predict(X_test)
num_correct = np.sum(y_pred == y_test)
accuracy = np.mean(y_pred == y_test)
print('Test correct %d/%d: The accuracy is %f' % (num_correct, num_test, accuracy))
Test correct 379/1000: The accuracy is 0.379000
W = best_softmax.W[:-1,:] # delete the bias
W = np.reshape(W, (32, 32, 3, 10))
W_max, W_min = np.max(W), np.min(W)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
plt.figure(figsize=(10,5))
for i in range(10):
plt.subplot(2, 5, i+1)
# Rescale the weights to be between 0 and 255
imgW = 255.0 * (W[:,:,:,i] - W_min) / (W_max - W_min)
plt.imshow(imgW.astype('uint8'))
plt.axis('off')
plt.title(classes[i])