[Hands-on] Building a Simple Neural Network with Keras

I'm still a beginner. Until now I'd only ever tweaked models other people had built and had never written a complete neural network by hand, so let's start with a simple two-layer network on a classification task!
Code reference: https://www.cnblogs.com/hhh5460/p/10195269.html

Manually generate a batch of data:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

#================================
# Prepare the data
N = 100  # 100 samples per class
D = 2    # dimensionality
K = 3    # 3 classes
X = np.zeros((N*K, D))  # a 300x2 matrix
y = np.zeros((N*K), dtype='uint8')

for j in range(K):
    ix = list(range(N*j, N*(j+1)))
    r = np.linspace(0.0, 1, N)                                 # radius
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2  # angle, with noise

    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]  # polar -> Cartesian: each class is one spiral arm
    y[ix] = j
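The loop above builds three intertwined spiral arms in polar coordinates: the radius r grows from 0 to 1 while the angle t sweeps a different range for each class, with a little Gaussian noise on the angle. A quick standalone sanity check of that construction (regenerating the same data so the snippet runs on its own):

```python
import numpy as np

N, D, K = 100, 2, 3
X = np.zeros((N*K, D))
y = np.zeros(N*K, dtype='uint8')
for j in range(K):
    ix = range(N*j, N*(j+1))
    r = np.linspace(0.0, 1, N)
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2
    X[ix] = np.c_[r*np.sin(t), r*np.cos(t)]
    y[ix] = j

assert X.shape == (300, 2)
assert np.bincount(y).tolist() == [100, 100, 100]  # 100 samples per class
# noise is on the angle only, so every point still lies inside the unit disc
assert np.all(np.linalg.norm(X, axis=1) <= 1.0 + 1e-9)
```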

# Show the data, one class per colour
# for j in range(K):
#     plt.plot(X[y == j, 0], X[y == j, 1], 'o')
# plt.show()

What the generated data looks like:
(figure: scatter plot of the three spiral-shaped classes)
Build the network. Note that the first Dense layer needs to know the input shape, declared either with an Input layer:

Input(shape=(2,))

or directly on the layer with Dense(input_dim=2).

from keras.layers import Input,Dense
from keras.models import Model

# Define a two-layer model with the functional API
model_in = Input(shape=(2,))
model_out = Dense(10, activation='relu')(model_in)  # input shape already given by Input
model_out = Dense(3, activation='softmax')(model_out)
model = Model(model_in, model_out)
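Under the hood each Dense layer is just an affine map followed by an activation. A minimal numpy sketch of the forward pass this model computes (the weights here are random placeholders, not the trained ones):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 10)), np.zeros(10)  # Dense(10), input dim 2
W2, b2 = rng.normal(size=(10, 3)), np.zeros(3)   # Dense(3)

def relu(z):
    return np.maximum(0, z)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract row max for stability
    return e / e.sum(axis=1, keepdims=True)

x = rng.normal(size=(5, 2))       # a batch of 5 two-dimensional points
h = relu(x @ W1 + b1)             # hidden layer activations: (5, 10)
p = softmax(h @ W2 + b2)          # class probabilities: (5, 3)

assert p.shape == (5, 3)
assert np.allclose(p.sum(axis=1), 1.0)  # each row is a probability distribution
```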

# Binary classification: use binary crossentropy
# Multi-class classification: use categorical crossentropy
# Regression: use mean squared error

from keras import optimizers
model.compile(optimizer=optimizers.RMSprop(lr=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X,y,batch_size=50,epochs=1000)
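The loss is sparse_categorical_crossentropy because y holds integer class indices (0, 1, 2). Plain categorical_crossentropy expects one-hot labels instead; Keras provides keras.utils.to_categorical for the conversion, whose numpy equivalent is a single indexing trick:

```python
import numpy as np

y_small = np.array([0, 2, 1, 2], dtype='uint8')       # integer labels (for sparse_*)
one_hot = np.eye(3, dtype='float32')[y_small]         # one-hot labels (for categorical_*)

print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
assert np.array_equal(one_hot.argmax(axis=1), y_small)  # argmax recovers the labels
```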

Generate test data the same way:

# Generate the test-set data

X1 = np.zeros((N*K, D))  # a 300x2 matrix
y1 = np.zeros((N*K), dtype='uint8')

for j in range(K):
    ix = list(range(N*j, N*(j+1)))
    r = np.linspace(0.0, 1, N)
    t = np.linspace(j*4, (j+1)*4, N) + np.random.randn(N)*0.2

    X1[ix] = np.c_[r*np.sin(t), r*np.cos(t)]  # same spiral construction as the training set
    y1[ix] = j


# Evaluate
# test_loss, test_acc = model.evaluate(X1, y1)
from sklearn.metrics import log_loss, accuracy_score
y_p = model.predict(X1)
# print(test_acc)
print('log loss: %f, accuracy: %f' % (
                    log_loss(y1, y_p), accuracy_score(y1, np.argmax(y_p, axis=1))))
y_p = np.argmax(y_p, axis=1)  # index of the row-wise maximum, i.e. the predicted class
print(y_p)
print(y1)
plt.plot(y1, y_p, 'o')  # test labels vs. predictions
plt.show()
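model.predict returns one softmax row per sample, so np.argmax(..., axis=1) converts probabilities into class predictions, and accuracy is simply the fraction that match the true labels. A small numpy sketch with made-up probabilities:

```python
import numpy as np

y_true = np.array([0, 1, 2, 2])
probs = np.array([[0.8, 0.1, 0.1],   # predicted 0 (correct)
                  [0.2, 0.7, 0.1],   # predicted 1 (correct)
                  [0.1, 0.3, 0.6],   # predicted 2 (correct)
                  [0.5, 0.4, 0.1]])  # predicted 0 (wrong)
y_pred = np.argmax(probs, axis=1)    # index of the largest value in each row
acc = np.mean(y_pred == y_true)
print(y_pred, acc)  # [0 1 2 0] 0.75
```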

Results:


 50/300 [====>.........................] - ETA: 0s - loss: 0.0119 - acc: 1.0000
300/300 [==============================] - 0s 13us/step - loss: 0.0252 - acc: 0.9900
Epoch 1000/1000

 50/300 [====>.........................] - ETA: 0s - loss: 0.0257 - acc: 1.0000
300/300 [==============================] - 0s 17us/step - loss: 0.0260 - acc: 0.9900
log loss: 0.030671, accuracy: 0.990000
[1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 0 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2]

Did you spot the differences?
(figure: the test data and predictions; blue is the training data)

And that's a neat little network, built from scratch!
