For more assignments, see ⭐ the Hung-yi Lee 2020 Machine Learning resource collection.
The assignments can be found directly on Hung-yi Lee's course homepage.
If you cannot open Colab, mirrored Jupyter notebook files and the TA's slides are provided below.
The images in those notebooks are also hosted on Colab; in case they fail to load, they have been downloaded and embedded in this post.
The data was collected from a Taiwanese environmental monitoring station.
Task: use linear regression to predict the PM2.5 value of the 10th hour from the preceding 9 hours of data.
This assignment uses one year of observations from a single monitoring station; every hour has 18 measured indicators, which serve as the features. The data is split into train.csv and test.csv: train.csv holds all data from the first 20 days of each month, and test.csv is sampled from the station's remaining data.
The 18 indicators (features) are: AMB_TEMP, CH4, CO, NHMC, NO, NO2, NOx, O3, PM10, PM2.5, RAINFALL, RH, SO2, THC, WD_HR, WIND_DIREC, WIND_SPEED, WS_HR.
Input: 9 hours of data with all 18 features listed above
Output: the PM2.5 value of the 10th hour
Model: linear regression
import sys
import pandas as pd
import numpy as np
# Read train.csv; the Traditional Chinese text is encoded in Big5
data = pd.read_csv('./train.csv', encoding = 'big5')
# Show the first 10 rows
data.head(10)
Column 1 is the date, column 2 is the station location, column 3 is the indicator name, and columns 4-27 hold the 24 hourly values (hours 0-23).
data.shape
Out:
(4320, 27)
The data has 4320 rows and 27 columns.
All rainfall entries are the string "NR"; convert them to the numeric value 0. The hourly numeric values start at the 4th column; extract those values.
Note: the `.to_numpy()` call in the code below requires pandas >= 0.24.0; older versions will raise an error.
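As a small compatibility sketch (toy DataFrame with made-up values, not the assignment data), the conversion can fall back to the older `.values` attribute when `.to_numpy()` is unavailable:

```python
import pandas as pd

# Toy frame standing in for the real data (hypothetical values)
df = pd.DataFrame({"h0": ["1.8", "NR"], "h1": ["0.51", "2"]})
df[df == "NR"] = 0  # same NR-to-0 replacement as in the main code

# DataFrame.to_numpy() exists only in pandas >= 0.24.0;
# .values works on older versions and returns the same array here
arr = df.to_numpy() if hasattr(df, "to_numpy") else df.values
print(arr.shape)  # (2, 2)
```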
# Drop the first three columns (date, station, indicator name); keep the hourly numeric values
data = data.iloc[:, 3:]
# Replace the rainfall marker 'NR' with the numeric value 0
data[data == 'NR'] = 0
# Convert the DataFrame to a NumPy array
raw_data = data.to_numpy()
raw_data
Out:
array([['14', '14', '14', ..., '15', '15', '15'],
['1.8', '1.8', '1.8', ..., '1.8', '1.8', '1.8'],
['0.51', '0.41', '0.39', ..., '0.35', '0.36', '0.32'],
...,
['36', '55', '72', ..., '118', '100', '105'],
['1.9', '2.4', '1.9', ..., '1.5', '2', '2'],
['0.7', '0.8', '1.8', ..., '1.6', '1.8', '2']], dtype=object)
The data is now 4320 rows × 24 columns.
Every 18 rows (the 18 indicators) make up one day, so 4320 / 18 = 240 days (12 months × 20 days). Group the 4320 × 24 data by month into 12 blocks of 18 rows (features) × 480 columns (hours, i.e. 20 days × 24):
month_data = {}
for month in range(12):
sample = np.empty([18, 480])
for day in range(20):
sample[:, day * 24 : (day + 1) * 24] = raw_data[18 * (20 * month + day) : 18 * (20 * month + day + 1), :]
month_data[month] = sample
This yields 12 months, each an 18 × 480 block.
Within each month, slide a 10-hour window: the first 9 hours go into x and the 10th hour's PM2.5 goes into y. The window starts at the first hour and shifts right one hour at a time, so each month yields 480 − 9 = 471 samples.
Each 18 × 9 block is flattened into a row vector and placed in a row of x; with 471 samples per month, x has 12 × 471 rows and 18 × 9 columns.
The targets go into y, which has 12 (months) × 471 (samples) rows and 1 column.
x = np.empty([12 * 471, 18 * 9], dtype = float)
y = np.empty([12 * 471, 1], dtype = float)
for month in range(12):
for day in range(20):
for hour in range(24):
if day == 19 and hour > 14:
continue
x[month * 471 + day * 24 + hour, :] = month_data[month][:,day * 24 + hour : day * 24 + hour + 9].reshape(1, -1) #vector dim:18*9 (9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9)
y[month * 471 + day * 24 + hour, 0] = month_data[month][9, day * 24 + hour + 9] #value
print(x)
print(y)
Out:
[[14. 14. 14. ... 2. 2. 0.5]
[14. 14. 13. ... 2. 0.5 0.3]
[14. 13. 12. ... 0.5 0.3 0.8]
...
[17. 18. 19. ... 1.1 1.4 1.3]
[18. 19. 18. ... 1.4 1.3 1.6]
[19. 18. 17. ... 1.3 1.6 1.8]]
[[30.]
[41.]
[44.]
...
[17.]
[24.]
[29.]]
Normalization and standardization are often translated interchangeably and are easy to confuse. Both can be grouped under "normalization": $x'=\frac{x-\mu}{\sigma}$ is z-score normalization, while $x'=\frac{x-x_{min}}{x_{max}-x_{min}}$ is min-max normalization; some online materials mix the two up.
Here we use $x'=\frac{x-\mu}{\sigma}$, where $\mu$ is the mean of $x$ and $\sigma$ is its standard deviation.
This transform gives x zero mean and unit standard deviation. Contrary to what some sources claim, it does not squeeze the data into (-1, 1): the output below contains values smaller than -1.
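A tiny sketch (made-up numbers) contrasting the two transforms; note that the z-score output can go below −1, while min-max stays inside [0, 1]:

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])

z = (x - x.mean()) / x.std()              # z-score: mean 0, std 1
mm = (x - x.min()) / (x.max() - x.min())  # min-max: squeezed into [0, 1]

print(z.round(3))  # [-1.342 -0.447  0.447  1.342] -> values below -1 occur
print(mm)          # values stay within [0, 1]
```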
Each column of x is one (indicator, hour) feature, so we standardize column by column.
mean_x = np.mean(x, axis = 0) #18 * 9
std_x = np.std(x, axis = 0) #18 * 9
for i in range(len(x)): #12 * 471
for j in range(len(x[0])): #18 * 9
if std_x[j] != 0:
x[i][j] = (x[i][j] - mean_x[j]) / std_x[j]
x
Out:
array([[-1.35825331, -1.35883937, -1.359222 , ..., 0.26650729,
0.2656797 , -1.14082131],
[-1.35825331, -1.35883937, -1.51819928, ..., 0.26650729,
-1.13963133, -1.32832904],
[-1.35825331, -1.51789368, -1.67717656, ..., -1.13923451,
-1.32700613, -0.85955971],
...,
[-0.88092053, -0.72262212, -0.56433559, ..., -0.57693779,
-0.29644471, -0.39079039],
[-0.7218096 , -0.56356781, -0.72331287, ..., -0.29578943,
-0.39013211, -0.1095288 ],
[-0.56269867, -0.72262212, -0.88229015, ..., -0.38950555,
-0.10906991, 0.07797893]])
Split the training data into a training set (train_set) and a validation set: train_set is used for fitting, while the validation set never participates in training and is only used for evaluation. (The baseline below does not actually use it.)
import math
x_train_set = x[: math.floor(len(x) * 0.8), :]
y_train_set = y[: math.floor(len(y) * 0.8), :]
x_validation = x[math.floor(len(x) * 0.8): , :]
y_validation = y[math.floor(len(y) * 0.8): , :]
print(x_train_set)
print(y_train_set)
print(x_validation)
print(y_validation)
print(len(x_train_set))
print(len(y_train_set))
print(len(x_validation))
print(len(y_validation))
Out:
[[-1.35825331 -1.35883937 -1.359222 ... 0.26650729 0.2656797
-1.14082131]
[-1.35825331 -1.35883937 -1.51819928 ... 0.26650729 -1.13963133
-1.32832904]
[-1.35825331 -1.51789368 -1.67717656 ... -1.13923451 -1.32700613
-0.85955971]
...
[ 0.86929969 0.70886668 0.38952809 ... 1.39110073 0.2656797
-0.39079039]
[ 0.71018876 0.39075806 0.07157353 ... 0.26650729 -0.39013211
-0.39079039]
[ 0.3919669 0.07264944 0.07157353 ... -0.38950555 -0.39013211
-0.85955971]]
[[30.]
[41.]
[44.]
...
[ 7.]
[ 5.]
[14.]]
[[ 0.07374504 0.07264944 0.07157353 ... -0.38950555 -0.85856912
-0.57829812]
[ 0.07374504 0.07264944 0.23055081 ... -0.85808615 -0.57750692
0.54674825]
[ 0.07374504 0.23170375 0.23055081 ... -0.57693779 0.54674191
-0.1095288 ]
...
[-0.88092053 -0.72262212 -0.56433559 ... -0.57693779 -0.29644471
-0.39079039]
[-0.7218096 -0.56356781 -0.72331287 ... -0.29578943 -0.39013211
-0.1095288 ]
[-0.56269867 -0.72262212 -0.88229015 ... -0.38950555 -0.10906991
0.07797893]]
[[13.]
[24.]
[22.]
...
[17.]
[24.]
[29.]]
4521
4521
1131
1131
Unlike the figure above, the loss code below uses the Root Mean Square Error (RMSE).
Because of the constant bias term b, the dimension (dim) needs one extra column; eps is a tiny constant that keeps Adagrad's denominator away from 0.
Each dimension has its own gradient and weight w, learned over iter_time iterations. Finally, the trained model (the weights w) is saved as a .npy file.
dim = 18 * 9 + 1
w = np.zeros([dim, 1])
x = np.concatenate((np.ones([12 * 471, 1]), x), axis = 1).astype(float)
learning_rate = 100
iter_time = 1000
adagrad = np.zeros([dim, 1])
eps = 0.0000000001
for t in range(iter_time):
loss = np.sqrt(np.sum(np.power(np.dot(x, w) - y, 2))/471/12)#rmse
if(t%100==0):
print(str(t) + ":" + str(loss))
gradient = 2 * np.dot(x.transpose(), np.dot(x, w) - y) #dim*1
adagrad += gradient ** 2
w = w - learning_rate * gradient / np.sqrt(adagrad + eps)
np.save('weight.npy', w)
w
Out:
0:27.071214829194115
100:33.78905859777454
200:19.91375129819709
300:13.531068193689686
400:10.645466158446165
500:9.27735345547506
600:8.518042045956497
700:8.014061987588416
800:7.636756824775686
900:7.33656374037112
array([[ 2.13740269e+01],
[ 3.58888909e+00],
[ 4.56386323e+00],
[ 2.16307023e+00],
[-6.58545223e+00],
[-3.38885580e+01],
[ 3.22235518e+01],
...
[-5.57512471e-01],
[ 8.76239582e-02],
[ 3.02594902e-01],
[-4.23463160e-01],
[ 4.89922051e-01]])
# Read the test data test.csv
testdata = pd.read_csv('./test.csv', header = None, encoding = 'big5')
# Drop the first two columns (id and indicator name); keep the data from the 3rd column on
test_data = testdata.iloc[:, 2:]
# Replace the rainfall marker 'NR' with 0
test_data[test_data == 'NR'] = 0
# Convert the DataFrame to a NumPy array
test_data = test_data.to_numpy()
# Reshape the test data into 240 samples of dimension 18 * 9; the bias column is concatenated below
test_x = np.empty([240, 18*9], dtype = float)
for i in range(240):
test_x[i, :] = test_data[18 * i: 18* (i + 1), :].reshape(1, -1)
for i in range(len(test_x)):
for j in range(len(test_x[0])):
if std_x[j] != 0:
test_x[i][j] = (test_x[i][j] - mean_x[j]) / std_x[j]
test_x = np.concatenate((np.ones([240, 1]), test_x), axis = 1).astype(float)
test_x
Out:
array([[ 1.        , -0.24447681, -0.24545919, ..., -0.67065391,
-1.04594393, 0.07797893],
[ 1. , -1.35825331, -1.51789368, ..., 0.17279117,
-0.10906991, -0.48454426],
[ 1. , 1.5057434 , 1.34508393, ..., -1.32666675,
-1.04594393, -0.57829812],
...,
[ 1. , 0.3919669 , 0.54981237, ..., 0.26650729,
-0.20275731, 1.20302531],
[ 1. , -1.8355861 , -1.8360023 , ..., -1.04551839,
-1.13963133, -1.14082131],
[ 1. , -1.35825331, -1.35883937, ..., 2.98427476,
3.26367657, 1.76554849]])
Load the saved weights to predict on the test data, producing the predictions ans_y.
w = np.load('weight.npy')
ans_y = np.dot(test_x, w)
ans_y
Out:
array([[ 5.17496040e+00],
[ 1.83062143e+01],
[ 2.04912181e+01],
[ 1.15239429e+01],
[ 2.66160568e+01],
...,
[ 4.12665445e+01],
[ 6.90278920e+01],
[ 4.03462492e+01],
[ 1.43137440e+01],
[ 1.57707266e+01]])
import csv
with open('submit.csv', mode='w', newline='') as submit_file:
csv_writer = csv.writer(submit_file)
header = ['id', 'value']
print(header)
csv_writer.writerow(header)
for i in range(240):
row = ['id_' + str(i), ans_y[i][0]]
csv_writer.writerow(row)
print(row)
Out:
['id', 'value']
['id_0', 5.17496039898473]
['id_1', 18.306214253527884]
['id_2', 20.491218094180553]
['id_3', 11.523942869805396]
...
['id_237', 40.346249244122404]
['id_238', 14.313743982871117]
['id_239', 15.770726634219777]
The PM2.5 predictions for the 240 test samples are saved to submit.csv in the format printed above, then submitted to the Kaggle platform.
Submitting the generated submit.csv directly to Kaggle gives:
Private Score 8.73773, Public Score 6.55912
Next, optimize the model on top of this baseline; the goal is to lower the score.
TA's hints: try changing the learning rate, the number of iterations, or the extracted features, or even change the model.
Since this is a linear regression assignment, I do not intend to change the model.
I tried switching to Adam, but it did not perform well, so I went back to Adagrad.
Adagrad update rule:
$$\begin{aligned} r_t &= r_{t-1} + g_t^2 \\ \theta_{t+1} &= \theta_t - \frac{\eta}{\sqrt{r_t}+\epsilon}\, g_t \end{aligned}$$
Adam update rule:
$$\begin{aligned} m_t &= \beta_1 m_{t-1} + (1-\beta_1) g_t \\ v_t &= \beta_2 v_{t-1} + (1-\beta_2) g_t^2 \\ \hat{m}_t &= \frac{m_t}{1-\beta_1^t} \\ \hat{v}_t &= \frac{v_t}{1-\beta_2^t} \\ \theta_{t+1} &= \theta_t - \frac{\eta}{\sqrt{\hat{v}_t}+\epsilon}\, \hat{m}_t \end{aligned}$$
In the original Adam paper and in most deep learning frameworks, the initial values are $m_0 = 0$ and $v_0 = 0$, with defaults $\eta = 0.001$, $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$. Both $\beta_1$ and $\beta_2$ are close to 1, and $\epsilon$ prevents division by zero. $g_t$ denotes the gradient, $g_t^2$ its element-wise square, and $\beta_1^t$ is $\beta_1$ raised to the power $t$.
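For reference, the Adam update above can be written directly in NumPy. This is a minimal sketch on a toy quadratic loss, not the assignment's training loop; `adam_step` is a hypothetical helper name:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based iteration count."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize 0.5 * ||w||^2, whose gradient is simply w
w = np.array([0.5, -0.5])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 101):
    w, m, v = adam_step(w, w, m, v, t)
print(np.abs(w))  # both entries have moved toward 0
```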
In the end, I settled on a learning rate of 2 and 6000 iterations (the result of much tuning).
In addition, the observed PM2.5 values are all non-negative integers, so the predictions are post-processed: values below 0 are set to 0, and the remaining floats are rounded to the nearest integer.
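The same post-processing can also be written without the Python loop (a sketch on made-up predictions); clipping first and then rounding everything is equivalent, since round(0) is still 0:

```python
import numpy as np

ans_y = np.array([[-1.2], [5.6], [14.3]])  # hypothetical raw predictions
ans_y = np.round(np.clip(ans_y, 0, None))  # negatives -> 0, rest -> nearest int
print(ans_y.ravel())  # [ 0.  6. 14.]
```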
The full code:
# Import libraries
import sys
import pandas as pd
import numpy as np
# Read the data
data = pd.read_csv('./train.csv', encoding = 'big5')
# Data preprocessing
data = data.iloc[:, 3:]
data[data == 'NR'] = 0
raw_data = data.to_numpy()
# Split the data by month
month_data = {}
for month in range(12):
sample = np.empty([18, 480])
for day in range(20):
sample[:, day * 24 : (day + 1) * 24] = raw_data[18 * (20 * month + day) : 18 * (20 * month + day + 1), :]
month_data[month] = sample
# Split into x and y
x = np.empty([12 * 471, 18 * 9], dtype = float)
y = np.empty([12 * 471, 1], dtype = float)
for month in range(12):
for day in range(20):
for hour in range(24):
if day == 19 and hour > 14:
continue
x[month * 471 + day * 24 + hour, :] = month_data[month][:,day * 24 + hour : day * 24 + hour + 9].reshape(1, -1) #vector dim:18*9 (9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9)
y[month * 471 + day * 24 + hour, 0] = month_data[month][9, day * 24 + hour + 9] #value
print(x)
print(y)
# Standardize x
mean_x = np.mean(x, axis = 0) #18 * 9
std_x = np.std(x, axis = 0) #18 * 9
for i in range(len(x)): #12 * 471
for j in range(len(x[0])): #18 * 9
if std_x[j] != 0:
x[i][j] = (x[i][j] - mean_x[j]) / std_x[j]
# Train the model and save the weights
dim = 18 * 9 + 1
w = np.zeros([dim, 1])
x2 = np.concatenate((np.ones([12 * 471, 1]), x), axis = 1).astype(float)
learning_rate = 2
iter_time = 10000
adagrad = np.zeros([dim, 1])
eps = 1e-7
for t in range(iter_time):
loss = np.sqrt(np.sum(np.power(np.dot(x2, w) - y, 2))/471/12)#rmse
if(t%100==0):
print(str(t) + ":" + str(loss))
gradient = 2 * np.dot(x2.transpose(), np.dot(x2, w) - y) #dim*1
adagrad += gradient ** 2
w = w - learning_rate * gradient / (np.sqrt(adagrad) + eps)
np.save('weight.npy', w)
# Read the test data test.csv
testdata = pd.read_csv('./test.csv', header = None, encoding = 'big5')
test_data = testdata.iloc[:, 2:]
test_data[test_data == 'NR'] = 0
test_data = test_data.to_numpy()
test_x = np.empty([240, 18*9], dtype = float)
for i in range(240):
test_x[i, :] = test_data[18 * i: 18* (i + 1), :].reshape(1, -1)
for i in range(len(test_x)):
for j in range(len(test_x[0])):
if std_x[j] != 0:
test_x[i][j] = (test_x[i][j] - mean_x[j]) / std_x[j]
test_x = np.concatenate((np.ones([240, 1]), test_x), axis = 1).astype(float)
# Predict on the test features to get ans_y
w = np.load('weight.npy')
ans_y = np.dot(test_x, w)
# Post-process: set negative predictions to 0, round the rest
for i in range(240):
if(ans_y[i][0]<0):
ans_y[i][0]=0
else:
ans_y[i][0]=np.round(ans_y[i][0])
# Save as a csv file and submit to kaggle: https://www.kaggle.com/c/ml2020spring-hw1/submissions
import csv
with open('submit.csv', mode='w', newline='') as submit_file:
csv_writer = csv.writer(submit_file)
header = ['id', 'value']
print(header)
csv_writer.writerow(header)
for i in range(240):
row = ['id_' + str(i), ans_y[i][0]]
csv_writer.writerow(row)
print(row)
Afterwards, I also tried adding the squared terms of x: the private score dropped considerably, though the public score went up.
The main changes:
# Training set
for month in range(12):
for day in range(20):
for hour in range(24):
if day == 19 and hour > 14:
continue
x1 = month_data[month][:, day * 24 + hour: day * 24 + hour + 9].reshape(1,-1) # vector dim:18*9 (9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9)
x[month * 471 + day * 24 + hour, :18 * 9] = x1
# Add the quadratic terms of x here
x[month * 471 + day * 24 + hour, 18 * 9: 18 * 9 * 2] = np.power(x1, 2)
y[month * 471 + day * 24 + hour, 0] = month_data[month][9, day * 24 + hour + 9] # value
# Test set
testdata = pd.read_csv('./test.csv', header = None, encoding = 'big5')
test_data = testdata.iloc[:, 2:]
test_data[test_data == 'NR'] = 0
test_data = test_data.to_numpy()
test_x1 = np.empty([240, 18*9], dtype = float)
test_x = np.empty([240, 18*9*2], dtype = float)
for i in range(240):
test_x1 = test_data[18 * i: 18 * (i + 1), :].reshape(1, -1).astype(float)
# Likewise add the quadratic terms for the test x here
test_x[i, : 18 * 9] = test_x1
test_x[i, 18 * 9:] = np.power(test_x1 , 2)
for i in range(len(test_x)):
for j in range(len(test_x[0])):
if std_x[j] != 0:
test_x[i][j] = (test_x[i][j] - mean_x[j]) / std_x[j]
test_x = np.concatenate((np.ones([240, 1]), test_x), axis = 1).astype(float)
The full code:
import sys
import pandas as pd
import numpy as np
# Read the data
data = pd.read_csv('./train.csv', encoding='big5')
# Data preprocessing
data = data.iloc[:, 3:]
data[data == 'NR'] = 0
raw_data = data.to_numpy()
# Split the data by month
month_data = {}
for month in range(12):
sample = np.empty([18, 480])
for day in range(20):
sample[:, day * 24: (day + 1) * 24] = raw_data[18 * (20 * month + day): 18 * (20 * month + day + 1), :]
month_data[month] = sample
# Split into x and y
x = np.empty([12 * 471, 18 * 9 * 2], dtype=float)
y = np.empty([12 * 471, 1], dtype=float)
for month in range(12):
for day in range(20):
for hour in range(24):
if day == 19 and hour > 14:
continue
x1 = month_data[month][:, day * 24 + hour: day * 24 + hour + 9].reshape(1,-1) # vector dim:18*9 (9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9)
x[month * 471 + day * 24 + hour, :18 * 9] = x1
# Add the quadratic terms of x here
x[month * 471 + day * 24 + hour, 18 * 9: 18 * 9 * 2] = np.power(x1, 2)
y[month * 471 + day * 24 + hour, 0] = month_data[month][9, day * 24 + hour + 9] # value
# Standardize x
mean_x = np.mean(x, axis=0) # 18 * 9 * 2
std_x = np.std(x, axis=0) # 18 * 9 * 2
for i in range(len(x)): # 12 * 471
for j in range(len(x[0])): # 18 * 9 * 2
if std_x[j] != 0:
x[i][j] = (x[i][j] - mean_x[j]) / std_x[j]
# Randomly shuffle X and Y (note: with full-batch gradient descent the shuffle does not change the updates)
def _shuffle(X, Y):
randomize = np.arange(len(X))
np.random.shuffle(randomize)
return (X[randomize], Y[randomize])
# Train the model and save the weights
dim = 18 * 9 * 2 + 1
w = np.ones([dim, 1])
learning_rate = 2
iter_time = 5000
adagrad = np.zeros([dim, 1])
eps = 1e-7
for t in range(iter_time):
x, y = _shuffle(x, y)
x2 = np.concatenate((np.ones([len(x), 1]), x), axis=1).astype(float)
gradient = 2 * np.dot(x2.transpose(), np.dot(x2, w) - y) # dim*1
adagrad += gradient ** 2
w = w - learning_rate * gradient / (np.sqrt(adagrad) + eps)
loss = np.sqrt(np.sum(np.power(np.dot(x2, w) - y, 2)) / len(x)) # rmse
if (t % 100 == 0):
print(str(t) + ":" + str(loss))
np.save('weight.npy', w)
# Read the test data test.csv
testdata = pd.read_csv('./test.csv', header = None, encoding = 'big5')
test_data = testdata.iloc[:, 2:]
test_data[test_data == 'NR'] = 0
test_data = test_data.to_numpy()
test_x1 = np.empty([240, 18*9], dtype = float)
test_x = np.empty([240, 18*9*2], dtype = float)
for i in range(240):
test_x1 = test_data[18 * i: 18 * (i + 1), :].reshape(1, -1).astype(float)
# Likewise add the quadratic terms for the test x here
test_x[i, : 18 * 9] = test_x1
test_x[i, 18 * 9:] = np.power(test_x1 , 2)
for i in range(len(test_x)):
for j in range(len(test_x[0])):
if std_x[j] != 0:
test_x[i][j] = (test_x[i][j] - mean_x[j]) / std_x[j]
test_x = np.concatenate((np.ones([240, 1]), test_x), axis = 1).astype(float)
# Predict on the test features to get ans_y
w = np.load('weight.npy')
ans_y = np.dot(test_x, w)
# Post-process: set negative predictions to 0, round the rest
for i in range(240):
if(ans_y[i][0]<0):
ans_y[i][0]=0
else:
ans_y[i][0]=np.round(ans_y[i][0])
# Save as a csv file and submit to kaggle: https://www.kaggle.com/c/ml2020spring-hw1/submissions
import csv
with open('submit.csv', mode='w', newline='') as submit_file:
csv_writer = csv.writer(submit_file)
header = ['id', 'value']
print(header)
csv_writer.writerow(header)
for i in range(240):
row = ['id_' + str(i), ans_y[i][0]]
csv_writer.writerow(row)