sklearn.impute.SimpleImputer
SimpleImputer is a class dedicated to imputing missing values: it makes it easy to fill missing entries with the mean, the median, or another common statistic such as the most frequent value.
Before using a random forest to impute missing values, let's first fill them with sklearn's dedicated imputation class, sklearn.impute.SimpleImputer.
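As a quick reference before we start, here is a minimal sketch of the strategies SimpleImputer supports (the toy array is made up purely for illustration):
import numpy as np
from sklearn.impute import SimpleImputer
toy = np.array([[1.0 , np.nan] , [3.0 , 4.0] , [np.nan , 4.0]])
print(SimpleImputer(strategy="mean").fit_transform(toy)) #fill with the column mean
print(SimpleImputer(strategy="median").fit_transform(toy)) #fill with the column median
print(SimpleImputer(strategy="most_frequent").fit_transform(toy)) #fill with the most frequent value per column
print(SimpleImputer(strategy="constant" , fill_value=0).fit_transform(toy)) #fill with a fixed constant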
Preparation:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
boston = load_boston()
x_full = boston.data #x_full.shape = (506, 13): 506 samples, each with 13 features
y_full = boston.target
n_samples = x_full.shape[0] #506
n_features = x_full.shape[1] #13
The Boston housing dataset is complete, so we first turn it into a dataset with missing values and then impute it. We therefore need to decide what fraction of the data to remove; here we assume 50%, which means 506 × 13 × 0.5 = 3289 entries will be missing.
How do we make entries go missing at random? We create an array of 3289 row indices distributed between 0 and 506 and 3289 column indices distributed between 0 and 13 (each missing entry needs one row index and one column index). With these indices we can assign NaN to 3289 arbitrary positions in the data, and since the indices are generated randomly within the given ranges, the missing positions are random.
rng = np.random.RandomState(0)
missing_rate = 0.5
#np.floor rounds down and returns a float ending in .0, so we also convert to int to drop the .0
#strictly speaking the product is already an integer here, so np.floor and int are not required, but they make the code more robust: if the missing rate changes, the raw product may no longer be an integer
n_missing_samples = int(np.floor(n_samples * n_features * missing_rate)) #3289
#randint(low, high, n) draws n integers from the half-open interval [low, high)
missing_features = rng.randint(0 , n_features , n_missing_samples) #column indices in [0, 13)
print(missing_features)
[12 5 0 ... 11 0 2]
missing_samples = rng.randint(0 , n_samples , n_missing_samples) #row indices in [0, 506)
print(missing_samples)
[150 125 28 ... 132 456 402]
We are drawing 3289 indices, far more than our 506 samples, so we use randint, which samples with replacement. If the number of indices we needed were no greater than 506, we could instead use np.random.choice with replace=False, which draws values without repetition; that would spread the missing entries out and keep them from concentrating in a few rows. We do not use choice here because 3289 far exceeds 506.
#for reference only (not used in this article): draw n_missing_samples values from [0, n_samples) without repetition (replace=False)
#with the numbers used here this call would raise an error, because 3289 > 506 and sampling without replacement cannot exceed the population size
missing_samples = rng.choice(n_samples , n_missing_samples , replace=False)
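A tiny made-up demonstration of the difference between the two samplers (it reuses the np import from the preparation code):
rng_demo = np.random.RandomState(0)
print(rng_demo.randint(0 , 5 , 8)) #with replacement: repeats are possible and n may exceed the range
print(rng_demo.choice(5 , 4 , replace=False)) #without replacement: no repeats, but n must not exceed 5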
Create the dataset with missing values:
x_missing = x_full.copy()
y_missing = y_full.copy()
x_missing[missing_samples , missing_features] = np.nan
x_missing = pd.DataFrame(x_missing)
print(x_missing)
0 1 2 3 4 ... 8 9 10 11 12
0 NaN 18.0 NaN NaN 0.538 ... 1.0 296.0 NaN NaN 4.98
1 0.02731 0.0 NaN 0.0 0.469 ... 2.0 NaN NaN 396.90 9.14
2 0.02729 NaN 7.07 0.0 NaN ... 2.0 242.0 NaN NaN NaN
3 NaN NaN NaN 0.0 0.458 ... NaN 222.0 18.7 NaN NaN
4 NaN 0.0 2.18 0.0 NaN ... NaN NaN 18.7 NaN 5.33
.. ... ... ... ... ... ... ... ... ... ... ...
501 NaN NaN NaN 0.0 0.573 ... 1.0 NaN 21.0 NaN 9.67
502 0.04527 0.0 11.93 0.0 0.573 ... 1.0 273.0 NaN 396.90 9.08
503 NaN NaN 11.93 NaN 0.573 ... NaN NaN 21.0 NaN 5.64
504 0.10959 0.0 11.93 NaN 0.573 ... 1.0 NaN 21.0 393.45 6.48
505 0.04741 0.0 11.93 0.0 0.573 ... 1.0 NaN NaN 396.90 7.88
Note that y_missing is not processed here. Features may be missing, but the label must not be: if the label were missing, the problem would become unsupervised learning. What we call imputing missing values means imputing the missing entries of the feature matrix.
Fill with the mean:
imp_mean = SimpleImputer(missing_values=np.nan , strategy="mean") #instantiate
#fit the imputer on all of x_missing, then return the matrix with the column means filled in
x_missing_mean = imp_mean.fit_transform(x_missing)
print(pd.DataFrame(x_missing_mean))
0 1 2 ... 10 11 12
0 3.627579 18.000000 11.163464 ... 18.521192 352.741952 4.980000
1 0.027310 0.000000 11.163464 ... 18.521192 396.900000 9.140000
2 0.027290 10.722951 7.070000 ... 18.521192 352.741952 12.991767
3 3.627579 10.722951 11.163464 ... 18.700000 352.741952 12.991767
4 3.627579 0.000000 2.180000 ... 18.700000 352.741952 5.330000
.. ... ... ... ... ... ... ...
501 3.627579 10.722951 11.163464 ... 21.000000 352.741952 9.670000
502 0.045270 0.000000 11.930000 ... 18.521192 396.900000 9.080000
503 3.627579 10.722951 11.930000 ... 21.000000 352.741952 5.640000
504 0.109590 0.000000 11.930000 ... 21.000000 393.450000 6.480000
505 0.047410 0.000000 11.930000 ... 18.521192 396.900000 7.880000
[506 rows x 13 columns]
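fit_transform is fine here because we impute the whole matrix at once. As a side note, with a real train/test split the usual pattern is to fit the imputer on the training data only and reuse the learned statistics on the test data; a minimal sketch under that assumption (x_train_raw and x_test_raw are hypothetical splits, not variables from this article):
imp = SimpleImputer(missing_values=np.nan , strategy="mean")
x_train_filled = imp.fit_transform(x_train_raw) #learn the column means on the training split
x_test_filled = imp.transform(x_test_raw) #reuse the training means, no refitting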
Fill with 0:
imp_0 = SimpleImputer(missing_values=np.nan , strategy="constant" , fill_value=0)
x_missing_0 = imp_0.fit_transform(x_missing)
print(pd.DataFrame(x_missing_0))
0 1 2 3 4 ... 8 9 10 11 12
0 0.00000 18.0 0.00 0.0 0.538 ... 1.0 296.0 0.0 0.00 4.98
1 0.02731 0.0 0.00 0.0 0.469 ... 2.0 0.0 0.0 396.90 9.14
2 0.02729 0.0 7.07 0.0 0.000 ... 2.0 242.0 0.0 0.00 0.00
3 0.00000 0.0 0.00 0.0 0.458 ... 0.0 222.0 18.7 0.00 0.00
4 0.00000 0.0 2.18 0.0 0.000 ... 0.0 0.0 18.7 0.00 5.33
.. ... ... ... ... ... ... ... ... ... ... ...
501 0.00000 0.0 0.00 0.0 0.573 ... 1.0 0.0 21.0 0.00 9.67
502 0.04527 0.0 11.93 0.0 0.573 ... 1.0 273.0 0.0 396.90 9.08
503 0.00000 0.0 11.93 0.0 0.573 ... 0.0 0.0 21.0 0.00 5.64
504 0.10959 0.0 11.93 0.0 0.573 ... 1.0 0.0 21.0 393.45 6.48
505 0.04741 0.0 11.93 0.0 0.573 ... 1.0 0.0 0.0 396.90 7.88
[506 rows x 13 columns]
The complete code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
boston = load_boston()
x_full = boston.data #x_full.shape = (506, 13): 506 samples, each with 13 features
y_full = boston.target
n_samples = x_full.shape[0] #506
n_features = x_full.shape[1] #13
rng = np.random.RandomState(0)
missing_rate = 0.5
n_missing_samples = int(np.floor(n_samples * n_features * missing_rate))
missing_features = rng.randint(0 , n_features , n_missing_samples) #column indices in [0, 13)
missing_samples = rng.randint(0 , n_samples , n_missing_samples) #row indices in [0, 506)
#create the dataset with missing values
x_missing = x_full.copy()
y_missing = y_full.copy()
x_missing[missing_samples , missing_features] = np.nan
x_missing = pd.DataFrame(x_missing)
#fill with the mean
imp_mean = SimpleImputer(missing_values=np.nan , strategy="mean") #instantiate
#fit the imputer on all of x_missing, then return the matrix with the column means filled in
x_missing_mean = imp_mean.fit_transform(x_missing)
imp_0 = SimpleImputer(missing_values=np.nan , strategy="constant" , fill_value=0)
x_missing_0 = imp_0.fit_transform(x_missing)
Any regression learns from a feature matrix in order to predict a continuous label y. This works because regression assumes some relationship between the feature matrix and the label. In fact, labels and features are interchangeable: in a problem that predicts "house price" from "district, environment, number of nearby schools", we can just as well use "environment", "number of nearby schools" and "house price" to predict "district". Regression-based imputation of missing values exploits exactly this idea.
For a dataset with n features, suppose feature T has missing values. We treat T as the label, and the other n-1 features together with the original label form a new feature matrix. The part of T that is not missing has both a "label" (the known values of T) and features, so it serves as training data; the part of T that is missing has features but no label, and it is exactly what we need to predict:
x_train: the other n-1 features + the original label, for the rows where T is not missing
y_train: the values of T that are not missing
x_test: the other n-1 features + the original label, for the rows where T is missing
y_test: the values of T that are missing; they are unknown and are precisely what we want to predict (impute)
This approach is particularly well suited to the case where one feature has a large number of missing values while the other features are fairly complete.
What if other features besides T also have missing values?
The answer is to iterate over all features, starting with the one that has the fewest missing values (imputing the feature with the fewest missing values requires the least accurate information). When imputing a given feature, the missing values of the other features are temporarily replaced with 0. After each regression, the predictions are written back into the original feature matrix before moving on to the next feature. Each round leaves one fewer feature with missing values, so fewer and fewer entries need to be zero-filled in each iteration. By the time we reach the last feature (the one with the most missing values), no other feature needs zero-filling at all, and the large amount of information already imputed by regression is available to fill this hardest feature. Once all features have been traversed, the data is complete and contains no more missing values. A compact sketch of the procedure is given right below.
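A compact sketch of this whole procedure, assuming X is a pandas DataFrame with default integer column labels and NaNs marking the missing entries, and y is the complete label array (rf_impute is a hypothetical helper name; the step-by-step version actually used in this article follows later):
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
def rf_impute(X , y):
    X = X.copy()
    order = np.argsort(X.isnull().sum(axis=0)).values #column positions, fewest missing values first
    for i in order :
        target = X.iloc[: , i] #the feature to impute in this round
        rest = pd.concat([X.iloc[: , X.columns != i] , pd.DataFrame(y)] , axis=1)
        rest_0 = SimpleImputer(strategy="constant" , fill_value=0).fit_transform(rest)
        known = target.notnull().values #rows where the target feature is observed
        rfc = RandomForestRegressor()
        rfc.fit(rest_0[known , :] , target[known])
        X.loc[~known , i] = rfc.predict(rest_0[~known , :]) #write predictions back before the next round
    return X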
Two figures illustrate this idea.
Suppose we have a dataset with 7 samples and 4 features x0-x3, where feature x1 has three missing values, x2 has two and x3 has one. We then start with the 7th sample, filling its missing value in feature x3 (the feature with the fewest missing values).
Once the missing value of x3 is filled, the prediction is put back into the original feature matrix, and then the missing values of x2 are filled.
The same steps are then repeated to fill the missing values of x1 (not drawn here).
The following code is identical to that of section 4.1 and is not explained again here.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
boston = load_boston()
x_full = boston.data #x_full.shape = (506, 13): 506 samples, each with 13 features
y_full = boston.target
n_samples = x_full.shape[0] #506
n_features = x_full.shape[1] #13
rng = np.random.RandomState(0)
missing_rate = 0.5
n_missing_samples = int(np.floor(n_samples * n_features * missing_rate))
missing_features = rng.randint(0 , n_features , n_missing_samples) #column indices in [0, 13)
missing_samples = rng.randint(0 , n_samples , n_missing_samples) #row indices in [0, 506)
#create the dataset with missing values
x_missing = x_full.copy()
y_missing = y_full.copy()
x_missing[missing_samples , missing_features] = np.nan
x_missing = pd.DataFrame(x_missing)
x_missing_reg = x_missing.copy() #the random-forest imputation results will be stored in x_missing_reg
We fill starting from the feature with the fewest missing values, so we first need an ordering of the columns by how many values they are missing, from fewest to most. Finding this order really means finding the column indices, which takes a single line of code:
sortindex = np.argsort(x_missing_reg.isnull().sum(axis=0)).values
print(sortindex)
[ 6 12 8 7 9 0 2 1 5 4 3 10 11]
Step-by-step breakdown:
print(x_missing_reg.isnull()) #first locate the missing values in the dataset; the result is a DataFrame of booleans
0 1 2 3 4 ... 8 9 10 11 12
0 True False True True False ... False False True True False
1 False False True False False ... False True True False False
2 False True False False True ... False False True True True
3 True True True False False ... True False False True True
4 True False False False True ... True True False True False
.. ... ... ... ... ... ... ... ... ... ... ...
501 True True True False False ... False True False True False
502 False False False False False ... False False True False False
503 True True False True False ... True True False True False
504 False False False True False ... False True False False False
505 False False False False False ... False True True False False
print(x_missing_reg.isnull().sum(axis=0)) #then sum along axis=0 to count the missing values in each of the 13 features (0-12)
0 200
1 201
2 200
3 203
4 202
5 201
6 185
7 197
8 196
9 197
10 204
11 214
12 189
dtype: int64
#note: np.sort cannot be used here, because we need the indices rather than the missing-value counts, and np.sort discards the indices
print(np.argsort(x_missing_reg.isnull().sum(axis=0))) #np.argsort returns the indices corresponding to the counts sorted from smallest to largest
0 6
1 12
2 8
3 7
4 9
5 0
6 2
7 1
8 5
9 4
10 3
11 10
12 11
dtype: int64
print(np.argsort(x_missing_reg.isnull().sum(axis=0)).values) #finally take .values to get a plain array of column indices
[ 6 12 8 7 9 0 2 1 5 4 3 10 11]
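A tiny made-up example of the np.sort / np.argsort difference mentioned above:
counts = np.array([200 , 185 , 214])
print(np.sort(counts)) #[185 200 214] -> the sorted counts themselves, indices are lost
print(np.argsort(counts)) #[1 0 2] -> the positions that would sort the counts in ascending order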
This step takes three lines of code, which we now dissect line by line. Note that we leave out the loop for now.
df = x_missing_reg
fillc = df.iloc[: , 6]
df = pd.concat([df.iloc[: , df.columns != 6] , pd.DataFrame(y_full)] , axis=1)
(1) The regression-based procedure involves filling zeros, but these zeros must not be written into the original matrix; otherwise the next iteration would break, since there would be no missing values left to impute (every former NaN would already hold a 0). So the original matrix must never be zero-filled; only the column currently being imputed should receive the regression predictions. This is why each iteration starts with df = x_missing_reg: the zero-filling is applied to df (strictly, to a new matrix derived from df), the training and test sets are built from that zero-filled matrix, and only the final regression results are assigned back into x_missing_reg. Nothing but the regression output ever goes into x_missing_reg, and at the start of every loop iteration we set df = x_missing_reg again.
df = x_missing_reg
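A minimal illustration (with a made-up two-column frame) of why x_missing_reg itself is never zero-filled: the assignment above only makes df another name for the same object, and both pd.concat and SimpleImputer.fit_transform return new objects, so the original frame keeps its NaNs:
demo = pd.DataFrame({"a": [1.0 , np.nan] , "b": [np.nan , 2.0]})
df_demo = demo #same object, just another name
filled = SimpleImputer(strategy="constant" , fill_value=0).fit_transform(df_demo) #returns a new array
print(filled) #zero-filled copy
print(demo) #the original frame still contains its NaNs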
(2) Build the new label
According to the sorted indices above, the column with index 6 is the first one we impute; take it out by slicing:
fillc = df.iloc[: , 6]
print(fillc)
0 65.2
1 78.9
2 61.1
3 45.8
4 NaN
...
501 69.1
502 76.7
503 91.0
504 89.3
505 NaN
Name: 6, Length: 506, dtype: float64
(3) Build the new feature matrix (the features that were not selected + the original label)
df.columns shows all the column indices; unpack it into a list to have a look:
print(df.columns)
RangeIndex(start=0, stop=13, step=1)
print([*df.columns])
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
Take out all columns except the one with index 6:
print(df.columns != 6)
[ True True True True True True False True True True True True True]
print(df.iloc[: , df.columns != 6].iloc[: , 0:10]) #printing all columns would elide the middle ones, so only the first 10 are shown; column 6 is gone and every other column remains
0 1 2 3 4 5 7 8 9 10
0 NaN 18.0 NaN NaN 0.538 NaN 4.0900 1.0 296.0 NaN
1 0.02731 0.0 NaN 0.0 0.469 NaN 4.9671 2.0 NaN NaN
2 0.02729 NaN 7.07 0.0 NaN 7.185 NaN 2.0 242.0 NaN
3 NaN NaN NaN 0.0 0.458 NaN NaN NaN 222.0 18.7
4 NaN 0.0 2.18 0.0 NaN 7.147 NaN NaN NaN 18.7
.. ... ... ... ... ... ... ... ... ... ...
501 NaN NaN NaN 0.0 0.573 NaN NaN 1.0 NaN 21.0
502 0.04527 0.0 11.93 0.0 0.573 6.120 2.2875 1.0 273.0 NaN
503 NaN NaN 11.93 NaN 0.573 6.976 NaN NaN NaN 21.0
504 0.10959 0.0 11.93 NaN 0.573 NaN NaN 1.0 NaN 21.0
505 0.04741 0.0 11.93 0.0 0.573 6.030 NaN 1.0 NaN NaN
Then concatenate this matrix with y_full. The matrix above is a DataFrame, so first convert y_full to a DataFrame as well, then join the two with pd.concat():
print(pd.DataFrame(y_full))
0
0 24.0
1 21.6
2 34.7
3 33.4
4 36.2
.. ...
501 22.4
502 20.6
503 23.9
504 22.0
505 11.9
print(pd.concat([df.iloc[: , df.columns != 6] , pd.DataFrame(y_full)] , axis=1))
0 1 2 3 4 ... 9 10 11 12 0
0 NaN 18.0 NaN NaN 0.538 ... 296.0 NaN NaN 4.98 24.0
1 0.02731 0.0 NaN 0.0 0.469 ... NaN NaN 396.90 9.14 21.6
2 0.02729 NaN 7.07 0.0 NaN ... 242.0 NaN NaN NaN 34.7
3 NaN NaN NaN 0.0 0.458 ... 222.0 18.7 NaN NaN 33.4
4 NaN 0.0 2.18 0.0 NaN ... NaN 18.7 NaN 5.33 36.2
.. ... ... ... ... ... ... ... ... ... ... ...
501 NaN NaN NaN 0.0 0.573 ... NaN 21.0 NaN 9.67 22.4
502 0.04527 0.0 11.93 0.0 0.573 ... 273.0 NaN 396.90 9.08 20.6
503 NaN NaN 11.93 NaN 0.573 ... NaN 21.0 NaN 5.64 23.9
504 0.10959 0.0 11.93 NaN 0.573 ... NaN 21.0 393.45 6.48 22.0
505 0.04741 0.0 11.93 0.0 0.573 ... NaN NaN 396.90 7.88 11.9
df_0 = SimpleImputer(missing_values=np.nan , strategy="constant" , fill_value=0).fit_transform(df)
print(pd.DataFrame(df_0))
0 1 2 3 4 ... 8 9 10 11 12
0 0.00000 18.0 0.00 0.0 0.538 ... 296.0 0.0 0.00 4.98 24.0
1 0.02731 0.0 0.00 0.0 0.469 ... 0.0 0.0 396.90 9.14 21.6
2 0.02729 0.0 7.07 0.0 0.000 ... 242.0 0.0 0.00 0.00 34.7
3 0.00000 0.0 0.00 0.0 0.458 ... 222.0 18.7 0.00 0.00 33.4
4 0.00000 0.0 2.18 0.0 0.000 ... 0.0 18.7 0.00 5.33 36.2
.. ... ... ... ... ... ... ... ... ... ... ...
501 0.00000 0.0 0.00 0.0 0.573 ... 0.0 21.0 0.00 9.67 22.4
502 0.04527 0.0 11.93 0.0 0.573 ... 273.0 0.0 396.90 9.08 20.6
503 0.00000 0.0 11.93 0.0 0.573 ... 0.0 21.0 0.00 5.64 23.9
504 0.10959 0.0 11.93 0.0 0.573 ... 0.0 21.0 393.45 6.48 22.0
505 0.04741 0.0 11.93 0.0 0.573 ... 0.0 0.0 396.90 7.88 11.9
y_train = fillc[fillc.notnull()]
y_test = fillc[fillc.isnull()]
x_train = df_0[y_train.index , :]
x_test = df_0[y_test.index , : ]
First y_train. y_train consists of the non-missing values of the feature selected for imputation:
y_train = fillc[fillc.notnull()]
print(y_train)
0 65.2
1 78.9
2 61.1
3 45.8
5 58.7
...
500 79.7
501 69.1
502 76.7
503 91.0
504 89.3
Name: 6, Length: 321, dtype: float64
Next y_test. y_test consists of the missing values of the selected feature; we do not need the values themselves (they are all NaN), only the index that y_test carries:
y_test = fillc[fillc.isnull()]
print(y_test)
4 NaN
8 NaN
9 NaN
10 NaN
14 NaN
..
482 NaN
488 NaN
493 NaN
494 NaN
505 NaN
Name: 6, Length: 185, dtype: float64
Then, using the indices carried by y_train and y_test, we can obtain x_train and x_test. (This works because df_0 is a NumPy array and the DataFrame uses the default RangeIndex, so the pandas index labels coincide with positional row numbers.)
#rows of the new feature matrix where the selected feature is not missing
x_train = df_0[y_train.index , :] #shape=(321,13)
#rows of the new feature matrix where the selected feature is missing
x_test = df_0[y_test.index , : ] #shape=(185,13)
rfc = RandomForestRegressor()
rfc.fit(x_train , y_train)
y_predict = rfc.predict(x_test) #feed x_test to the predict interface; the predictions are the values we will use to fill the missing entries
How do we locate those missing positions and write the predicted values into them?
First, take out the column to be filled, as before:
print(x_missing_reg.iloc[:,6])
0 65.2
1 78.9
2 61.1
3 45.8
4 NaN
...
501 69.1
502 76.7
503 91.0
504 89.3
505 NaN
Name: 6, Length: 506, dtype: float64
Then find the rows where it is missing:
print(x_missing_reg.iloc[:,6].isnull())
0 False
1 False
2 False
3 False
4 True
...
501 False
502 False
503 False
504 False
505 True
Name: 6, Length: 506, dtype: bool
Then use this boolean mask with .loc to select the rows of column 6 that are missing. Note that .loc is required here. A brief recap of the loc and iloc functions:
loc: selects rows by the actual values of the index (e.g. the row whose index is "A"); here we select the rows whose mask value is True, which is why loc is used.
data.loc[:,['A']] #all rows of column 'A'; for several columns: data.loc[:,['A','B']]
iloc: selects rows by position (e.g. the second row).
data.iloc[:,[0]] #all rows of the column at position 0; for several columns: data.iloc[:,[0,1]]
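A tiny made-up DataFrame to make the loc / iloc distinction and the boolean-mask selection concrete:
toy_df = pd.DataFrame({"v": [10.0 , np.nan , 30.0]} , index=["a" , "b" , "c"])
print(toy_df.loc["b" , "v"]) #loc: select by index label -> nan
print(toy_df.iloc[1 , 0]) #iloc: select by position -> the same nan
print(toy_df.loc[toy_df["v"].isnull() , "v"]) #loc with a boolean mask, the pattern used right below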
print(x_missing_reg.loc[x_missing_reg.iloc[:,6].isnull() , 6])
4 NaN
8 NaN
9 NaN
10 NaN
14 NaN
..
482 NaN
488 NaN
493 NaN
494 NaN
505 NaN
Name: 6, Length: 185, dtype: float64
Then write the imputed values into those positions:
x_missing_reg.loc[x_missing_reg.iloc[:,6].isnull() , 6] = y_predict
print(x_missing_reg.iloc[: , :10])
0 1 2 3 4 5 6 7 8 9
0 NaN 18.0 NaN NaN 0.538 NaN 65.200 4.0900 1.0 296.0
1 0.02731 0.0 NaN 0.0 0.469 NaN 78.900 4.9671 2.0 NaN
2 0.02729 NaN 7.07 0.0 NaN 7.185 61.100 NaN 2.0 242.0
3 NaN NaN NaN 0.0 0.458 NaN 45.800 NaN NaN 222.0
4 NaN 0.0 2.18 0.0 NaN 7.147 58.470 NaN NaN NaN
.. ... ... ... ... ... ... ... ... ... ...
501 NaN NaN NaN 0.0 0.573 NaN 69.100 NaN 1.0 NaN
502 0.04527 0.0 11.93 0.0 0.573 6.120 76.700 2.2875 1.0 273.0
503 NaN NaN 11.93 NaN 0.573 6.976 91.000 NaN NaN NaN
504 0.10959 0.0 11.93 NaN 0.573 NaN 89.300 NaN 1.0 NaN
505 0.04741 0.0 11.93 0.0 0.573 6.030 86.075 NaN 1.0 NaN
[506 rows x 10 columns]
print(x_missing_reg.isnull().sum(axis=0))
0 200
1 201
2 200
3 203
4 202
5 201
6 0
7 197
8 196
9 197
10 204
11 214
12 189
dtype: int64
As you can see, the column with index 6 is now fully filled and has no missing values, while the other columns have not been imputed yet. The next step is therefore to fill the remaining columns with a loop: simply replace every index 6 in the code above with the loop variable i.
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
boston = load_boston()
x_full = boston.data #x_full.shape = (506, 13): 506 samples, each with 13 features
y_full = boston.target
n_samples = x_full.shape[0]
n_features = x_full.shape[1]
rng = np.random.RandomState(0)
missing_rate = 0.5
n_missing_samples = int(np.floor(n_samples * n_features * missing_rate))
missing_features = rng.randint(0 , n_features , n_missing_samples) #column indices in [0, 13)
missing_samples = rng.randint(0 , n_samples , n_missing_samples) #row indices in [0, 506)
#create the dataset with missing values
x_missing = x_full.copy()
y_missing = y_full.copy()
x_missing[missing_samples , missing_features] = np.nan
x_missing = pd.DataFrame(x_missing)
#order the columns of the dataset by their number of missing values, from fewest to most
x_missing_reg = x_missing.copy()
sortindex = np.argsort(x_missing_reg.isnull().sum(axis=0)).values
for i in sortindex :
    #build the new feature matrix and the new label
    df = x_missing_reg
    fillc = df.iloc[: , i]
    df = pd.concat([df.iloc[: , df.columns != i] , pd.DataFrame(y_full)] , axis=1)
    #in the new feature matrix, fill the missing values of the remaining columns with 0
    df_0 = SimpleImputer(missing_values=np.nan , strategy="constant" , fill_value=0).fit_transform(df)
    #build the new training and test sets
    y_train = fillc[fillc.notnull()]
    y_test = fillc[fillc.isnull()]
    x_train = df_0[y_train.index , :]
    x_test = df_0[y_test.index , :]
    #impute the missing values with a random forest
    rfc = RandomForestRegressor()
    rfc.fit(x_train , y_train)
    y_predict = rfc.predict(x_test) #feed x_test to the predict interface; the predictions fill the missing values
    x_missing_reg.loc[x_missing_reg.iloc[: , i].isnull() , i] = y_predict
Before imputation:
print(x_missing_reg.isnull().sum(axis=0))
0 200
1 201
2 200
3 203
4 202
5 201
6 185
7 197
8 196
9 197
10 204
11 214
12 189
dtype: int64
After imputation:
print(x_missing_reg.isnull().sum(axis=0))
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 0
10 0
11 0
12 0
dtype: int64
Finally, let us compare the effect of random-forest imputation with filling by 0 and by the mean. We use cross-validation, with MSE as the scoring metric.
from sklearn.model_selection import cross_val_score #needed for the cross-validation below
X = [x_full , x_missing_mean , x_missing_0 , x_missing_reg]
mse = []
for x in X :
    estimator = RandomForestRegressor(random_state=0)
    scores = cross_val_score(estimator , x , y_full ,
                             scoring="neg_mean_squared_error" , cv=5).mean()
    mse.append(scores * -1)
print([*zip(["x_full" , "x_missing_mean" , "x_missing_0" , "x_missing_reg"] , mse)])
[('x_full', 21.571667100368845),
('x_missing_mean', 40.848037216676374),
('x_missing_0', 49.626793201980185),
('x_missing_reg', 19.980358874238)]
The smaller the MSE, the better. Remarkably, the dataset imputed with the random forest scores even better than the original complete dataset. This does carry a risk of overfitting, but there is no denying that random-forest imputation works quite well. The complete code:
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
boston = load_boston()
x_full = boston.data #x_full.shape = (506, 13): 506 samples, each with 13 features
y_full = boston.target
n_samples = x_full.shape[0]
n_features = x_full.shape[1]
rng = np.random.RandomState(0)
missing_rate = 0.5
n_missing_samples = int(np.floor(n_samples * n_features * missing_rate))
missing_features = rng.randint(0 , n_features , n_missing_samples) #column indices in [0, 13)
missing_samples = rng.randint(0 , n_samples , n_missing_samples) #row indices in [0, 506)
#create the dataset with missing values
x_missing = x_full.copy()
y_missing = y_full.copy()
x_missing[missing_samples , missing_features] = np.nan
x_missing = pd.DataFrame(x_missing)
#fill with the mean
imp_mean = SimpleImputer(missing_values=np.nan , strategy="mean")
x_missing_mean = imp_mean.fit_transform(x_missing)
#fill with 0
imp_0 = SimpleImputer(missing_values=np.nan , strategy="constant" , fill_value=0)
x_missing_0 = imp_0.fit_transform(x_missing)
#order the columns of the dataset by their number of missing values, from fewest to most
x_missing_reg = x_missing.copy()
sortindex = np.argsort(x_missing_reg.isnull().sum(axis=0)).values
for i in sortindex :
    #build the new feature matrix and the new label
    df = x_missing_reg
    fillc = df.iloc[: , i]
    df = pd.concat([df.iloc[: , df.columns != i] , pd.DataFrame(y_full)] , axis=1)
    #in the new feature matrix, fill the missing values of the remaining columns with 0
    df_0 = SimpleImputer(missing_values=np.nan , strategy="constant" , fill_value=0).fit_transform(df)
    y_train = fillc[fillc.notnull()]
    y_test = fillc[fillc.isnull()]
    x_train = df_0[y_train.index , :]
    x_test = df_0[y_test.index , :]
    #impute the missing values with a random forest
    rfc = RandomForestRegressor()
    rfc.fit(x_train , y_train)
    y_predict = rfc.predict(x_test) #feed x_test to the predict interface; the predictions fill the missing values
    x_missing_reg.loc[x_missing_reg.iloc[: , i].isnull() , i] = y_predict
X = [x_full , x_missing_mean , x_missing_0 , x_missing_reg]
mse = []
for x in X :
    estimator = RandomForestRegressor(random_state=0)
    scores = cross_val_score(estimator , x , y_full ,
                             scoring="neg_mean_squared_error" , cv=5).mean()
    mse.append(scores * -1)
print([*zip(["x_full" , "x_missing_mean" , "x_missing_0" , "x_missing_reg"] , mse)])
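The comparison can also be shown visually. A minimal optional sketch of a bar chart built from the mse list computed above (this is the only place matplotlib is needed):
import matplotlib.pyplot as plt
labels = ["x_full" , "x_missing_mean" , "x_missing_0" , "x_missing_reg"]
plt.bar(labels , mse)
plt.ylabel("MSE (5-fold cross-validation)")
plt.title("Imputation methods compared")
plt.show()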
This article is mainly based on the course by 菜菜TsaiTsai.