Standardization: make each column of the sample matrix have a mean of 0 and a standard deviation of 1. Take three numbers a, b, c as an example.
Mean:
m = (a+b+c)/3 \\
a' = a - m \\
b' = b - m \\
c' = c - m
The mean after preprocessing is 0:
m' = (a'+b'+c')/3 = ((a+b+c)-3m)/3 = 0
Standard deviation:
s = \sqrt{((a-m)^2 + (b-m)^2 + (c-m)^2)/3} \\
a'' = a'/s \\
b'' = b'/s \\
c'' = c'/s
The standard deviation after preprocessing is 1:
s'' = \sqrt{(a''^2 + b''^2 + c''^2)/3} = \sqrt{((a'/s)^2 + (b'/s)^2 + (c'/s)^2)/3} = \frac{\sqrt{((a-m)^2 + (b-m)^2 + (c-m)^2)/3}}{s} = 1
import numpy as np
import sklearn.preprocessing as sp
a = np.array([
[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]]).astype("float64")
b = sp.scale(a)
print(b.mean(axis=0))
print(b.std(axis=0))
"""
[0. 0. 0.]
[1. 1. 1.]
"""
Min-max scaling: map the minimum and maximum of each column of the sample matrix onto the same fixed interval (here [0, 1]). For three numbers a, b, c, where b is the minimum and c is the maximum, let:
a' = a - b \\
b' = b - b \\
c' = c - b
Then:
a'' = a'/c' \\
b'' = b'/c' \\
c'' = c'/c'
When the scaling is done, max{a'', b'', c''} = 1 and min{a'', b'', c''} = 0.
import sklearn.preprocessing as sp
import numpy as np
a = np.array([
[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]]).astype("float64")
obj = sp.MinMaxScaler(feature_range=(0,1))
b = obj.fit_transform(a)
print(b)
"""
[[0. 0. 0. ]
[0.5 0.5 0.5]
[1. 1. 1. ]]
"""
Normalization: divide each feature value of a sample (row) by that row's norm, e.g. the sum of absolute values (L1 norm) or the Euclidean norm (L2 norm), so that every row ends up with unit norm.
import numpy as np
import sklearn.preprocessing as sp
a = np.array([
[-1.0, 2.0, 3.0],
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]]).astype("float64")
b = sp.normalize(a, norm='l1')  # divide each feature by the sum of absolute values of the sample's features
c = sp.normalize(a, norm='l2')  # divide each feature by the sample's L2 (Euclidean) norm
print(b)
print(c)
"""
[[-0.16666667 0.33333333 0.5 ]
[ 0.26666667 0.33333333 0.4 ]
[ 0.29166667 0.33333333 0.375 ]]
[[-0.26726124 0.53452248 0.80178373]
[ 0.45584231 0.56980288 0.68376346]
[ 0.50257071 0.57436653 0.64616234]]
"""
Binarization: given a preset threshold, use 0 and 1 to indicate whether each feature value exceeds the threshold.
import numpy as np
import sklearn.preprocessing as sp
a = np.array([
[1.0, 2.0, 3.0],
[4.0, 5.0, 6.0],
[7.0, 8.0, 9.0]]).astype("float64")
obj = sp.Binarizer(threshold=4)
b = obj.transform(a)
print(b)
"""
[[0. 0. 0.]
[0. 1. 1.]
[1. 1. 1.]]
"""
One-hot encoding: for each feature (column), build a 0/1 code whose length equals the number of distinct values in that column, and use it to encode every value of that feature. For example, given the sample matrix:
\left[ \begin{matrix} 1.0 & 1.0 & 1.0 \\ 1.0 & 1.0 & 2.0 \\ 1.0 & 2.0 & 4.0 \end{matrix} \right]
The first column contains only 1, so one digit is used.
The second column contains 1 and 2, so two digits are used.
The third column contains 1, 2 and 4, so three digits are used.
import numpy as np
import sklearn.preprocessing as sp
a = np.array([
[1.0, 1.0, 1.0],
[1.0, 1.0, 2.0],
[1.0, 2.0, 4.0]]).astype("float64")
# define the one-hot encoder
one_hot_encoder = sp.OneHotEncoder(
    sparse=False,       # return a dense array instead of a sparse matrix
                        # (note: scikit-learn >= 1.2 renames this parameter to sparse_output)
    dtype='int32',
    categories='auto'
)
b = one_hot_encoder.fit_transform(a)
print(b)
"""
[[1 1 0 1 0 0]
[1 1 0 0 1 0]
[1 0 1 0 0 1]]
"""
# inverse transform (the encoder keeps an internal mapping between codes and the original values)
c = one_hot_encoder.inverse_transform(b)
print(c)
"""
[[1. 1. 1.]
[1. 1. 2.]
[1. 2. 4.]]
"""
Label encoding: assign each string-valued label a numeric code according to its position in the sorted sequence of distinct labels, so the data can be fed to learning models that work on numbers.
import sklearn.preprocessing as sp
label = ['cat', 'dog', 'sheep',
'cat', 'sheep', 'dog']
label_encoder = sp.LabelEncoder()
label_normalize = label_encoder.fit_transform(label)
print(label_normalize)
"""
[0 1 2 0 2 1]
"""
# the encoder keeps an internal mapping, so the numeric labels can be mapped back to the original data
reverse_data = label_encoder.inverse_transform(label_normalize)
print(reverse_data)
"""
['cat' 'dog' 'sheep' 'cat' 'sheep' 'dog']
"""
OK, those are some simple data preprocessing operations; I'm writing them down just so they are quicker to look up when needed.