sklearn Data Preprocessing

sklearn

preprocessing

  • Preprocessing step that generates polynomial features from each column of the data; these can be used for a subsequent polynomial fit, i.e., first generate the polynomial terms, then fit them with least squares
  • For a matrix with rows of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2]
  • With interaction_only=True, only the interaction terms are kept among the higher-order features: [1, a, b, ab]
  • With include_bias=False, the constant column is excluded. (If it is included, the generated polynomial features already carry an intercept term, which generally simplifies the subsequent modeling)
  • Reference: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html
import numpy as np
from sklearn import preprocessing

x = np.arange(6).reshape(3,2)
print(x)

poly = preprocessing.PolynomialFeatures( 2 )
y0 = poly.fit_transform( x )
poly = preprocessing.PolynomialFeatures( 2, interaction_only=True )
y1 = poly.fit_transform(x)
poly = preprocessing.PolynomialFeatures( 2, include_bias=False )
y2 = poly.fit_transform(x)
print(y0)
print(y1)
print(y2)
[[0 1]
 [2 3]
 [4 5]]
[[  1.   0.   1.   0.   0.   1.]
 [  1.   2.   3.   4.   6.   9.]
 [  1.   4.   5.  16.  20.  25.]]
[[  1.   0.   1.   0.]
 [  1.   2.   3.   6.]
 [  1.   4.   5.  20.]]
[[  0.   1.   0.   0.   1.]
 [  2.   3.   4.   6.   9.]
 [  4.   5.  16.  20.  25.]]

model_selection

  • To split data into a training set and a test set, the train_test_split function from sklearn.model_selection can be used directly
  • When the training sample is not large, some attribute values may be represented by only a few samples, and purely random sampling can then bias the split between training and test sets. Stratified sampling is needed instead: samples are drawn at random within each stratum defined by a chosen attribute (the label or some other attribute value), so that each class keeps the same proportion in the training set as in the original data
from sklearn.model_selection import train_test_split
data = np.arange(50).reshape(10,5)
train_set, test_set = train_test_split(data, test_size=0.2)

from sklearn.model_selection import StratifiedShuffleSplit
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])
y = np.array([0, 0, 1, 1])
sss = StratifiedShuffleSplit(n_splits=3, test_size=0.5)
sss.get_n_splits( X, y )
for train_index, test_index in sss.split(X, y):
    print( "train index : ", train_index, ", test index : ", test_index )
train index :  [1 3] , test index :  [0 2]
train index :  [2 0] , test index :  [3 1]
train index :  [2 1] , test index :  [3 0]

Imputer: Handling Missing Values

  • Since learning algorithms cannot work with missing values, the gaps must be filled in. We could drop every sample with a missing attribute, but that would shrink the dataset considerably; Imputer instead fills missing entries with the median, mean, or most frequent value computed from the available data
  • Because this approach only works on numerical data, all non-numerical columns must be removed first
import pandas as pd
import os
csv_path = os.path.join("./datasets/housing", "housing.csv")
housing = pd.read_csv( csv_path )
housing.info()

from sklearn.preprocessing import Imputer
imputer = Imputer( strategy="median" )
housing_num = housing.drop( "ocean_proximity", axis=1 )
# housing_num.info()
imputer.fit( housing_num )
# the returned X is an np.ndarray
X = imputer.transform( housing_num )
# convert the data back into a pandas DataFrame
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.info()

RangeIndex: 20640 entries, 0 to 20639
Data columns (total 10 columns):
longitude             20640 non-null float64
latitude              20640 non-null float64
housing_median_age    20640 non-null float64
total_rooms           20640 non-null float64
total_bedrooms        20433 non-null float64
population            20640 non-null float64
households            20640 non-null float64
median_income         20640 non-null float64
median_house_value    20640 non-null float64
ocean_proximity       20640 non-null object
dtypes: float64(9), object(1)
memory usage: 1.6+ MB

Handling Categorical Attributes

  • Preprocessing often encounters non-numerical attribute data, which must be converted to numbers; LabelEncoder maps each category to its own unique integer
  • If we stopped at that integer conversion, the encoding would imply different distances between different categories, which clearly does not match reality. One-hot encoding is used instead: an N x 1 column of categories becomes an N x m matrix, where m is the number of categories. OneHotEncoder takes the matrix produced by LabelEncoder above as input and outputs a scipy sparse matrix; the toarray method converts it to an np.ndarray
  • To convert categorical (non-numerical) attributes directly to one-hot data, use LabelBinarizer, which returns a dense array by default
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelBinarizer
encoder = LabelEncoder()
housing_cat = housing["ocean_proximity"]
housing_cat_encoded = encoder.fit_transform( housing_cat )
housing_cat_encoded
print( encoder.classes_ )

encoder = OneHotEncoder()
housing_cat = housing["ocean_proximity"]
housing_cat_1hot = encoder.fit_transform( housing_cat_encoded.reshape(-1,1) )
print(type(housing_cat_1hot))
print(housing_cat_1hot.toarray())

encoder = LabelBinarizer()
house_cat_1hot_lb = encoder.fit_transform( housing_cat )
print(house_cat_1hot_lb)
['<1H OCEAN' 'INLAND' 'ISLAND' 'NEAR BAY' 'NEAR OCEAN']

[[ 0.  0.  0.  1.  0.]
 [ 0.  0.  0.  1.  0.]
 [ 0.  0.  0.  1.  0.]
 ..., 
 [ 0.  1.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.]]
[[0 0 0 1 0]
 [0 0 0 1 0]
 [0 0 0 1 0]
 ..., 
 [0 1 0 0 0]
 [0 1 0 0 0]
 [0 1 0 0 0]]

Data Standardization

  • The min-max method normalizes every attribute to the range [0, 1]
  • The z-score method transforms every attribute to zero mean and unit variance, i.e., roughly N(0, 1)
  • min-max output is convenient for neural networks (many layers expect inputs in the [0, 1] range), but it is strongly affected by outliers
  • z-score output is far less sensitive to outliers; see the sketch below
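
A quick sketch contrasting the two scalers, MinMaxScaler and StandardScaler from sklearn.preprocessing (the toy data below is made up for illustration):

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

data = np.array([[1.0], [2.0], [3.0], [100.0]])  # 100.0 plays the outlier

# min-max: the outlier defines the range, so the typical values all land near 0
print(MinMaxScaler().fit_transform(data).ravel())

# z-score: the output is not bounded; the outlier inflates the std, but the
# typical values stay interpretable as "about 0.6 std below the mean"
print(StandardScaler().fit_transform(data).ravel())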

Pipeline

  • In machine learning, data is processed through a fixed sequence of steps, and preprocessing is no exception, so a pipeline can be built to process the data automatically
  • Pipelines can be connected to one another to form new pipelines; every pipeline has a fit_transform method that produces the required data

  • First, create a transformer that also adds some extra attributes

from sklearn.base import BaseEstimator, TransformerMixin

# column index
rooms_ix, bedrooms_ix, population_ix, household_ix = 3, 4, 5, 6

class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
    def __init__(self, add_bedrooms_per_room = True): # no *args or **kargs
        self.add_bedrooms_per_room = add_bedrooms_per_room
    def fit(self, X, y=None):
        return self  # nothing else to do
    def transform(self, X, y=None):
        rooms_per_household = X[:, rooms_ix] / X[:, household_ix]
        population_per_household = X[:, population_ix] / X[:, household_ix]
        if self.add_bedrooms_per_room:
            bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
            return np.c_[X, rooms_per_household, population_per_household,
                         bedrooms_per_room]
        else:
            return np.c_[X, rooms_per_household, population_per_household]

from sklearn.base import BaseEstimator, TransformerMixin

# Create a class to select numerical or categorical columns 
# since Scikit-Learn doesn't handle DataFrames yet
class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        return X[self.attribute_names].values

attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(housing.values)
# Definition of the CategoricalEncoder class, copied from PR #9151.
# Just run this cell, or copy it to your code, do not try to understand it (yet).

from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.utils import check_array
from sklearn.preprocessing import LabelEncoder
from scipy import sparse

class CategoricalEncoder(BaseEstimator, TransformerMixin):
    """Encode categorical features as a numeric array.
    The input to this transformer should be a matrix of integers or strings,
    denoting the values taken on by categorical (discrete) features.
    The features can be encoded using a one-hot aka one-of-K scheme
    (``encoding='onehot'``, the default) or converted to ordinal integers
    (``encoding='ordinal'``).
    This encoding is needed for feeding categorical data to many scikit-learn
    estimators, notably linear models and SVMs with the standard kernels.
    Read more in the User Guide.
    Parameters
    ----------
    encoding : str, 'onehot', 'onehot-dense' or 'ordinal'
        The type of encoding to use (default is 'onehot'):
        - 'onehot': encode the features using a one-hot aka one-of-K scheme
          (or also called 'dummy' encoding). This creates a binary column for
          each category and returns a sparse matrix.
        - 'onehot-dense': the same as 'onehot' but returns a dense array
          instead of a sparse matrix.
        - 'ordinal': encode the features as ordinal integers. This results in
          a single column of integers (0 to n_categories - 1) per feature.
    categories : 'auto' or a list of lists/arrays of values.
        Categories (unique values) per feature:
        - 'auto' : Determine categories automatically from the training data.
        - list : ``categories[i]`` holds the categories expected in the ith
          column. The passed categories are sorted before encoding the data
          (used categories can be found in the ``categories_`` attribute).
    dtype : number type, default np.float64
        Desired dtype of output.
    handle_unknown : 'error' (default) or 'ignore'
        Whether to raise an error or ignore if an unknown categorical feature is
        present during transform (default is to raise). When this parameter
        is set to 'ignore' and an unknown category is encountered during
        transform, the resulting one-hot encoded columns for this feature
        will be all zeros.
        Ignoring unknown categories is not supported for
        ``encoding='ordinal'``.
    Attributes
    ----------
    categories_ : list of arrays
        The categories of each feature determined during fitting. When
        categories were specified manually, this holds the sorted categories
        (in order corresponding with output of `transform`).
    Examples
    --------
    Given a dataset with three features and two samples, we let the encoder
    find the maximum value per feature and transform the data to a binary
    one-hot encoding.
    >>> from sklearn.preprocessing import CategoricalEncoder
    >>> enc = CategoricalEncoder(handle_unknown='ignore')
    >>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
    ... # doctest: +ELLIPSIS
    CategoricalEncoder(categories='auto', dtype=<... 'numpy.float64'>,
              encoding='onehot', handle_unknown='ignore')
    >>> enc.transform([[0, 1, 1], [1, 0, 4]]).toarray()
    array([[ 1.,  0.,  0.,  1.,  0.,  0.,  1.,  0.,  0.],
           [ 0.,  1.,  1.,  0.,  0.,  0.,  0.,  0.,  0.]])
    See also
    --------
    sklearn.preprocessing.OneHotEncoder : performs a one-hot encoding of
      integer ordinal features. The ``OneHotEncoder assumes`` that input
      features take on values in the range ``[0, max(feature)]`` instead of
      using the unique values.
    sklearn.feature_extraction.DictVectorizer : performs a one-hot encoding of
      dictionary items (also handles string-valued features).
    sklearn.feature_extraction.FeatureHasher : performs an approximate one-hot
      encoding of dictionary items or strings.
    """

    def __init__(self, encoding='onehot', categories='auto', dtype=np.float64,
                 handle_unknown='error'):
        self.encoding = encoding
        self.categories = categories
        self.dtype = dtype
        self.handle_unknown = handle_unknown

    def fit(self, X, y=None):
        """Fit the CategoricalEncoder to X.
        Parameters
        ----------
        X : array-like, shape [n_samples, n_feature]
            The data to determine the categories of each feature.
        Returns
        -------
        self
        """

        if self.encoding not in ['onehot', 'onehot-dense', 'ordinal']:
            template = ("encoding should be either 'onehot', 'onehot-dense' "
                        "or 'ordinal', got %s")
            raise ValueError(template % self.encoding)

        if self.handle_unknown not in ['error', 'ignore']:
            template = ("handle_unknown should be either 'error' or "
                        "'ignore', got %s")
            raise ValueError(template % self.handle_unknown)

        if self.encoding == 'ordinal' and self.handle_unknown == 'ignore':
            raise ValueError("handle_unknown='ignore' is not supported for"
                             " encoding='ordinal'")

        X = check_array(X, dtype=np.object, accept_sparse='csc', copy=True)
        n_samples, n_features = X.shape

        self._label_encoders_ = [LabelEncoder() for _ in range(n_features)]

        for i in range(n_features):
            le = self._label_encoders_[i]
            Xi = X[:, i]
            if self.categories == 'auto':
                le.fit(Xi)
            else:
                valid_mask = np.in1d(Xi, self.categories[i])
                if not np.all(valid_mask):
                    if self.handle_unknown == 'error':
                        diff = np.unique(Xi[~valid_mask])
                        msg = ("Found unknown categories {0} in column {1}"
                               " during fit".format(diff, i))
                        raise ValueError(msg)
                le.classes_ = np.array(np.sort(self.categories[i]))

        self.categories_ = [le.classes_ for le in self._label_encoders_]

        return self

    def transform(self, X):
        """Transform X using one-hot encoding.
        Parameters
        ----------
        X : array-like, shape [n_samples, n_features]
            The data to encode.
        Returns
        -------
        X_out : sparse matrix or a 2-d array
            Transformed input.
        """
        X = check_array(X, accept_sparse='csc', dtype=np.object, copy=True)
        n_samples, n_features = X.shape
        X_int = np.zeros_like(X, dtype=np.int)
        X_mask = np.ones_like(X, dtype=np.bool)

        for i in range(n_features):
            valid_mask = np.in1d(X[:, i], self.categories_[i])

            if not np.all(valid_mask):
                if self.handle_unknown == 'error':
                    diff = np.unique(X[~valid_mask, i])
                    msg = ("Found unknown categories {0} in column {1}"
                           " during transform".format(diff, i))
                    raise ValueError(msg)
                else:
                    # Set the problematic rows to an acceptable value and
                    # continue. The rows are marked in `X_mask` and will be
                    # removed later.
                    X_mask[:, i] = valid_mask
                    X[:, i][~valid_mask] = self.categories_[i][0]
            X_int[:, i] = self._label_encoders_[i].transform(X[:, i])

        if self.encoding == 'ordinal':
            return X_int.astype(self.dtype, copy=False)

        mask = X_mask.ravel()
        n_values = [cats.shape[0] for cats in self.categories_]
        n_values = np.array([0] + n_values)
        indices = np.cumsum(n_values)

        column_indices = (X_int + indices[:-1]).ravel()[mask]
        row_indices = np.repeat(np.arange(n_samples, dtype=np.int32),
                                n_features)[mask]
        data = np.ones(n_samples * n_features)[mask]

        out = sparse.csc_matrix((data, (row_indices, column_indices)),
                                shape=(n_samples, indices[-1]),
                                dtype=self.dtype).tocsr()
        if self.encoding == 'onehot-dense':
            return out.toarray()
        else:
            return out
from sklearn.pipeline import FeatureUnion
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
num_attribs = list( housing_num )
cat_attribs = ["ocean_proximity"]

num_pipeline = Pipeline([
        ('selector', DataFrameSelector(num_attribs)),
        ('imputer', Imputer(strategy="median")),
        ('attribs_adder', CombinedAttributesAdder()),
        ('std_scaler', StandardScaler()),
    ])

cat_pipeline = Pipeline([
        ('selector', DataFrameSelector(cat_attribs)),
        ('cat_encoder', CategoricalEncoder(encoding="onehot-dense")),
    ])

full_pipeline = FeatureUnion(transformer_list=[
        ("num_pipeline", num_pipeline),
        ("cat_pipeline", cat_pipeline),
    ])

housing_prepared = full_pipeline.fit_transform(housing)
print( housing_prepared )
[[-1.32783522  1.05254828  0.98214266 ...,  0.          1.          0.        ]
 [-1.32284391  1.04318455 -0.60701891 ...,  0.          1.          0.        ]
 [-1.33282653  1.03850269  1.85618152 ...,  0.          1.          0.        ]
 ..., 
 [-0.8237132   1.77823747 -0.92485123 ...,  0.          0.          0.        ]
 [-0.87362627  1.77823747 -0.84539315 ...,  0.          0.          0.        ]
 [-0.83369581  1.75014627 -1.00430931 ...,  0.          0.          0.        ]]

Training and Evaluating Models

  • sklearn provides many models; LinearRegression, for example, fits linear models
  • Solving a regression problem generally follows these steps (a sketch follows the imports below):
    • create an instance of a regressor class
    • fit: feed in the training data and build the model
    • predict: feed in the test data and produce the model's predictions
    • functions such as mean_squared_error can then be used to evaluate the model
  • K-fold cross-validation splits all the data into K parts; each round trains on K-1 parts and tests on the remaining one. After K rounds every sample has been used for training, and averaging the K error estimates gives a more reliable estimate of the model error than a single train/test split
    • Because cross-validation expects a score to maximize, the cost function must be negated before being used as the scoring function
  • Ensemble methods such as random forests train n models and take a weighted average of the n regression results; this reduces model variance and can sometimes reduce model error as well
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_predict
from sklearn.ensemble import RandomForestRegressor
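
A minimal sketch of the fit / predict / evaluate / cross-validate workflow described above, assuming housing_prepared from the pipeline section. housing_labels is a hypothetical label column (this step is not shown here; note that housing_prepared as built above still contains the scaled median_house_value column, so in practice the labels would be dropped before running the pipeline):

from sklearn.model_selection import cross_val_score

housing_labels = housing["median_house_value"].copy()  # hypothetical labels

lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)      # build the model
predictions = lin_reg.predict(housing_prepared)    # predict
lin_rmse = np.sqrt(mean_squared_error(housing_labels, predictions))

# cross_val_score maximizes its score, so the MSE cost enters negated
scores = cross_val_score(lin_reg, housing_prepared, housing_labels,
                         scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
print(lin_rmse, rmse_scores.mean(), rmse_scores.std())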

shuffle and permutation

  • shuffle shuffles an array in place, while permutation returns a shuffled copy of the original array
  • If permutation is passed an integer, it returns a shuffled arange of that length
  • Reference: https://www.jianshu.com/p/f0eb10acaa2d
import numpy as np
a = 5
b = np.random.permutation(a)
a = np.arange(a)
c = np.random.permutation(a)
np.random.shuffle(a)
print(a)
print(b)
print(c)
[1 3 2 4 0]
[4 0 3 2 1]
[4 3 0 2 1]

Preprocessing Data with pandas

Importing Data with read_csv

import pandas as pd
import os
csv_path = os.path.join("./datasets/housing", "housing.csv")
housing = pd.read_csv( csv_path )

print( housing.head() )
print( housing.info() )
print( housing["ocean_proximity"].value_counts() / len(housing) )
   longitude  latitude  housing_median_age  total_rooms  total_bedrooms  \
0    -122.23     37.88                41.0        880.0           129.0   
1    -122.22     37.86                21.0       7099.0          1106.0   
2    -122.24     37.85                52.0       1467.0           190.0   
3    -122.25     37.85                52.0       1274.0           235.0   
4    -122.25     37.85                52.0       1627.0           280.0   

   population  households  median_income  median_house_value ocean_proximity  
0       322.0       126.0         8.3252            452600.0        NEAR BAY  
1      2401.0      1138.0         8.3014            358500.0        NEAR BAY  
2       496.0       177.0         7.2574            352100.0        NEAR BAY  
3       558.0       219.0         5.6431            341300.0        NEAR BAY  
4       565.0       259.0         3.8462            342200.0        NEAR BAY  

RangeIndex: 20640 entries, 0 to 20639
Data columns (total 10 columns):
longitude             20640 non-null float64
latitude              20640 non-null float64
housing_median_age    20640 non-null float64
total_rooms           20640 non-null float64
total_bedrooms        20433 non-null float64
population            20640 non-null float64
households            20640 non-null float64
median_income         20640 non-null float64
median_house_value    20640 non-null float64
ocean_proximity       20640 non-null object
dtypes: float64(9), object(1)
memory usage: 1.6+ MB
None
<1H OCEAN     0.442636
INLAND        0.317393
NEAR OCEAN    0.128779
NEAR BAY      0.110950
ISLAND        0.000242
Name: ocean_proximity, dtype: float64

Visualization

  • %matplotlib inline initializes the matplotlib plotting environment so that figures are rendered inline in the notebook
%matplotlib inline
import matplotlib.pyplot as plt

housing.plot( kind="scatter", x="longitude", y="latitude", alpha=0.1 )
housing.plot( kind="scatter", x="longitude", y="latitude", alpha=0.4, 
             s=housing["population"]/100, label="population",c="median_house_value", cmap=plt.get_cmap("jet"), colorbar=True )
plt.legend()

Correlation Coefficients and Their Visualization

from pandas.tools.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
    "housing_median_age"]
scatter_matrix( housing[attributes], figsize=(12,8) )
# correlation coefficients between all attributes
corr_matrix = housing.corr()
print( corr_matrix )
type(corr_matrix)
                    longitude  latitude  housing_median_age  total_rooms  \
longitude            1.000000 -0.924664           -0.108197     0.044568   
latitude            -0.924664  1.000000            0.011173    -0.036100   
housing_median_age  -0.108197  0.011173            1.000000    -0.361262   
total_rooms          0.044568 -0.036100           -0.361262     1.000000   
total_bedrooms       0.069608 -0.066983           -0.320451     0.930380   
population           0.099773 -0.108785           -0.296244     0.857126   
households           0.055310 -0.071035           -0.302916     0.918484   
median_income       -0.015176 -0.079809           -0.119034     0.198050   
median_house_value  -0.045967 -0.144160            0.105623     0.134153   

                    total_bedrooms  population  households  median_income  \
longitude                 0.069608    0.099773    0.055310      -0.015176   
latitude                 -0.066983   -0.108785   -0.071035      -0.079809   
housing_median_age       -0.320451   -0.296244   -0.302916      -0.119034   
total_rooms               0.930380    0.857126    0.918484       0.198050   
total_bedrooms            1.000000    0.877747    0.979728      -0.007723   
population                0.877747    1.000000    0.907222       0.004834   
households                0.979728    0.907222    1.000000       0.013033   
median_income            -0.007723    0.004834    0.013033       1.000000   
median_house_value        0.049686   -0.024650    0.065843       0.688075   

                    median_house_value  
longitude                    -0.045967  
latitude                     -0.144160  
housing_median_age            0.105623  
total_rooms                   0.134153  
total_bedrooms                0.049686  
population                   -0.024650  
households                    0.065843  
median_income                 0.688075  
median_house_value            1.000000

pandas.core.frame.DataFrame
