kaggle——泰坦尼克之灾2

之前已经写过一篇关于这个比赛的,简单的描述了比赛的大致流程,参考。在这之后又看了rank3的kernal,参考链接,对比起来内容更加详细综合,所以用这篇来总结一下。

1、流程

就这个案例来讲,导入数据之后要做的,我分为3步走:
1、观察数据,了解特征的含义以及与生存率的关系,方便做特征工程
2、特征工程&数据清洗,这一步是为了得到一个可以用于训练的好且完整的数据。
3、跑模型
4、提交

2、代码及分析

先导入需要使用的库

"""导入库"""
# 数据分析与整理
import pandas as pd
import numpy as np
import random as rnd
# 可视化
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# 机器学习
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier

然后获取训练集和测试集的数据

"""获取数据"""
train_df = pd.read_csv('train.csv')
test_df = pd.read_csv('test.csv')
combine = [train_df, test_df]

2.1观察数据

print(train_df.columns.values)# 初步了解有什么特征

Out[]:['PassengerId' 'Survived' 'Pclass' 'Name' 'Sex' 'Age' 'SibSp' 'Parch'
'Ticket' 'Fare' 'Cabin' 'Embarked']

train_df.head(3)# 预览前3行

Out[]:

train_df.info()# 了解每列特征的非空值数量和数据类型,方便数据清洗
test_df.info()

Out[]:

初步观察到特征Age、Cabin、Embarked有缺失值,而Cabin缺失得比较严重,可能要删去。特征Sex、Embarked等的数据类型是object。这些特征都要进行数据清洗。

同样的,观察到特征Age、Fare,Cabin有缺失值,而Cabin缺失得比较严重,可能要删去。特征Sex、Embarked等的数据类型是object。这些特征都要进行数据清洗。
接着观察单个特征与生存率的关系,了解数据之间的相关性,为构造特征工程做准备。

train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)

Out[]:

Pclass为1表示最高级车厢,也表示乘客社会阶级越高,生存率也更高

train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)

Out[]:


female的生存率更高,满足电影里“女士先行,男士断后”

还可以通过可视化的方式更直观地体现多个特征与生存率的相关性

g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)

Out[]:

grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();

Out[]:

grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()

2.2特征工程&数据清洗

主要操作是删除特征或提取新特征,并对特征进行数据清洗。数据清洗包括缺失值填充、数据转换(包括将数据类型转换为机器学习可以处理的int型,或将数据映射到区间里,对满足同个区间赋予同样的处理值,参考特征Age和Fare的处理)

缺失值补充分两种情况
对于连续型特征:
1、最简单用中位数或者平均值填充。
2、平均值和标准差之间生成随机数。
3、使用其他相关特征。假设猜测Age,可使用不同pclass和gender组合时的Age中值来猜测。pclass=1,gender=0,pclass=1,gender=1的中位年龄等等来填充
对于分类型特征: 用类别最多的类别补充

由于特征Cabin包含许多空值。特征Ticket包含高重复率(22%),并且与生存率之间可能没有相关性。所以将这两个特征删除。

train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]

利用特征Name提取新特征Title

for dataset in combine:
    dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)# expand=False表示返回DataFrame
# 用一个更常见的名字替换许多标题,分类稀有标题
for dataset in combine:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
    'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')

删除特征Name和PassengerId

train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]

将特征Sex的数据类型转换为机器学习可以处理的int型

for dataset in combine:
    dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)

特征Age缺失值填充:利用相关特征Sex和Pclass来估计Age的值

guess_ages = np.zeros((2,3))
# 迭代sex(0或1)和pclass(1,2,3)来计算六个组合的年龄估计值。
for dataset in combine:
    for i in range(0, 2):
        for j in range(0, 3):
            guess_df = dataset[(dataset['Sex'] == i) & \
                                  (dataset['Pclass'] == j+1)]['Age'].dropna()
            age_guess = guess_df.median()
            guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
    for i in range(0, 2):
        for j in range(0, 3):
            dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
                    'Age'] = guess_ages[i,j]
    dataset['Age'] = dataset['Age'].astype(int)

利用特征Age提取新特征AgeBand,目的是根据AgeBand的区间重新赋予Age处理值

train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)

Out[]:

for dataset in combine:    
    dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age']

删除特征AgeBand

train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]

利用特征Parch和SibSp提取新特征FamilySize,目的是引出新特征IsAlone

for dataset in combine:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
for dataset in combine:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1

删除特征Parch、SibSp、FamilySize(这些被用来提取出特征IsAlone)

train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]

利用特征Age和Class提取新特征Age*Class

for dataset in combine:
    dataset['Age*Class'] = dataset.Age * dataset.Pclass

Embarked缺失值填充(用最常见的类别填充)并

freq_port = train_df.Embarked.dropna().mode()[0]# 众数
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)

Fare缺失值填充并利用新特征FareBand的区间重新赋予Fare处理值,最后将特征FareBand删除

test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)

Out[]:

for dataset in combine:
    dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare']   = 2
    dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
    dataset['Fare'] = dataset['Fare'].astype(int)

train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]

最后预览一下经过特征工程和数据清洗后的训练集和测试集数据

train_df.head(10)
test_df.head(10)

2.3跑模型

X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test  = test_df.drop("PassengerId", axis=1).copy()

逻辑回归:

# Logistic Regression

logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log

Out[]:

80.359999999999999

SVC:

# Support Vector Machines

svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc

Out[]:

83.840000000000003

KNN:

knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn

Out[]:

84.739999999999995

朴素贝叶斯:

# Gaussian Naive Bayes

gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian

Out[]:

72.280000000000001

感知器:

# Perceptron

perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron

Out[]:

78.0

Linear SVC:

# Linear SVC

linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc

Out[]:

79.120000000000005

随机梯度下降:

# Stochastic Gradient Descent

sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd

Out[]:

76.879999999999995

决策树:

# Decision Tree

decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree

Out[]:

86.760000000000005

随机森林:

# Random Forest

random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest

Out[]:

86.760000000000005

对所有模型的评估进行排名,以选择适合我们问题的最佳模型。当决策树和随机森林得分相同时,我们选择使用随机森林,因为它们纠正了决策树的过度适应训练集的习惯。

models = pd.DataFrame({
    'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression', 
              'Random Forest', 'Naive Bayes', 'Perceptron', 
              'Stochastic Gradient Decent', 'Linear SVC', 
              'Decision Tree'],
    'Score': [acc_svc, acc_knn, acc_log, 
              acc_random_forest, acc_gaussian, acc_perceptron, 
              acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)

Out[]:

2.3提交

submission = pd.DataFrame({
        "PassengerId": test_df["PassengerId"],
        "Survived": Y_pred
    })
submission.to_csv('submission.csv', index=False)

3、优点

相比其他kernal,能做到rank3,我觉得最突出的优点是在数据的处理上,即特征工程&数据清洗上。下面我给出它与我上一篇泰坦尼克之灾1.0的一些对比,直观的了解这个kernal的优点(1.0 vs 2.0,只给出对比)

4、总结

对于一般的简单机器学习,先进行数据观察和探索,了解特征以及特征之间的关系(方便做特征工程);然后进行特征工程,得到用于训练的特征,并对特征做数据清洗,得到可以用于训练和测试的好且完整的数据;最后就可以跑模型了,可以跑多个基准模型看看效果,复杂的数据可以尝试混合模型集成学习,具体情况具体分析。

你可能感兴趣的:(kaggle——泰坦尼克之灾2)