Table of Contents
- Chapter 2, Part 2: Data Reconstruction
- 2.4 Combining the data
- 2.4.1 Task 1: Load all the files in the data folder and observe how the pieces relate to each other
- 2.4.2 Task 2: Use the concat method to join train-left-up.csv and train-right-up.csv horizontally into one table, saved as result_up
- 2.4.3 Task 3: Use the concat method to join train-left-down and train-right-down horizontally into one table saved as result_down, then stack result_up and result_down vertically into result
- 2.4.4 Task 4: Complete Task 2 and Task 3 using the DataFrame methods join and append
- 2.4.5 Task 5: Complete Task 2 and Task 3 using Pandas' merge method and DataFrame's append method
- 2.4.6 Task 6: Save the completed data as result.csv
- 2.5 Looking at the data from another angle
- 2.5.1 Task 1: Turn our data into Series-type data
- 2.6 Putting the data to use
- 2.6.1 Task 1: Learn about the GroupBy mechanism from the textbook "Python for Data Analysis" (p. 303), Google, or any other source
- 2.6.2 Task 2: Compute the average fare paid by male and female passengers on the Titanic
- 2.6.3 Task 3: Count the numbers of male and female survivors on the Titanic
- 2.6.4 Task 4: Compute the number of survivors in each cabin class
- 2.6.5 Task 5: Compute the average fare for each age within each ticket class
- 2.6.6 Task 6: Combine the data from Task 2 and Task 3 and save it to sex_fare_survived.csv
- 2.6.7 Task 7: Get the total number of survivors at each age, find the age with the most survivors, and compute the survival rate for that age (number of survivors / total count)
Review: Earlier we covered the Pandas basics. In Chapter 2 we move into the business side of data analysis. In the first section of Chapter 2 we studied data cleaning; this step matters a great deal, because only once the data is reasonably clean can the analysis that follows be convincing. In this section we turn to data reconstruction, which still belongs to the data understanding (preparation) stage.
Before starting, import the numpy and pandas packages and load the data.
import numpy as np
import pandas as pd
text = pd.read_csv('./data/train-left-up.csv')
text.head()
| | PassengerId | Survived | Pclass | Name |
| --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry |
Chapter 2, Part 2: Data Reconstruction
2.4 Combining the data
2.4.1 Task 1: Load all the files in the data folder and observe how the pieces relate to each other
text_left_up = pd.read_csv('./data/train-left-up.csv')
text_left_down = pd.read_csv('./data/train-left-down.csv')
text_right_up = pd.read_csv('./data/train-right-up.csv')
text_right_down = pd.read_csv('./data/train-right-down.csv')
text_left_up.head()
| | PassengerId | Survived | Pclass | Name |
| --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry |
text_left_down.head()
| | PassengerId | Survived | Pclass | Name |
| --- | --- | --- | --- | --- |
| 0 | 440 | 0 | 2 | Kvillner, Mr. Johan Henrik Johannesson |
| 1 | 441 | 1 | 2 | Hart, Mrs. Benjamin (Esther Ada Bloomfield) |
| 2 | 442 | 0 | 3 | Hampe, Mr. Leon |
| 3 | 443 | 0 | 3 | Petterson, Mr. Johan Emil |
| 4 | 444 | 1 | 2 | Reynaldo, Ms. Encarnacion |
text_right_up.head()
| | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
text_right_down.head()
| | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | male | 31.0 | 0 | 0 | C.A. 18723 | 10.500 | NaN | S |
| 1 | female | 45.0 | 1 | 1 | F.C.C. 13529 | 26.250 | NaN | S |
| 2 | male | 20.0 | 0 | 0 | 345769 | 9.500 | NaN | S |
| 3 | male | 25.0 | 1 | 0 | 347076 | 7.775 | NaN | S |
| 4 | female | 28.0 | 0 | 0 | 230434 | 13.000 | NaN | S |
[Hint] Combine this with the train.csv data we loaded earlier and make a rough guess at what the data above represents.
Answer: train.csv has been split into four blocks: upper-left, lower-left, upper-right, and lower-right.
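A quick way to back this up is to compare the shapes of the four pieces against the full train.csv. The sketch below assumes the train.csv loaded in the previous section is still available at ./data/train.csv; the row counts (439 + 452 = 891) and column counts (4 + 8 = 12) should tile the full table.
# Sanity check (sketch): the four pieces should tile the full train.csv
train = pd.read_csv('./data/train.csv')
print(text_left_up.shape, text_right_up.shape)      # expected: (439, 4) (439, 8)
print(text_left_down.shape, text_right_down.shape)  # expected: (452, 4) (452, 8)
print(train.shape)                                  # expected: (891, 12)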
2.4.2 Task 2: Use the concat method to join train-left-up.csv and train-right-up.csv horizontally into one table, saved as result_up
result_up = pd.concat([text_left_up,text_right_up],axis=1)
result_up.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1.0 | 0.0 | 3.0 | Braund, Mr. Owen Harris | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2.0 | 1.0 | 1.0 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3.0 | 1.0 | 3.0 | Heikkinen, Miss. Laina | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4.0 | 1.0 | 1.0 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | 5.0 | 0.0 | 3.0 | Allen, Mr. William Henry | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
2.4.3 Task 3: Use the concat method to join train-left-down and train-right-down horizontally into one table saved as result_down, then stack result_up and result_down vertically into result
result_down = pd.concat([text_left_down,text_right_down],axis=1)
result_down.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 440 | 0 | 2 | Kvillner, Mr. Johan Henrik Johannesson | male | 31.0 | 0 | 0 | C.A. 18723 | 10.500 | NaN | S |
| 1 | 441 | 1 | 2 | Hart, Mrs. Benjamin (Esther Ada Bloomfield) | female | 45.0 | 1 | 1 | F.C.C. 13529 | 26.250 | NaN | S |
| 2 | 442 | 0 | 3 | Hampe, Mr. Leon | male | 20.0 | 0 | 0 | 345769 | 9.500 | NaN | S |
| 3 | 443 | 0 | 3 | Petterson, Mr. Johan Emil | male | 25.0 | 1 | 0 | 347076 | 7.775 | NaN | S |
| 4 | 444 | 1 | 2 | Reynaldo, Ms. Encarnacion | female | 28.0 | 0 | 0 | 230434 | 13.000 | NaN | S |
result = pd.concat([result_up,result_down],axis=0)
result.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1.0 | 0.0 | 3.0 | Braund, Mr. Owen Harris | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2.0 | 1.0 | 1.0 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3.0 | 1.0 | 3.0 | Heikkinen, Miss. Laina | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4.0 | 1.0 | 1.0 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | 5.0 | 0.0 | 3.0 | Allen, Mr. William Henry | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
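Note that after the vertical concat the row index restarts for the lower block (0–438 is followed by 0–451). If a continuous 0–890 index is preferred, it can be rebuilt; a minimal sketch, not applied here so the outputs below stay unchanged (result_reindexed is a made-up name):
# Optional sketch: give the stacked table a continuous 0..890 index
result_reindexed = result.reset_index(drop=True)
result_reindexed.index   # RangeIndex running from 0 to 890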
2.4.4 Task 4: Complete Task 2 and Task 3 using the DataFrame methods join and append
- join adds columns.
- append adds rows of data.
result_up = text_left_up.join(text_right_up)
result_up.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
result_down = text_left_down.join(text_right_down)
result_down.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 440 | 0 | 2 | Kvillner, Mr. Johan Henrik Johannesson | male | 31.0 | 0 | 0 | C.A. 18723 | 10.500 | NaN | S |
| 1 | 441 | 1 | 2 | Hart, Mrs. Benjamin (Esther Ada Bloomfield) | female | 45.0 | 1 | 1 | F.C.C. 13529 | 26.250 | NaN | S |
| 2 | 442 | 0 | 3 | Hampe, Mr. Leon | male | 20.0 | 0 | 0 | 345769 | 9.500 | NaN | S |
| 3 | 443 | 0 | 3 | Petterson, Mr. Johan Emil | male | 25.0 | 1 | 0 | 347076 | 7.775 | NaN | S |
| 4 | 444 | 1 | 2 | Reynaldo, Ms. Encarnacion | female | 28.0 | 0 | 0 | 230434 | 13.000 | NaN | S |
result = result_up.append(result_down)
result.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
2.4.5 Task 5: Complete Task 2 and Task 3 using Pandas' merge method and DataFrame's append method
pd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,
         left_index=False, right_index=False, sort=True,
         suffixes=('_x', '_y'), copy=True, indicator=False,
         validate=None)
- left: the left-hand DataFrame to join.
- right: the right-hand DataFrame to join.
- on: the column(s) to join on; they must be found in both the left and right DataFrames. If not given and left_index/right_index are False, the intersection of the two frames' columns is used as the join key.
- left_on: column(s) or index level(s) from the left DataFrame to use as keys. Can be a column name, an index level name, or an array whose length equals the length of the DataFrame.
- right_on: column(s) or index level(s) from the right DataFrame to use as keys, with the same options as left_on.
- left_index: if True, use the index (row labels) of the left DataFrame as its join key.
- right_index: same as left_index, but for the right DataFrame.
- how: the join type, one of 'left', 'right', 'outer', 'inner'; the default is 'inner'. 'inner' keeps the intersection of the keys, 'outer' keeps the union. For example, with left keys ['A', 'B', 'C'] and right keys ['A', 'C', 'D'], an inner join matches each 'A' on the left with every 'A' on the right, while 'B', which has no match on the right, is dropped; an outer join keeps all keys and fills the unmatched side with missing values (see the small sketch after this list).
- sort: sort the result DataFrame lexicographically by the join keys. Defaults to True; setting it to False can noticeably improve performance in many cases.
- suffixes: suffixes appended to overlapping column names from the two frames. Defaults to ('_x', '_y').
- copy: always copy data from the passed DataFrame objects (default True), even when reindexing is not necessary.
- indicator: adds a column named _merge to the output DataFrame with information about the source of each row. _merge is categorical: it takes the value left_only for rows whose merge key appears only in the left DataFrame, right_only for rows whose key appears only in the right DataFrame, and both when the key is found in both.
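To make the on/how behaviour concrete, here is a minimal sketch on two toy frames (the names df1 and df2 are made up purely for illustration):
# Toy illustration of inner vs. outer joins on a shared key column
df1 = pd.DataFrame({'key': ['A', 'B', 'C'], 'x': [1, 2, 3]})
df2 = pd.DataFrame({'key': ['A', 'C', 'D'], 'y': [4, 5, 6]})
print(pd.merge(df1, df2, on='key', how='inner'))  # keeps only keys A and C (the intersection)
print(pd.merge(df1, df2, on='key', how='outer'))  # keeps A, B, C, D; unmatched cells become NaN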
result_up = pd.merge(text_left_up,text_right_up,left_index=True,right_index=True)
result_down = pd.merge(text_left_down,text_right_down,left_index=True,right_index=True)
result = result_up.append(result_down)
result.head()
| | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
[Think] Compare how merge, join, and concat differ and what they have in common. In Task 4 and Task 5, why is DataFrame's append method required in both cases? Could Task 4 and Task 5 be completed using only merge or only join?
Answer: join is a simplified version of merge, without the column-key-based join options; concat simply stitches DataFrame objects together.
merge or join alone cannot finish the tasks, because both operate on columns, whereas append concatenates along rows.
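A side note: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on current versions the row-wise step in Task 4 and Task 5 would be written with pd.concat instead. A sketch of the equivalent call:
# Row-wise combination without DataFrame.append (works on recent pandas versions)
result = pd.concat([result_up, result_down], axis=0)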
2.4.6 Task 6: Save the completed data as result.csv
result.to_csv('result.csv')
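Note that to_csv writes the row index by default, which is why an extra "Unnamed: 0" column shows up when result.csv is read back in section 2.5. If that column is unwanted, the index can be dropped at save time; a sketch, not used here, so the later outputs still show the extra column:
# Save without the row index so no "Unnamed: 0" column appears on re-read
result.to_csv('result.csv', index=False)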
2.5 Looking at the data from another angle
2.5.1 Task 1: Turn our data into Series-type data
text = pd.read_csv('./result.csv')
text.head()
| | Unnamed: 0 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
unit_result = text.stack().head(20)
unit_result
0 Unnamed: 0 0
PassengerId 1
Survived 0
Pclass 3
Name Braund, Mr. Owen Harris
Sex male
Age 22
SibSp 1
Parch 0
Ticket A/5 21171
Fare 7.25
Embarked S
1 Unnamed: 0 1
PassengerId 2
Survived 1
Pclass 1
Name Cumings, Mrs. John Bradley (Florence Briggs Th...
Sex female
Age 38
SibSp 1
dtype: object
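stack() pivots the columns into an inner index level, producing a Series with a (row, column name) MultiIndex; in the pandas version used here it also drops NaN cells by default, which is why row 0 has no Cabin entry above. The operation can be reversed with unstack(); a small sketch (wide_again is a made-up name):
# Sketch: unstack() goes back from the stacked Series to a wide table
wide_again = unit_result.unstack()
wide_again.head()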
unit_result.to_csv('./unit_result.csv')
test = pd.read_csv('./unit_result.csv')
test.head()
| | 0 | Unnamed: 0 | 0.1 |
| --- | --- | --- | --- |
| 0 | 0 | PassengerId | 1 |
| 1 | 0 | Survived | 0 |
| 2 | 0 | Pclass | 3 |
| 3 | 0 | Name | Braund, Mr. Owen Harris |
| 4 | 0 | Sex | male |
result = pd.read_csv('./result.csv')
result.head()
| | Unnamed: 0 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1.0 | 0.0 | A/5 21171 | 7.2500 | NaN | S |
| 1 | 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1.0 | 0.0 | PC 17599 | 71.2833 | C85 | C |
| 2 | 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0.0 | 0.0 | STON/O2. 3101282 | 7.9250 | NaN | S |
| 3 | 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1.0 | 0.0 | 113803 | 53.1000 | C123 | S |
| 4 | 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0.0 | 0.0 | 373450 | 8.0500 | NaN | S |
2.6 Putting the data to use
2.6.1 Task 1: Learn about the GroupBy mechanism from the textbook "Python for Data Analysis" (p. 303), Google, or any other source
groupby is used to split the data into groups and then run computations within each group; a minimal sketch follows this list.
- df.groupby(grouping column)[target column]: group by the grouping column and compute statistics on the target column.
- df['target column'].groupby(df['grouping column']): the equivalent Series-based form.
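Here is a minimal GroupBy sketch on a toy frame (the names toy, group, and value are made up for illustration):
# Split-apply-combine in miniature: group rows by 'group', then aggregate 'value'
toy = pd.DataFrame({'group': ['a', 'a', 'b'], 'value': [1, 2, 3]})
print(toy.groupby('group')['value'].mean())      # a -> 1.5, b -> 3.0
print(toy['value'].groupby(toy['group']).sum())  # the equivalent Series form: a -> 3, b -> 3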
2.6.2 Task 2: Compute the average fare paid by male and female passengers on the Titanic
result.groupby('Sex')['Fare'].mean()
Sex
female 44.479818
male 25.523893
Name: Fare, dtype: float64
Now that the GroupBy mechanism is understood, we apply it through a series of operations to reach our goals.
The following tasks provide practice with the GroupBy mechanism.
2.6.3 Task 3: Count the numbers of male and female survivors on the Titanic
result.groupby('Sex')['Survived'].sum()
Sex
female 233
male 109
Name: Survived, dtype: int64
2.6.4 Task 4: Compute the number of survivors in each cabin class
result.groupby('Pclass')['Survived'].sum()
survived_pclass = result['Survived'].groupby(result['Pclass'])
survived_pclass.sum()
Pclass
1 136
2 87
3 119
Name: Survived, dtype: int64
[Hint] In the Survived column, a passenger who survived is recorded as 1 and one who died as 0.
[Think] From a data-analysis point of view, what conclusions can be drawn from the statistics above?
Thoughts:
- Female passengers paid higher fares on average than male passengers.
- Women were given priority in the allocation of lifeboats, so more of them survived.
- Cabins in classes 1 and 3 may have been easier to escape from than those in class 2.
[Think] The computations in Task 2 through Task 4 can be carried out at the same time with the agg() function, and the column names can be changed with the rename function. Can you write out that process following this hint?
temp = result.groupby('Sex').agg({'Fare':'mean','Survived':'sum'})
temp.columns = ['价格','是否幸存']
temp.index.name = '性别'
temp
| 性别 | 价格 | 是否幸存 |
| --- | --- | --- |
| female | 44.479818 | 233 |
| male | 25.523893 | 109 |
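The hint above mentions the rename function; the column assignment can equally be written with DataFrame.rename, which is closer to what the hint suggests. A sketch producing the same table:
# Equivalent to the column assignment above, but via rename
temp = (result.groupby('Sex')
              .agg({'Fare': 'mean', 'Survived': 'sum'})
              .rename(columns={'Fare': '价格', 'Survived': '是否幸存'}))
temp.index.name = '性别'
temp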
2.6.5 Task 5: Compute the average fare for each age within each ticket class
result.groupby(['Pclass','Age'])['Fare'].mean().head()
Pclass Age
1 0.92 151.5500
2.00 151.5500
4.00 81.8583
11.00 120.0000
14.00 120.0000
Name: Fare, dtype: float64
2.6.6 Task 6: Combine the data from Task 2 and Task 3 and save it to sex_fare_survived.csv
temp.to_csv('./sex_fare_survived.csv',encoding='gb2312')
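The line above saves the agg result from the earlier "Think" block, which already contains both statistics. A more literal reading of the task, explicitly combining the Task 2 and Task 3 results before saving, could look like this sketch (mean_fare and survived_count are made-up names):
# Sketch: combine the Task 2 and Task 3 results by Sex, then save
mean_fare = result.groupby('Sex')['Fare'].mean()
survived_count = result.groupby('Sex')['Survived'].sum()
sex_fare_survived = pd.merge(mean_fare, survived_count, left_index=True, right_index=True)
sex_fare_survived.to_csv('sex_fare_survived.csv')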
2.6.7 Task 7: Get the total number of survivors at each age, find the age with the most survivors, and compute the survival rate for that age (number of survivors / total count)
age_survived = result.groupby('Age')['Survived'].sum()
age_survived[age_survived.values==age_survived.max()]
Age
24.0 15
Name: Survived, dtype: int64
_sum = result['Survived'].sum()
_sum
342
percent = age_survived.max()/_sum
print(percent)
0.043859649122807015
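The same figures can be read off a little more directly with idxmax; note that _sum above is the total number of survivors (342), so the printed value is the share of all survivors who were at the age with the most survivors. A sketch:
# Sketch: the age with the most survivors and its share of all survivors
top_age = age_survived.idxmax()                        # 24.0 in this data
share = age_survived.max() / result['Survived'].sum()
print(top_age, share)                                  # 24.0 0.0438...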