Review: earlier we covered the Pandas basics, and in Chapter 2 we move into the business side of data analysis. In the first section of Chapter 2 we learned data cleaning, which matters a great deal: only once the data is reasonably clean can the analysis that follows carry real weight. In this section we turn to data restructuring, which still belongs to the data-understanding (preparation) stage.
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = 'all'
# Import the basic libraries
import numpy as np
import pandas as pd
# Load train-left-up.csv from the data folder
text = pd.read_csv("data/train-left-up.csv")
text.head()
 | PassengerId | Survived | Pclass | Name
---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) |
4 | 5 | 0 | 3 | Allen, Mr. William Henry |
# Write the code here
df = pd.read_csv('train.csv')
df.head()
df.shape
 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
(891, 12)
text_left_up = pd.read_csv("data/train-left-up.csv")
text_left_down = pd.read_csv("data/train-left-down.csv")
text_right_up = pd.read_csv("data/train-right-up.csv")
text_right_down = pd.read_csv("data/train-right-down.csv")
# Write the code here
text_left_up.head()
text_left_up.shape
 | PassengerId | Survived | Pclass | Name
---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) |
4 | 5 | 0 | 3 | Allen, Mr. William Henry |
(439, 4)
text_left_down.head()
text_left_down.shape
 | PassengerId | Survived | Pclass | Name
---|---|---|---|---|
0 | 440 | 0 | 2 | Kvillner, Mr. Johan Henrik Johannesson |
1 | 441 | 1 | 2 | Hart, Mrs. Benjamin (Esther Ada Bloomfield) |
2 | 442 | 0 | 3 | Hampe, Mr. Leon |
3 | 443 | 0 | 3 | Petterson, Mr. Johan Emil |
4 | 444 | 1 | 2 | Reynaldo, Ms. Encarnacion |
(452, 4)
text_right_up.head()
text_right_up.shape
 | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|
0 | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
(439, 8)
text_right_down.head()
text_right_down.shape
 | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|
0 | male | 31.0 | 0 | 0 | C.A. 18723 | 10.500 | NaN | S |
1 | female | 45.0 | 1 | 1 | F.C.C. 13529 | 26.250 | NaN | S |
2 | male | 20.0 | 0 | 0 | 345769 | 9.500 | NaN | S |
3 | male | 25.0 | 1 | 0 | 347076 | 7.775 | NaN | S |
4 | female | 28.0 | 0 | 0 | 230434 | 13.000 | NaN | S |
(452, 8)
Hint: combining these with the train.csv data loaded earlier, make a rough guess about what the data above is.
Answer: from the df.shape attribute we can see that the 891-row, 12-column DataFrame was cut into four smaller DataFrames: split between rows 439 and 440, and between the 4th and 5th columns. The file names indicate each piece's position in the original DataFrame.
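The shapes support this guess (a quick check using the four frames already loaded):
# Row counts of the two left pieces add up to the full 891 rows,
# and column counts of the two upper pieces add up to the full 12 columns.
text_left_up.shape[0] + text_left_down.shape[0]   # 439 + 452 = 891
text_left_up.shape[1] + text_right_up.shape[1]    # 4 + 8 = 12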
# Write the code here
list_up = [text_left_up, text_right_up]  # put the two DataFrames into one list
result_up = pd.concat(list_up, axis=1)   # concat takes a list and merges its elements; axis=1 joins horizontally (column-wise)
result_up.head()
type(result_up)
result_up.shape
 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
pandas.core.frame.DataFrame
(439, 12)
Analysis: the two upper pieces of the original df have been merged horizontally into one new DataFrame.
# Save the table as result_up.csv
result_up.to_csv('result_up.csv')
# Merge train-left-down and train-right-down horizontally into one table, and save it as result_down
list_down=[text_left_down,text_right_down]
result_down = pd.concat(list_down,axis=1)
result_down.head()
result_down.to_csv('result_down.csv')
 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 440 | 0 | 2 | Kvillner, Mr. Johan Henrik Johannesson | male | 31.0 | 0 | 0 | C.A. 18723 | 10.500 | NaN | S |
1 | 441 | 1 | 2 | Hart, Mrs. Benjamin (Esther Ada Bloomfield) | female | 45.0 | 1 | 1 | F.C.C. 13529 | 26.250 | NaN | S |
2 | 442 | 0 | 3 | Hampe, Mr. Leon | male | 20.0 | 0 | 0 | 345769 | 9.500 | NaN | S |
3 | 443 | 0 | 3 | Petterson, Mr. Johan Emil | male | 25.0 | 1 | 0 | 347076 | 7.775 | NaN | S |
4 | 444 | 1 | 2 | Reynaldo, Ms. Encarnacion | female | 28.0 | 0 | 0 | 230434 | 13.000 | NaN | S |
# Merge result_up and result_down above vertically into result
result = pd.concat([result_up, result_down])  # axis=0 by default, i.e. vertical (row-wise) stacking; the row labels 0..438 and 0..451 repeat unless ignore_index=True is passed
result.head()
type(result)
result.shape
 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
pandas.core.frame.DataFrame
(891, 12)
Analysis: the upper and lower DataFrames have been merged vertically successfully.
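As a sanity check (a sketch, assuming df from train.csv is still in memory): once the repeated row labels are renumbered, the reassembled table should match the original file exactly.
# Renumber the rows 0..890 and compare with the original train.csv
result.reset_index(drop=True).equals(df)   # expected: True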
result_up = text_left_up.join(text_right_up)        # join: horizontal, aligned on the row index
result_down = text_left_down.join(text_right_down)
result = result_up.append(result_down)              # append: vertical (see the pandas 2.0 note below)
result.head()
type(result)
result.shape
 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
pandas.core.frame.DataFrame
(891, 12)
Analysis: join handles the horizontal step and append the vertical one; see the references below for the parameters and usage of each.
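One version caveat: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0, so on newer versions the vertical step is written with pd.concat instead (a sketch producing the same result):
# join still does the horizontal step; concat replaces append for the vertical one
result_up = text_left_up.join(text_right_up)
result_down = text_left_down.join(text_right_down)
result = pd.concat([result_up, result_down], ignore_index=True)
result.shape   # expected: (891, 12)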
result_up = pd.merge(text_left_up, text_right_up, left_index=True, right_index=True)  # use the row indexes of the left and right frames as the join keys
result_down = pd.merge(text_left_down,text_right_down,left_index=True,right_index=True)
result = result_up.append(result_down)
result.head()
type(result)
result.shape
 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
pandas.core.frame.DataFrame
(891, 12)
Thought: compare how merge, join, and concat differ and where they overlap. Given tasks four and five, why do both ask for DataFrame's append method; could they be completed with merge or join alone?
Answer: concat simply glues a list of objects along either axis; merge and join align rows on keys or indexes, so on their own they can only widen a table (the horizontal step). Tasks four and five both need a vertical step, which append (or concat with axis=0) expresses directly; merge or join alone cannot, except by contortions such as transposing first (see the sketch below).
References: DataFrame data merging and joining (merge, join, concat);
python: combining several DataFrames into one (merge, append, join, concat)
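On the "merge/join only" question, here is a sketch of one (clunky) workaround, for illustration only: since join can only widen a table by aligning on the index, transposing first turns vertical stacking into a horizontal join. The suffixes are needed because the two transposed frames share column labels, which is exactly why this is not a practical substitute for append/concat.
# Fake vertical stacking with join alone: transpose, join, transpose back.
# The row labels come out mangled with _up/_down suffixes, so this is a toy.
fake = result_up.T.join(result_down.T, lsuffix='_up', rsuffix='_down').T
fake.shape   # expected: (891, 12)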
result.to_csv('result.csv')
What does the stack function do?
# Load the full data back in
text = pd.read_csv('result.csv')
text.head()
 | Unnamed: 0 | PassengerId | Survived | Pclass | Name | Sex | Age | SibSp | Parch | Ticket | Fare | Cabin | Embarked
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | 0 | 1 | 0 | 3 | Braund, Mr. Owen Harris | male | 22.0 | 1 | 0 | A/5 21171 | 7.2500 | NaN | S |
1 | 1 | 2 | 1 | 1 | Cumings, Mrs. John Bradley (Florence Briggs Th... | female | 38.0 | 1 | 0 | PC 17599 | 71.2833 | C85 | C |
2 | 2 | 3 | 1 | 3 | Heikkinen, Miss. Laina | female | 26.0 | 0 | 0 | STON/O2. 3101282 | 7.9250 | NaN | S |
3 | 3 | 4 | 1 | 1 | Futrelle, Mrs. Jacques Heath (Lily May Peel) | female | 35.0 | 1 | 0 | 113803 | 53.1000 | C123 | S |
4 | 4 | 5 | 0 | 3 | Allen, Mr. William Henry | male | 35.0 | 0 | 0 | 373450 | 8.0500 | NaN | S |
unit_result=text.stack().head(20)
unit_result
unit_result.shape
type(unit_result)
0 Unnamed: 0 0
PassengerId 1
Survived 0
Pclass 3
Name Braund, Mr. Owen Harris
Sex male
Age 22
SibSp 1
Parch 0
Ticket A/5 21171
Fare 7.25
Embarked S
1 Unnamed: 0 1
PassengerId 2
Survived 1
Pclass 1
Name Cumings, Mrs. John Bradley (Florence Briggs Th...
Sex female
Age 38
SibSp 1
dtype: object
(20,)
pandas.core.series.Series
Analysis: stack turns a DataFrame into a hierarchically indexed Series, i.e., it pivots the column labels down into an inner level of the row index.
Reference: https://www.lizenghai.com/archives/3403.html
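A minimal toy sketch of what stack does, alongside its inverse unstack:
# stack pivots the column labels into an inner row index; unstack reverses it
small = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
small.stack()             # Series of length 4 with a (row, column) MultiIndex
small.stack().unstack()   # back to the original 2x2 DataFrame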
# Save the result as unit_result.csv
unit_result.to_csv('unit_result.csv')
E:\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: FutureWarning: The signature of `Series.to_csv` was aligned to that of `DataFrame.to_csv`, and argument 'header' will change its default value from False to True: please pass an explicit value to suppress this warning.
test = pd.read_csv('unit_result.csv')
test.head()
type(test)
 | 0 | Unnamed: 0 | 0.1
---|---|---|---|
0 | 0 | PassengerId | 1 |
1 | 0 | Survived | 0 |
2 | 0 | Pclass | 3 |
3 | 0 | Name | Braund, Mr. Owen Harris |
4 | 0 | Sex | male |
pandas.core.frame.DataFrame
With result.csv saved, we move on to the second part of data restructuring: aggregation with the GroupBy mechanism. The analyses below keep using the text DataFrame loaded from result.csv above.
# Write down your takeaways
Takeaways: the groupby() method can flexibly combine one or more keys to split the data into groups and then run computations within each group.
References:
https://blog.csdn.net/weixin_42782150/article/details/90716533
https://www.cnblogs.com/Yanjy-OnlyOne/p/11217802.html
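A minimal toy sketch of the split-apply-combine idea behind groupby (toy data, not the Titanic set):
# split the rows by key, apply mean within each group, combine the results
toy = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1, 2, 10]})
toy.groupby('key')['val'].mean()   # a -> 1.5, b -> 10.0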
# Write the code here
grouped_fare = text['Fare'].groupby(text['Sex'])   # group the Fare column by Sex (renamed from df to avoid shadowing the train.csv frame)
means = grouped_fare.mean()
means
Sex
female 44.479818
male 25.523893
Name: Fare, dtype: float64
Analysis: women's average fare is noticeably higher than men's. One guess: more of the men were laborers on cheap tickets, or a higher share of the women were wealthy passengers.
Having understood the GroupBy mechanism, we now apply it through a series of operations to reach our goals.
The following tasks build familiarity with GroupBy.
# Write the code here
survived_sex = text['Survived'].groupby(text['Sex'])
sums = survived_sex.sum()
sums.head()
type(sums)
Sex
female 233
male 109
Name: Survived, dtype: int64
pandas.core.series.Series
Analysis: this figure suggests that, just as in the film Titanic, evacuation really was women first. But these are absolute counts; to be safe, let us look at the relative survival rate of each sex.
text['Sex'].value_counts(ascending=True)
sums/text['Sex'].value_counts(ascending=True)
female 314
male 577
Name: Sex, dtype: int64
Sex
female 0.742038
male 0.188908
dtype: float64
Analysis: evacuation was indeed women first: the female survival rate is about 0.74, while the male survival rate is only about 0.19.
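The sum-then-divide steps above can also be collapsed into one line: because Survived is coded 0/1, the mean within each group is exactly the survival rate.
# Group means of a 0/1 column are survival rates
text.groupby('Sex')['Survived'].mean()   # female ~0.742, male ~0.189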
# Write the code here
survived_pclass = text['Survived'].groupby(text['Pclass'])
survived_pclass.sum()
Pclass
1 136
2 87
3 119
Name: Survived, dtype: int64
Analysis: the absolute counts are not very meaningful on their own; better to compute the survival rate for each cabin class.
text['Pclass'].value_counts(sort=False)
rate_survived_pclass = survived_pclass.sum() / text['Pclass'].value_counts(sort=False)
rate_survived_pclass
1 216
2 184
3 491
Name: Pclass, dtype: int64
Pclass
1 0.629630
2 0.472826
3 0.242363
dtype: float64
Analysis: with this treatment we can read off the survival rate per cabin class: about 0.63 in first class, 0.47 in second, and 0.24 in third, so the higher the class, the higher the survival rate.
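The same one-liner works for cabin class:
# Mean of the 0/1 Survived column per class = survival rate per class
text.groupby('Pclass')['Survived'].mean()   # 1 -> ~0.630, 2 -> ~0.473, 3 -> ~0.242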
Hint: in the Survived column, you can see that survivors are recorded as 1 and deaths as 0.
Analysis: this coding is convenient: if deaths were recorded as 1 and survival as 0, the sums above would give death counts instead.
Thought: from a data-analysis perspective, what conclusions can be drawn from the statistics above?
Reflections:
The statistics have already been analyzed above, leading to these conclusions:
1. Women's average fare is higher, suggesting that, overall, women on board had higher status than men.
2. Women survived at a far higher rate than men, suggesting women went first during evacuation.
3. The higher the cabin class, the better the odds of survival, suggesting higher classes had priority during evacuation.
Thought: the computations in tasks two through four can all be done at once with the agg() function, and the columns renamed with rename. Can you write out that process following the hint?
text['Sex'] = text['Sex'].map({'male': 0, 'female': 1})   # encode Sex numerically so that its mean is interpretable
text.groupby('Survived').agg({'Sex': 'mean', 'Pclass': 'count'}).rename(columns=
    {'Sex': 'mean_sex', 'Pclass': 'count_pclass'})
 | mean_sex | count_pclass
---|---|---
Survived | |
0 | 0.147541 | 549 |
1 | 0.681287 | 342 |
Note: how should mean_sex be read? Since Sex is now coded 0 = male / 1 = female, mean_sex is the proportion of women within each Survived group: women make up about 0.15 of those who died and about 0.68 of the survivors.
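This reading can be cross-checked with a normalized crosstab (normalize='index' makes each row sum to 1):
# Share of each sex within each Survived group (Sex: 0 = male, 1 = female here)
pd.crosstab(text['Survived'], text['Sex'], normalize='index')
# expected: row 0 (died) ~0.85 male / ~0.15 female; row 1 (survived) ~0.32 / ~0.68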
# Write the code here
text.groupby(['Pclass','Age'])['Fare'].mean()
Pclass Age
1 0.92 151.5500
2.00 151.5500
4.00 81.8583
11.00 120.0000
14.00 120.0000
...
3 61.00 6.2375
63.00 9.5875
65.00 7.7500
70.50 7.7500
74.00 7.7750
Name: Fare, Length: 182, dtype: float64
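The two-key result carries a (Pclass, Age) MultiIndex; if a table view is easier to scan, unstack can pivot the Pclass level into columns (a sketch):
# Pivot level 0 (Pclass) into columns; rows stay indexed by Age
text.groupby(['Pclass', 'Age'])['Fare'].mean().unstack(0).head()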
# Write the code here
result = pd.merge(means, sums, on='Sex')   # join the mean-fare and survivor-count Series on their shared Sex index
result
 | Fare | Survived
---|---|---
Sex | |
female | 44.479818 | 233
male | 25.523893 | 109
# Number of survivors at each age
survived_age = text['Survived'].groupby(text['Age']).sum()
survived_age
Age
0.42 1
0.67 1
0.75 2
0.83 2
0.92 1
..
70.00 0
70.50 0
71.00 0
74.00 0
80.00 1
Name: Survived, Length: 88, dtype: int64
# Find the age with the largest number of survivors
survived_age[survived_age.values==survived_age.max()]
Age
24.0 15
Name: Survived, dtype: int64
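idxmax gives the same answer more directly:
# The index label (age) at which the survivor count peaks
survived_age.idxmax()   # expected: 24.0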
_sum = text['Survived'].sum()
print(_sum)
342
print("sum of person:"+str(_sum))
precent =survived_age.max()/_sum
print("最大存活率:"+str(precent))
sum of person:342
最大存活率:0.043859649122807015