1. What is the most-ordered item?
c = chipo[['item_name','quantity']].groupby(['item_name'],as_index=False).agg({'quantity':sum})
c.sort_values(['quantity'],ascending=False,inplace=True)
c.head()
groupby() and agg()
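An equivalent one-liner (a sketch using the same chipo frame): chain groupby() with sum() and take the top items with nlargest().
chipo.groupby('item_name')['quantity'].sum().nlargest(5)  # top 5 items by total quantity ordered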
2. How many different items were ordered (unique values in the item_name column)?
(1)
chipo['item_name'].nunique()
(2)
len(chipo.item_name.value_counts())
nunique()
3. Convert item_price to a float
chipo['item_price'] = chipo.item_price.apply(lambda x: float(x[1:]))  # drop the leading '$' and cast the rest to float
apply()+lambda
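A vectorized alternative without apply(), as a sketch that assumes item_price is still the raw string column with a leading '$':
chipo['item_price'] = chipo.item_price.str.lstrip('$').astype(float)  # strip '$' with the str accessor, then cast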
4. How much revenue was generated over the period covered by the dataset?
revenue = (chipo['quantity'] * chipo['item_price']).sum()
sum()
5. Sort the discipline dataframe by Red Cards first, then by Yellow Cards
discipline.sort_values(['Red Cards', 'Yellow Cards'], ascending = False)
sort_values(), sort_index()
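For contrast, sort_index() orders rows by the index labels rather than by column values; a minimal sketch on a toy frame (names and values are made up):
import pandas as pd
toy = pd.DataFrame({'Red Cards': [1, 0], 'Yellow Cards': [5, 9]}, index=['Spain', 'France'])
toy.sort_values(['Red Cards', 'Yellow Cards'], ascending=False)  # order rows by column values
toy.sort_index()                                                 # order rows by the index labels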
6. Select the teams whose names start with the letter G
euro12[euro12.Team.str.startswith('G')]
str.startswith()
7. Find the Shooting Accuracy for England, Italy, and Russia
euro12.loc[euro12.Team.isin(['England', 'Italy', 'Russia']), ['Team','Shooting Accuracy']]
loc[], isin()
8. For each continent, print the mean, max, and min of spirit_servings
drinks.groupby('continent').spirit_servings.agg(['mean', 'min', 'max'])
9. Convert the Year column to datetime64
crime.Year = pd.to_datetime(crime.Year, format='%Y')
pd.to_datetime()
10. Set the Year column as the index of the dataframe
crime = crime.set_index('Year', drop = True)
set_index()
11. del df['temp'] is correct, while del df.temp is wrong
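A minimal sketch of the difference (df and 'temp' here are hypothetical):
import pandas as pd
df = pd.DataFrame({'temp': [1, 2], 'keep': [3, 4]})
del df['temp']    # works: bracket access supports deletion
# del df.temp     # raises AttributeError: attribute-style access does not support deletion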
12. Group the dataframe by Year (per decade) and sum the values
temp = crime.resample('10AS').sum()  # resample the time series per decade
# use resample to get the max of the 'Population' column (summing population across a decade would overstate it)
population = crime['Population'].resample('10AS').max()
# update 'Population' with the decade maximum
temp['Population'] = population
resample()
13. What was the most dangerous decade to live in US history?
crime.idxmax(0)
Signature: df.idxmax(axis=0, skipna=True)
Docstring:
Return index of first occurrence of maximum over requested axis.
NA/null values are excluded.
axis : {0 or 'index', 1 or 'columns'}, default 0
    0 or 'index' for row-wise, 1 or 'columns' for column-wise
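A small sketch of the axis argument on toy data (column names and values are made up):
import pandas as pd
toy = pd.DataFrame({'Murder': [3, 9, 1], 'Theft': [7, 2, 8]}, index=[1960, 1970, 1980])
toy.idxmax(0)   # per column: the index label (year) where that column is largest
toy.idxmax(1)   # per row: the column label with the largest value in that row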
14. The standard raw data format for constructing dataframes:
raw_data_1 = {
'subject_id': ['1', '2', '3', '4', '5'],
'first_name': ['Alex', 'Amy', 'Allen', 'Alice', 'Ayoung'],
'last_name': ['Anderson', 'Ackerman', 'Ali', 'Aoni', 'Atiches']}
raw_data_2 = {
'subject_id': ['4', '5', '6', '7', '8'],
'first_name': ['Billy', 'Brian', 'Bran', 'Bryce', 'Betty'],
'last_name': ['Bonder', 'Black', 'Balwner', 'Brice', 'Btisan']}
raw_data_3 = {
'subject_id': ['1', '2', '3', '4', '5', '7', '8', '9', '10', '11'],
'test_id': [51, 15, 15, 61, 16, 14, 15, 1, 61, 16]}
Note that the data format used to construct a dataframe is a dict
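A sketch of turning the dicts into dataframes; data1 and data2 are assumed names, while all_data and data3 are the names the merge step below uses:
import pandas as pd
data1 = pd.DataFrame(raw_data_1, columns=['subject_id', 'first_name', 'last_name'])
data2 = pd.DataFrame(raw_data_2, columns=['subject_id', 'first_name', 'last_name'])
data3 = pd.DataFrame(raw_data_3, columns=['subject_id', 'test_id'])
all_data = pd.concat([data1, data2])   # stack the two frames row-wise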
15. Merge all_data and data3 on the values of subject_id
pd.merge(all_data, data3, on='subject_id')
merge(on=)
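merge() defaults to an inner join on the key; a sketch of the outer variant, which keeps rows from both sides even when subject_id has no match:
pd.merge(all_data, data3, on='subject_id', how='outer')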
16. How to read a '.data' file
data = pd.read_table('wind.data', sep=r'\s+', parse_dates=[[0, 1, 2]])
17. The year 2061? Do we really have data for that year? Create a function and use it to fix this bug.
The 'Yr_Mo_Dy' column has dtype datetime64.
import datetime

def fix_century(x):
    year = x.year - 100 if x.year > 1989 else x.year
    return datetime.date(year, x.month, x.day)

# apply fix_century to the column and replace the values with the corrected ones
data['Yr_Mo_Dy'] = data['Yr_Mo_Dy'].apply(fix_century)
Note how attributes of datetime values are accessed here, e.g. x.year
18. For each location, compute the mean wind speed in January
DataFrame.query() is the equivalent of a SQL query statement
data['date'] = data.index
# create separate month, year and day columns from the date
data['month'] = data['date'].apply(lambda date: date.month)
data['year'] = data['date'].apply(lambda date: date.year)
data['day'] = data['date'].apply(lambda date: date.day)
# select all rows from month 1 and assign them to january_winds
january_winds = data.query('month == 1')
# take the mean of january_winds, using .loc to exclude the month, year and day columns
january_winds.loc[:, 'RPT':'MAL'].mean()
query()
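query('month == 1') is just a more readable spelling of a boolean mask; a sketch of the equivalent indexing, assuming the month column created above:
january_winds = data[data['month'] == 1]      # boolean-mask selection instead of query()
january_winds.loc[:, 'RPT':'MAL'].mean()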
19. Are there any duplicate dates?
apple.index.is_unique
index.is_unique
20. How many days are there between the earliest and the latest date in the dataset?
(apple.index.max() - apple.index.min()).days
21. How many months are there in the data?
apple_months = apple.resample('BM').mean()
len(apple_months.index)
resample() is a very important function for time-series resampling (frequency conversion)
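A sketch of other resampling rules on the same index (offset aliases as used in the pandas version these notes assume):
apple.resample('BM').mean()   # business month end
apple.resample('A').mean()    # calendar year end
apple.resample('W').mean()    # weekly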
22. Drop the rows that have missing values
iris = iris.dropna(how='any')
dropna(how = )
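how='any' drops a row if any value is missing, while how='all' drops it only when every value is missing; a short sketch:
iris.dropna(how='any')   # drop a row that contains at least one NaN
iris.dropna(how='all')   # drop a row only if all of its values are NaN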
23. Reset the index
iris = iris.reset_index(drop = True)
reset_index(drop=True)