Udacity Data Analysis: Analyzing Two-Dimensional Data with NumPy and Pandas

1. DataFrame: find the station with the most riders on the first day, then compute its mean and the overall mean

  • Create the data as a pandas DataFrame
import pandas as pd

# Subway ridership for 5 stations on 10 different days
ridership_df = pd.DataFrame(
    data=[[   0,    0,    2,    5,    0],
          [1478, 3877, 3674, 2328, 2539],
          [1613, 4088, 3991, 6461, 2691],
          [1560, 3392, 3826, 4787, 2613],
          [1608, 4802, 3932, 4477, 2705],
          [1576, 3933, 3909, 4979, 2685],
          [  95,  229,  255,  496,  201],
          [   2,    0,    1,   27,    0],
          [1438, 3785, 3589, 4174, 2215],
          [1342, 4043, 4009, 4665, 3033]],
    index=['05-01-11', '05-02-11', '05-03-11', '05-04-11', '05-05-11',
           '05-06-11', '05-07-11', '05-08-11', '05-09-11', '05-10-11'],
    columns=['R003', 'R004', 'R005', 'R006', 'R007']
)
  • Function that returns the overall mean and the mean for the busiest station
def mean_riders_for_max_station(ridership):
    '''
    Fill in this function to find the station with the maximum riders on the
    first day, then return the mean riders per day for that station. Also
    return the mean ridership overall for comparison.

    This is the same as a previous exercise, but this time the
    input is a Pandas DataFrame rather than a 2D NumPy array.
    '''
    max_station = ridership.iloc[0].idxmax()   # column label of the busiest station on the first day
    mean_for_max = ridership[max_station].mean()
    overall_mean = ridership.values.mean()
    return (overall_mean, mean_for_max)
mean_riders_for_max_station(ridership_df)
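  • A quick sketch of the label lookup used above (in recent pandas versions idxmax() returns the label of the maximum, while argmax() returns its integer position; the variable name first_day is illustrative):
first_day = ridership_df.iloc[0]   # first row as a Series, indexed by station name
print(first_day.idxmax())          # 'R006' -- the station with the most riders on the first day
print(first_day.argmax())          # 3 -- the integer position of that station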

2. NumPy array: find the station with the most riders on the first day, then compute its mean and the overall mean

import numpy as np

# Subway ridership for 5 stations on 10 different days
ridership = np.array([
    [   0,    0,    2,    5,    0],
    [1478, 3877, 3674, 2328, 2539],
    [1613, 4088, 3991, 6461, 2691],
    [1560, 3392, 3826, 4787, 2613],
    [1608, 4802, 3932, 4477, 2705],
    [1576, 3933, 3909, 4979, 2685],
    [  95,  229,  255,  496,  201],
    [   2,    0,    1,   27,    0],
    [1438, 3785, 3589, 4174, 2215],
    [1342, 4043, 4009, 4665, 3033]
])
def mean_riders_for_max_station(ridership):
    '''
    Fill in this function to find the station with the maximum riders on the
    first day, then return the mean riders per day for that station. Also
    return the mean ridership overall for comparison.
    
    Hint: NumPy's argmax() function might be useful:
    http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html
    '''
    max_station = ridership[0,:].argmax()
    mean_for_max = ridership[:,max_station].mean()
    overall_mean = ridership.mean()
    return (overall_mean, mean_for_max)
  • Both versions output:

(2342.5999999999999, 3239.9)
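  • For the NumPy version, the axis argument controls the direction of the aggregation; a minimal sketch on the array above:
print(ridership.mean(axis=0))   # one mean per station (per column)
print(ridership[:, 3].mean())   # mean for the fourth station (R006) only: 3239.9
print(ridership.mean())         # mean over every value in the array: 2342.6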

3. Vectorized operations on a DataFrame

# --- Quiz ---
# Cumulative entries and exits for one station for a few hours.
entries_and_exits = pd.DataFrame({
    'ENTRIESn': [3144312, 3144335, 3144353, 3144424, 3144594,
                 3144808, 3144895, 3144905, 3144941, 3145094],
    'EXITSn': [1088151, 1088159, 1088177, 1088231, 1088275,
               1088317, 1088328, 1088331, 1088420, 1088753]
})
# Function to compute the hourly entries and exits
def get_hourly_entries_and_exits(entries_and_exits):
    return entries_and_exits - entries_and_exits.shift(1)
get_hourly_entries_and_exits(entries_and_exits)
  • Output:
ENTRIESn    EXITSn
0   NaN     NaN
1   23.0    8.0
2   18.0    18.0
3   71.0    54.0
4   170.0   44.0
5   214.0   42.0
6   87.0    11.0
7   10.0    3.0
8   36.0    89.0
9   153.0   333.0
  • DataFrame.shift moves every row down (or up) by the given number of periods; see the sketch below.
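  • A minimal sketch of what shift(1) does, and of DataFrame.diff(), which combines the shift and the subtraction in one call (using the entries_and_exits frame above):
print(entries_and_exits.shift(1))   # every row moved down by one; the first row becomes NaN
print(entries_and_exits.diff())     # equivalent to entries_and_exits - entries_and_exits.shift(1)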

4. DataFrame applymap

  • Example usage
import pandas as pd
if True:
    df = pd.DataFrame({
        'a': [1, 2, 3],
        'b': [10, 20, 30],
        'c': [5, 10, 15]
    })
    def add_one(x):
        return x + 1
    print(df.applymap(add_one))
  • Output:
   a   b   c
0  2  11   6
1  3  21  11
2  4  31  16
  • Converting scores to letter grades

The conversion rule is:
90-100 -> A
80-89 -> B
70-79 -> C
60-69 -> D
0-59 -> F

  • Implementation:
grades_df = pd.DataFrame(
    data={'exam1': [43, 81, 78, 75, 89, 70, 91, 65, 98, 87],
          'exam2': [24, 63, 56, 56, 67, 51, 79, 46, 72, 60]},
    index=['Andre', 'Barry', 'Chris', 'Dan', 'Emilio', 
           'Fred', 'Greta', 'Humbert', 'Ivan', 'James']
)    
def convert_grade(grade):
    if grade >= 90:
        return 'A'
    elif grade >= 80:
        return 'B'
    elif grade >= 70:
        return 'C'
    elif grade >= 60:
        return 'D'
    else:
        return 'F'
def convert_grades(grades):
    return grades.applymap(convert_grade)
print(grades_df)
convert_grades(grades_df)
  • Output:
         exam1  exam2
Andre       43     24
Barry       81     63
Chris       78     56
Dan         75     56
Emilio      89     67
Fred        70     51
Greta       91     79
Humbert     65     46
Ivan        98     72
James       87     60
----------------------------
        exam1 exam2
Andre       F     F
Barry       B     D
Chris       C     F
Dan         C     F
Emilio      B     D
Fred        C     F
Greta       A     C
Humbert     D     F
Ivan        A     C
James       B     D
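  • The same binning can also be done without a hand-written if/elif chain by using pd.cut, which assigns each value to a labelled interval (a minimal sketch, assuming scores stay within 0-100; the bin edges are right-closed):
bins = [-1, 59, 69, 79, 89, 100]
labels = ['F', 'D', 'C', 'B', 'A']
print(grades_df.apply(lambda col: pd.cut(col, bins=bins, labels=labels)))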

5. DataFrame apply

Example 1:
import pandas as pd

grades_df = pd.DataFrame(
    data={'exam1': [43, 81, 78, 75, 89, 70, 91, 65, 98, 87],
          'exam2': [24, 63, 56, 56, 67, 51, 79, 46, 72, 60]},
    index=['Andre', 'Barry', 'Chris', 'Dan', 'Emilio', 
           'Fred', 'Greta', 'Humbert', 'Ivan', 'James']
)

# Change False to True for this block of code to see what it does

# DataFrame apply()
if True:
    def convert_grades_curve(exam_grades):
        # Pandas has a built-in function that will perform this calculation
        # This will give the bottom 0% to 10% of students the grade 'F',
        # 10% to 20% the grade 'D', and so on. You can read more about
        # the qcut() function here:
        # http://pandas.pydata.org/pandas-docs/stable/generated/pandas.qcut.html
        return pd.qcut(exam_grades,
                       [0, 0.1, 0.2, 0.5, 0.8, 1],
                       labels=['F', 'D', 'C', 'B', 'A'])
        
    # qcut() operates on a list, array, or Series. This is the
    # result of running the function on a single column of the
    # DataFrame.
    
    # qcut() does not work on DataFrames, but we can use apply()
    # to call the function on each column separately
    
def standardize(df):
    '''
    Fill in this function to standardize each column of the given
    DataFrame. To standardize a variable, convert each value to the
    number of standard deviations it is above or below the mean.
    '''
    return df.apply(standardize_column)
def standardize_column(column):
    return (column-column.mean())/column.std()
  • Print the curved grades for exam1:
    print(convert_grades_curve(grades_df['exam1']))
Andre      F
Barry      B
Chris      C
Dan        C
Emilio     B
Fred       C
Greta      A
Humbert    D
Ivan       A
James      B
Name: exam1, dtype: category
Categories (5, object): [F < D < C < B < A]
  • Convert all of grades_df to curved grades:
    print(grades_df.apply(convert_grades_curve))
        exam1 exam2
Andre       F     F
Barry       B     B
Chris       C     C
Dan         C     C
Emilio      B     B
Fred        C     C
Greta       A     A
Humbert     D     D
Ivan        A     A
James       B     B
  • Standardize:
    standardize(grades_df)

            exam1     exam2
Andre   -2.196525 -2.186335
Barry    0.208891  0.366571
Chris    0.018990 -0.091643
Dan     -0.170911 -0.091643
Emilio   0.715295  0.628408
Fred    -0.487413 -0.418938
Greta    0.841896  1.413917
Humbert -0.803916 -0.746234
Ivan     1.284999  0.955703
James    0.588694  0.170194
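  • Because arithmetic between a DataFrame and a Series aligns on the column labels by default, the same column-wise standardization can be written without a helper function (a minimal sketch):
print((grades_df - grades_df.mean()) / grades_df.std())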
Example 2:
  • 1. Print the maximum and the mean of each column:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'a': [4, 5, 3, 1, 2],
    'b': [20, 10, 40, 50, 30],
    'c': [25, 20, 5, 15, 10]
})

# Change False to True for this block of code to see what it does

# DataFrame apply() - use case 2
if True:   
    print(df.apply(np.mean))
    print(df.apply(np.max))
  • Output:
a     3.0
b    30.0
c    15.0
dtype: float64
a     5
b    50
c    25
dtype: int64
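  • These particular aggregations do not actually need apply(); the built-in DataFrame methods give the same per-column results (a sketch):
print(df.mean())   # same values as df.apply(np.mean)
print(df.max())    # same values as df.apply(np.max)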
  • 2. Return the second-largest value in each column
def second_largest_in_column(column):
    sorted_column = column.sort_values(ascending = False)
    return sorted_column.iloc[1]
def second_largest(df):
    '''
    Fill in this function to return the second-largest value of each 
    column of the input DataFrame.
    '''
    
    return df.apply(second_largest_in_column)
second_largest(df)
  • Output:
a     4
b    40
c    20
dtype: int64
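  • Series.nlargest offers an alternative to the explicit sort (a minimal sketch; the helper name is illustrative):
def second_largest_nlargest(column):
    # nlargest(2) keeps the two biggest values; the last of them is the second largest
    return column.nlargest(2).iloc[-1]

print(df.apply(second_largest_nlargest))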

6. Adding a Series to a DataFrame

  1. Add directly with +
import pandas as pd

# Adding using +
if True:
    s = pd.Series([1, 2, 3, 4])
    df = pd.DataFrame({
        0: [10, 20, 30, 40],
        1: [50, 60, 70, 80],
        2: [90, 100, 110, 120],
        3: [130, 140, 150, 160]
    })
    
    print(df)
    print('') # Create a blank line between outputs
    print(df + s)
  • Output:
    0   1    2    3
0  10  50   90  130
1  20  60  100  140
2  30  70  110  150
3  40  80  120  160

    0   1    2    3
0  11  52   93  134
1  21  62  103  144
2  31  72  113  154
3  41  82  123  164
  2. Add along the index
# Adding with axis='index'
if True:
    s = pd.Series([1, 2, 3, 4])
    df = pd.DataFrame({
        0: [10, 20, 30, 40],
        1: [50, 60, 70, 80],
        2: [90, 100, 110, 120],
        3: [130, 140, 150, 160]
    })
    
    print(df)
    print('') # Create a blank line between outputs
    print(df.add(s, axis='index'))
    # The functions sub(), mul(), and div() work similarly to add()
  • Output:
    0   1    2    3
0  10  50   90  130
1  20  60  100  140
2  30  70  110  150
3  40  80  120  160

    0   1    2    3
0  11  51   91  131
1  22  62  102  142
2  33  73  113  153
3  44  84  124  164
  3. Add along the columns
# Adding with axis='columns'
s = pd.Series([1,2,3,4])
df = pd.DataFrame({
    0: [10, 20, 30, 40],
    1: [50, 60, 70, 80],
    2: [90, 100, 110, 120],
    3: [130, 140, 150, 160]
})

print (df)
print ('') # Create a blank line between outputs
print (df.add(s, axis='columns'))
# The functions sub(), mul(), and div() work similarly to add()
  • Output:
0   1    2    3
0  10  50   90  130
1  20  60  100  140
2  30  70  110  150
3  40  80  120  160

    0   1    2    3
0  11  52   93  134
1  21  62  103  144
2  31  72  113  154
3  41  82  123  164
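  • The two axis options differ only in which labels the Series is matched against: by default (and with axis='columns') the Series index is aligned with the DataFrame's column labels, while axis='index' aligns it with the row labels. Labels that do not match produce NaN, as this sketch shows (the Series s2 is illustrative):
s2 = pd.Series([1, 2, 3, 4], index=['a', 'b', 'c', 'd'])
print(df + s2)   # the labels 'a'-'d' do not match the columns 0-3, so every value is NaN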

7. Standardizing the rows of a DataFrame

  • Data
grades_df = pd.DataFrame(
    data={'exam1': [43, 81, 78, 75, 89, 70, 91, 65, 98, 87],
          'exam2': [24, 63, 56, 56, 67, 51, 79, 46, 72, 60]},
    index=['Andre', 'Barry', 'Chris', 'Dan', 'Emilio', 
           'Fred', 'Greta', 'Humbert', 'Ivan', 'James']
)
  • Output of grades_df:
         exam1  exam2
Andre       43     24
Barry       81     63
Chris       78     56
Dan         75     56
Emilio      89     67
Fred        70     51
Greta       91     79
Humbert     65     46
Ivan        98     72
James       87     60
  • Output of grades_df.mean():
# By default the mean is computed down each column (along the index)
exam1    77.7
exam2    57.4
dtype: float64
  • Output of grades_df.mean(axis='columns'):
# axis='columns' computes the mean across each row
Andre      33.5
Barry      72.0
Chris      67.0
Dan        65.5
Emilio     78.0
Fred       60.5
Greta      85.0
Humbert    55.5
Ivan       85.0
James      73.5
dtype: float64
  • Compute each student's deviation of their two exam scores from their own mean, then standardize:
mean_diffs = grades_df.sub(grades_df.mean(axis='columns'), axis='index')
print(mean_diffs)

         exam1  exam2
Andre      9.5   -9.5
Barry      9.0   -9.0
Chris     11.0  -11.0
Dan        9.5   -9.5
Emilio    11.0  -11.0
Fred       9.5   -9.5
Greta      6.0   -6.0
Humbert    9.5   -9.5
Ivan      13.0  -13.0
James     13.5  -13.5

mean_diffs.div(grades_df.std(axis='columns'), axis='index')

            exam1     exam2
Andre    0.707107 -0.707107
Barry    0.707107 -0.707107
Chris    0.707107 -0.707107
Dan      0.707107 -0.707107
Emilio   0.707107 -0.707107
Fred     0.707107 -0.707107
Greta    0.707107 -0.707107
Humbert  0.707107 -0.707107
Ivan     0.707107 -0.707107
James    0.707107 -0.707107
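  • Every entry comes out as ±1/√2 ≈ ±0.707107 because each row has only two scores: each score sits half of their difference d away from the row mean (d/2), while the sample standard deviation of two values is d/√2, so the ratio is always 1/√2. The two steps can be wrapped into a single row-wise counterpart of the earlier standardize() function (a minimal sketch; the function name is illustrative):
def standardize_rows(df):
    # Subtract each row's mean, then divide by each row's (sample) standard deviation
    mean_diffs = df.sub(df.mean(axis='columns'), axis='index')
    return mean_diffs.div(df.std(axis='columns'), axis='index')

print(standardize_rows(grades_df))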

8. Using DataFrame groupby

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
values = np.array([1, 3, 2, 4, 1, 6, 4])
example_df = pd.DataFrame({
    'value': values,
    'even': values % 2 == 0,
    'above_three': values > 3 
}, index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])
  1. Output of print(example_df):
  above_three   even  value
a       False  False      1
b       False  False      3
c       False   True      2
d        True   True      4
e       False  False      1
f        True   True      6
g        True   True      4
  2. Group by even:
grouped_data = example_df.groupby('even')
    # The groups attribute is a dictionary mapping keys to lists of row indexes
print(grouped_data.groups)
  • Output:
{False: ['a', 'b', 'e'], True: ['c', 'd', 'f', 'g']}
  3. Group by even and above_three:
grouped_data = example_df.groupby(['even', 'above_three'])
print(grouped_data.groups)
  • Output:
{(True, False): ['c'], (False, False): ['a', 'b', 'e'], (True, True): ['d', 'f', 'g']}
  4. Sum each group
grouped_data = example_df.groupby('even')
print(grouped_data.sum())
  • Output:
       above_three  value
even                     
False          0.0      5
True           3.0     16
  • Taking the sum of a single column (two equivalent ways)
grouped_data = example_df.groupby('even')
# You can take one or more columns from the result DataFrame
print(grouped_data.sum()['value'])
print ('\n') # Blank line to separate results
print(grouped_data['value'].sum())
  • Both print statements produce the same result:
even
False     5
True     16
Name: value, dtype: int32
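  • A groupby object can also run several aggregations at once with agg (a minimal sketch):
print(grouped_data['value'].agg(['sum', 'mean', 'max']))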
  5. Standardize within each group and find each group's second-largest value
import numpy as np
import pandas as pd

values = np.array([1, 3, 2, 4, 1, 6, 4])
example_df = pd.DataFrame({
    'value': values,
    'even': values % 2 == 0,
    'above_three': values > 3 
}, index=['a', 'b', 'c', 'd', 'e', 'f', 'g'])

# Change False to True for each block of code to see what it does

# Standardize each group
if True:
    def standardize(xs):
        return (xs - xs.mean()) / xs.std()
    grouped_data = example_df.groupby('even')
    print(grouped_data.groups)
    print(grouped_data['value'].apply(standardize))
if True:
    def second_largest(xs):
        # Series.sort() was removed from pandas; sort_values() does the same thing
        sorted_xs = xs.sort_values(ascending=False)
        return sorted_xs.iloc[1]
    grouped_data = example_df.groupby('even')
    print(grouped_data['value'].apply(second_largest))
  • Output:
# groups after grouping by even
{False: ['a', 'b', 'e'], True: ['c', 'd', 'f', 'g']}
# standardized values within each group
a   -0.577350
b    1.154701
c   -1.224745
d    0.000000
e   -0.577350
f    1.224745
g    0.000000
Name: value, dtype: float64
# second-largest value in each group
even
False    1
True     4
Name: value, dtype: int64
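  • For the standardization case, groupby also provides transform(), which applies the function group by group and returns a result aligned with the original index, so it can be attached back to the DataFrame (a minimal sketch; the column name is illustrative):
example_df['value_standardized'] = grouped_data['value'].transform(standardize)
print(example_df)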
  6. Hourly entries and exits for each unit
ridership_df = pd.DataFrame({
    'UNIT': ['R051', 'R079', 'R051', 'R079', 'R051', 'R079', 'R051', 'R079', 'R051'],
    'TIMEn': ['00:00:00', '02:00:00', '04:00:00', '06:00:00', '08:00:00', '10:00:00', '12:00:00', '14:00:00', '16:00:00'],
    'ENTRIESn': [3144312, 8936644, 3144335, 8936658, 3144353, 8936687, 3144424, 8936819, 3144594],
    'EXITSn': [1088151, 13755385,  1088159, 13755393,  1088177, 13755598, 1088231, 13756191,  1088275]
})
def hours_for_group(entries_and_exits):
    return entries_and_exits-entries_and_exits.shift(1)
ridership_df.groupby('UNIT')[['ENTRIESn','EXITSn']].apply(hours_for_group)
  • Output:
    ENTRIESn EXITSn
0   NaN     NaN
1   NaN     NaN
2   23.0    8.0
3   14.0    8.0
4   18.0    18.0
5   29.0    205.0
6   71.0    54.0
7   132.0   593.0
8   170.0   44.0
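  • Within each UNIT group the first reading has nothing before it to subtract, so each group contributes one row of NaN (rows 0 and 1 above). The per-group differencing can also be written with the groupby diff() method (a minimal sketch):
print(ridership_df.groupby('UNIT')[['ENTRIESn', 'EXITSn']].diff())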

9. Merging DataFrames

import pandas as pd

subway_df = pd.DataFrame({
    'UNIT': ['R003', 'R003', 'R003', 'R003', 'R003', 'R004', 'R004', 'R004',
             'R004', 'R004'],
    'DATEn': ['05-01-11', '05-02-11', '05-03-11', '05-04-11', '05-05-11',
              '05-01-11', '05-02-11', '05-03-11', '05-04-11', '05-05-11'],
    'hour': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    'ENTRIESn': [ 4388333,  4388348,  4389885,  4391507,  4393043, 14656120,
                 14656174, 14660126, 14664247, 14668301],
    'EXITSn': [ 2911002,  2911036,  2912127,  2913223,  2914284, 14451774,
               14451851, 14454734, 14457780, 14460818],
    'latitude': [ 40.689945,  40.689945,  40.689945,  40.689945,  40.689945,
                  40.69132 ,  40.69132 ,  40.69132 ,  40.69132 ,  40.69132 ],
    'longitude': [-73.872564, -73.872564, -73.872564, -73.872564, -73.872564,
                  -73.867135, -73.867135, -73.867135, -73.867135, -73.867135]
})

weather_df = pd.DataFrame({
    'DATEn': ['05-01-11', '05-01-11', '05-02-11', '05-02-11', '05-03-11',
              '05-03-11', '05-04-11', '05-04-11', '05-05-11', '05-05-11'],
    'daten': ['05-01-11', '05-01-11', '05-02-11', '05-02-11', '05-03-11',
              '05-03-11', '05-04-11', '05-04-11', '05-05-11', '05-05-11'],
    'hour': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    'latitude': [ 40.689945,  40.69132 ,  40.689945,  40.69132 ,  40.689945,
                  40.69132 ,  40.689945,  40.69132 ,  40.689945,  40.69132 ],
    'longitude': [-73.872564, -73.867135, -73.872564, -73.867135, -73.872564,
                  -73.867135, -73.872564, -73.867135, -73.872564, -73.867135],
    'pressurei': [ 30.24,  30.24,  30.32,  30.32,  30.14,  30.14,  29.98,  29.98,
                   30.01,  30.01],
    'fog': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    'rain': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    'tempi': [ 52. ,  52. ,  48.9,  48.9,  54. ,  54. ,  57.2,  57.2,  48.9,  48.9],
    'wspdi': [  8.1,   8.1,   6.9,   6.9,   3.5,   3.5,  15. ,  15. ,  15. ,  15. ]
})
subway_df.merge(weather_df, on=['DATEn', 'hour', 'latitude', 'longitude'], how='inner')
subway_df.merge(weather_df,
                left_on=['DATEn', 'hour', 'latitude', 'longitude'],
                right_on=['daten', 'hour', 'latitude', 'longitude'],
                how='inner')
  • Output: the merged DataFrame (shown as two screenshots, "print1" and "print2", in the original post; not reproduced here).
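  • Both calls perform an inner join: only rows whose key values appear in both frames are kept, and left_on/right_on pair up key columns whose names differ between the two frames (DATEn in subway_df vs daten in weather_df). A left join would instead keep every subway row whether or not matching weather data exists (a minimal sketch; the column selection is only for readability):
merged_left = subway_df.merge(
    weather_df,
    on=['DATEn', 'hour', 'latitude', 'longitude'],
    how='left'
)
print(merged_left[['UNIT', 'DATEn', 'tempi', 'rain']].head())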
