kaggle-Santander Customer Transaction Prediction: Summary

1 Plotting

sns.kdeplot() — kernel density estimate (KDE) plot
sns.distplot() — combines matplotlib's hist() with the kernel density estimation of kdeplot (deprecated in recent seaborn releases; sns.histplot(..., kde=True) is the current equivalent)
See the introductory Seaborn series on kdeplot and distplot.
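As a minimal sketch of what kdeplot computes under the hood, the same kernel density estimate can be reproduced with scipy's gaussian_kde (the data below is made up for illustration):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Illustrative sample: two overlapping Gaussian clusters
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 500), rng.normal(4, 1, 500)])

# gaussian_kde is the same kind of estimator that sns.kdeplot draws
kde = gaussian_kde(data)
grid = np.linspace(data.min() - 1, data.max() + 1, 200)
density = kde(grid)

# A density is non-negative and integrates to roughly 1
area = float(np.sum(density) * (grid[1] - grid[0]))
print(round(area, 2))
```

Plotting `density` against `grid` reproduces the smooth curve that sns.kdeplot overlays on the histogram.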

2 Permutation Importance

When building tree models (XGBoost, LightGBM, etc.), we can find out which variables matter through the model's feature_importances_ attribute. LightGBM's feature_importances_, for example, can measure a feature either by how many times it is used in splits or by the total gain of the splits that use it. Different criteria usually yield different importance rankings, so I normally cross-check features against several criteria: a feature that ranks as important under all of them is likely to have genuine predictive power for the label.
Permutation importance takes a different approach: if replacing a feature with random values (shuffling it) makes the model's performance drop sharply, the feature is important; if not, it isn't.
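The idea can be illustrated directly, without eli5, by shuffling one validation column at a time and measuring the drop in score (toy data; only the first feature is informative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 3))
# Only the first feature drives the label; the other two are noise
y = (X[:, 0] > 0).astype(int)

X_tr, X_va, y_tr, y_va = X[:800], X[800:], y[:800], y[800:]
clf = LogisticRegression().fit(X_tr, y_tr)
base = accuracy_score(y_va, clf.predict(X_va))

drops = []
for j in range(X.shape[1]):
    Xp = X_va.copy()
    rng.shuffle(Xp[:, j])  # destroy the information in column j
    drops.append(base - accuracy_score(y_va, clf.predict(Xp)))

print(drops)  # the informative feature shows by far the largest drop
```

The eli5 code below implements the same principle, repeating the shuffle n_iter times and averaging the drops.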

import pandas as pd
import lightgbm as lgb
import eli5
from eli5.sklearn import PermutationImportance
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectFromModel

def PermutationImportance_(clf, X_train, y_train, X_valid, X_test):

    # Shuffle each feature n_iter times and measure the resulting drop in CV score
    perm = PermutationImportance(clf, n_iter=5, random_state=1024, cv=5)
    perm.fit(X_train, y_train)

    result_ = {'var': X_train.columns.values,
               'feature_importances_': perm.feature_importances_,
               'feature_importances_std_': perm.feature_importances_std_}
    feature_importances_ = pd.DataFrame(result_, columns=['var', 'feature_importances_', 'feature_importances_std_'])
    feature_importances_ = feature_importances_.sort_values('feature_importances_', ascending=False)
    #eli5.show_weights(perm, feature_names=X_train.columns.tolist(), top=500)  # visualize the result

    # Keep only the features whose permutation importance is >= 0
    sel = SelectFromModel(perm, threshold=0.00, prefit=True)
    X_train_ = sel.transform(X_train)
    X_valid_ = sel.transform(X_valid)
    X_test_ = sel.transform(X_test)

    return feature_importances_, X_train_, X_valid_, X_test_

# Permutation importance under three different base models
model_1 = RandomForestClassifier(random_state=1024)
feature_importances_1, X_train_1, X_valid_1, X_test_1 = PermutationImportance_(model_1, X_train, y_train, X_valid, X_test)

model_2 = lgb.LGBMClassifier(objective='binary', random_state=1024)
feature_importances_2, X_train_2, X_valid_2, X_test_2 = PermutationImportance_(model_2, X_train, y_train, X_valid, X_test)

model_3 = LogisticRegression(random_state=1024)
feature_importances_3, X_train_3, X_valid_3, X_test_3 = PermutationImportance_(model_3, X_train, y_train, X_valid, X_test)

3 Partial dependence plots

Partial dependence plots show how each variable or predictor affects the model's predictions. They are useful for questions such as:

  1. How much of the wage difference between men and women is due purely to gender, rather than to differences in education or work experience?
  2. Controlling for house characteristics, what effect do longitude and latitude have on house prices? In other words, how would identically sized houses be priced in different areas, even though the houses in those areas actually differ in size?
  3. Are health differences between two groups caused by differences in diet, or by other factors?
# Plot partial dependence to see the relationship between the target y and the variables
# (sklearn.ensemble.partial_dependence has been removed; use sklearn.inspection instead)
from sklearn.inspection import PartialDependenceDisplay

my_plots = PartialDependenceDisplay.from_estimator(my_model,
                                                   imputed_X,
                                                   features=[0, 2],
                                                   feature_names=clo_to_use)

4 tqdm

from tqdm import tqdm_notebook as tqdm  # in recent tqdm versions: from tqdm.notebook import tqdm

tqdm is a fast, extensible progress bar for Python. It adds a progress indicator to long-running loops: simply wrap any iterable as tqdm(iterator).
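A minimal usage example with the plain (non-notebook) variant, which works in any environment:

```python
from tqdm import tqdm

total = 0
for i in tqdm(range(1000), desc="summing"):  # progress bar is printed to stderr
    total += i
print(total)  # → 499500
```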

5 Feature engineering

Mark each value that is unique within its column (i.e. occurs exactly once) with a 1.
If a sample contains at least one such unique value, treat it as a real sample; if none of its feature values are unique, treat it as a fake (synthetic) sample.
Concatenate the real test samples with the actual training samples.

# df_test here is a NumPy array (e.g. df_test = df_test.values)
unique_count = np.zeros_like(df_test)
for feature in range(df_test.shape[1]):
    _, index_, count_ = np.unique(df_test[:, feature], return_index=True, return_counts=True)
    unique_count[index_[count_ == 1], feature] += 1

# Samples that contain at least one unique value are real; the others are fake
real_samples_indexes = np.argwhere(np.sum(unique_count, axis=1) > 0)[:, 0]
synthetic_samples_indexes = np.argwhere(np.sum(unique_count, axis=1) == 0)[:, 0]
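The logic can be checked on a toy matrix (values made up; row 1 contains no value that is unique within its column, so it is flagged as fake):

```python
import numpy as np

toy = np.array([[1, 5],
                [2, 5],
                [2, 7]])

unique_count = np.zeros_like(toy)
for feature in range(toy.shape[1]):
    _, index_, count_ = np.unique(toy[:, feature], return_index=True, return_counts=True)
    unique_count[index_[count_ == 1], feature] += 1

real = np.argwhere(unique_count.sum(axis=1) > 0)[:, 0]
fake = np.argwhere(unique_count.sum(axis=1) == 0)[:, 0]
print(real, fake)  # → [0 2] [1]
```

Row 0 is real because 1 occurs only once in column 0, row 2 because 7 occurs only once in column 1; row 1 holds only repeated values.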

"vc"列:重复数值的个数,大于10次的取10
"sum"列:出现次数大于1的,用vc列的值乘以(原值-均值)

# df is train and test combined, so counts are taken over the full data
for feat in feats:
    temp = df[feat].value_counts(dropna=True)
    df_train[feat+"vc"] = df_train[feat].map(temp).map(lambda x: min(10, x)).astype(np.uint8)
    df_test[feat+"vc"] = df_test[feat].map(temp).map(lambda x: min(10, x)).astype(np.uint8)
    print(feat, temp.shape[0], df_train[feat+"vc"].map(lambda x: int(x > 2)).sum(), df_train[feat+"vc"].map(lambda x: int(x > 3)).sum())
    # Centered value, kept only where the value occurs more than once
    df_train[feat+"sum"] = ((df_train[feat] - df[feat].mean()) * df_train[feat+"vc"].map(lambda x: int(x > 1))).astype(np.float32)
    df_test[feat+"sum"] = ((df_test[feat] - df[feat].mean()) * df_test[feat+"vc"].map(lambda x: int(x > 1))).astype(np.float32)
    # Raw value, kept only above higher repeat-count thresholds
    df_train[feat+"sum2"] = (df_train[feat] * df_train[feat+"vc"].map(lambda x: int(x > 2))).astype(np.float32)
    df_test[feat+"sum2"] = (df_test[feat] * df_test[feat+"vc"].map(lambda x: int(x > 2))).astype(np.float32)
    df_train[feat+"sum3"] = (df_train[feat] * df_train[feat+"vc"].map(lambda x: int(x > 4))).astype(np.float32)
    df_test[feat+"sum3"] = (df_test[feat] * df_test[feat+"vc"].map(lambda x: int(x > 4))).astype(np.float32)
# FREQUENCY ENCODE: replace each value by its occurrence count in the training data
def encode_FE(df, col, test):
    cv = df[col].value_counts()
    nm = col + '_FE'
    df[nm] = df[col].map(cv)
    test[nm] = test[col].map(cv)
    test[nm].fillna(0, inplace=True)  # values unseen in train get count 0
    # Use the smallest unsigned integer dtype that can hold the counts
    if cv.max() <= 255:
        df[nm] = df[nm].astype('uint8')
        test[nm] = test[nm].astype('uint8')
    else:
        df[nm] = df[nm].astype('uint16')
        test[nm] = test[nm].astype('uint16')
    return

# Count frequencies over train plus the real test samples only
test['target'] = -1
comb = pd.concat([train, test.loc[real_samples_indexes]], axis=0, sort=True)
for i in range(200): encode_FE(comb, 'var_'+str(i), test)
train = comb[:len(train)]; del comb
print('Added 200 new magic features!')
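Frequency encoding on throwaway toy frames (data made up), mirroring what encode_FE does for each var_ column:

```python
import pandas as pd

toy_train = pd.DataFrame({'var_0': [3.1, 3.1, 2.0]})
toy_test = pd.DataFrame({'var_0': [3.1, 9.9]})  # 9.9 never appears in train

cv = toy_train['var_0'].value_counts()
toy_train['var_0_FE'] = toy_train['var_0'].map(cv)
# Unseen test values map to NaN, which becomes count 0
toy_test['var_0_FE'] = toy_test['var_0'].map(cv).fillna(0).astype('uint8')
print(toy_train['var_0_FE'].tolist(), toy_test['var_0_FE'].tolist())  # → [2, 2, 1] [2, 0]
```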
