The Kaggle ICR competition is currently underway. It is a typical data-mining competition and a good entry point for beginners. This article walks through a baseline ICR solution.
Competition: ICR - Identifying Age-Related Conditions
Task: data mining (binary classification)
Competition page: https://www.kaggle.com/competitions/icr-identify-age-related-conditions
Code: https://www.kaggle.com/code/chaitanyagiri/icr-2023-single-lgbm-0-12-cv-0-16-lb
The competition data contains over fifty anonymized health characteristics linked to three age-related conditions. The goal is to predict whether a subject has been diagnosed with any of these conditions - a binary classification problem.
train.csv - the training set
test.csv - the test set
greeks.csv - metadata for the training set
import numpy as np
import pandas as pd

COMP_PATH = "/kaggle/input/icr-identify-age-related-conditions"
train = pd.read_csv(f"{COMP_PATH}/train.csv")
test = pd.read_csv(f"{COMP_PATH}/test.csv")
sample_submission = pd.read_csv(f"{COMP_PATH}/sample_submission.csv")
greeks = pd.read_csv(f"{COMP_PATH}/greeks.csv")
The competition is scored with balanced log loss. To stay consistent with the leaderboard, you can define a custom evaluation metric (and, if you like, a custom objective function as well).
def competition_log_loss(y_true, y_pred):
    # Mean log loss per class, then a plain average of the two classes
    N_0 = np.sum(1 - y_true)
    N_1 = np.sum(y_true)
    p_1 = np.clip(y_pred, 1e-15, 1 - 1e-15)
    p_0 = 1 - p_1
    log_loss_0 = -np.sum((1 - y_true) * np.log(p_0)) / N_0
    log_loss_1 = -np.sum(y_true * np.log(p_1)) / N_1
    return (log_loss_0 + log_loss_1) / 2
def balanced_log_loss(y_true, y_pred):
    # Alternative formulation: weight each class's summed log loss by the
    # inverse of its sample count, then normalise by the total sample count
    N_0 = np.sum(1 - y_true)
    N_1 = np.sum(y_true)
    p_1 = np.clip(y_pred, 1e-15, 1 - 1e-15)
    p_0 = 1 - p_1
    log_loss_0 = -np.sum((1 - y_true) * np.log(p_0))
    log_loss_1 = -np.sum(y_true * np.log(p_1))
    w_0 = 1 / N_0
    w_1 = 1 / N_1
    balanced_log_loss = 2 * (w_0 * log_loss_0 + w_1 * log_loss_1) / (w_0 + w_1)
    return balanced_log_loss / (N_0 + N_1)
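As a quick, self-contained sanity check (the toy labels and probabilities below are made up for illustration), the competition metric averages the per-class mean log losses, so for symmetric predictions both classes contribute the same amount:

```python
import numpy as np

def competition_log_loss(y_true, y_pred):
    # Mean log loss per class, then a plain average of the two classes
    N_0 = np.sum(1 - y_true)
    N_1 = np.sum(y_true)
    p_1 = np.clip(y_pred, 1e-15, 1 - 1e-15)
    log_loss_0 = -np.sum((1 - y_true) * np.log(1 - p_1)) / N_0
    log_loss_1 = -np.sum(y_true * np.log(p_1)) / N_1
    return (log_loss_0 + log_loss_1) / 2

# Balanced toy data: two negatives, two positives
y_true = np.array([0, 0, 1, 1])
y_pred = np.array([0.1, 0.2, 0.8, 0.9])

score = competition_log_loss(y_true, y_pred)
print(round(score, 5))  # -> 0.16425
```

Because the predictions mirror each other across the two classes here, the per-class losses are identical and the score equals the ordinary mean log loss.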
Because the class distribution in the dataset is imbalanced, it is advisable to stratify the validation split by the original metadata (greeks) or by the competition label.
from sklearn.model_selection import StratifiedKFold

# Stratify on the fine-grained metadata label so every fold sees the same mix
kf = StratifiedKFold(n_splits=5, random_state=42, shuffle=True)
df = train.copy()
df['fold'] = -1
for fold, (train_idx, test_idx) in enumerate(kf.split(df, greeks['Alpha'])):
    df.loc[test_idx, 'fold'] = fold

# Check the class balance within each fold
df.groupby('fold')["Class"].value_counts()
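To see what the stratified split guarantees, here is a small self-contained illustration on synthetic imbalanced labels (the 90/10 ratio is made up for demonstration, not taken from the competition data):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Synthetic imbalanced target: 90 negatives, 10 positives
y = np.array([0] * 90 + [1] * 10)
X = np.zeros((100, 1))  # features are irrelevant to the split itself

skf = StratifiedKFold(n_splits=5, random_state=42, shuffle=True)
for fold, (_, test_idx) in enumerate(skf.split(X, y)):
    # Every validation fold keeps the original 9:1 class ratio
    print(fold, np.bincount(y[test_idx]))
```

Each fold holds 20 samples with 18 negatives and 2 positives, so the per-fold class ratio matches the full dataset exactly.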
Since this is a classic tabular data-mining problem, LightGBM is a natural choice. During training you can tune hyperparameters and add early stopping.
Record each fold's validation score, and use the per-fold scores as weights for the final ensemble - a simple form of weighted model averaging.
from lightgbm import LGBMClassifier

final_valid_predictions = {}
test_preds = []
weights = []
for fold in range(5):
    train_df = df[df['fold'] != fold]
    valid_df = df[df['fold'] == fold]
    valid_ids = valid_df.Id.values.tolist()
    X_train, y_train = train_df.drop(['Id', 'Class', 'fold'], axis=1), train_df['Class']
    X_valid, y_valid = valid_df.drop(['Id', 'Class', 'fold'], axis=1), valid_df['Class']

    # Train and validate a LightGBM model with early stopping
    lgb = LGBMClassifier(boosting_type='goss', learning_rate=0.06733232950390658, n_estimators=50000,
                         early_stopping_round=300, random_state=42,
                         subsample=0.6970532011679706,
                         colsample_bytree=0.6055755840633003,
                         class_weight='balanced',
                         metric='none', is_unbalance=True, max_depth=8)
    lgb.fit(X_train, y_train, eval_set=[(X_valid, y_valid)],
            eval_metric=lambda yt, yp: ('balanced_log_loss', balanced_log_loss(yt, yp), False))
    valid_preds = lgb.predict_proba(X_valid)
    final_valid_predictions.update(dict(zip(valid_ids, valid_preds)))
    test_preds.append(lgb.predict_proba(test.drop(['Id'], axis=1)))

    # Store each fold's weight: the inverse of its validation loss
    balanced_logloss = balanced_log_loss(y_valid, valid_preds[:, 1])
    weights.append(1 / balanced_logloss)
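The inverse-loss weighting used above can be sketched in isolation; the per-fold losses and predictions below are made-up numbers purely for illustration:

```python
import numpy as np

# Hypothetical per-fold validation losses (lower is better)
fold_losses = np.array([0.20, 0.25, 0.40, 0.22, 0.30])

# Inverse-loss weights, normalised to sum to 1:
# folds with smaller loss get a larger share of the ensemble
weights = 1.0 / fold_losses
weights = weights / weights.sum()

# Hypothetical positive-class predictions from 5 folds for 3 test rows
fold_preds = np.array([
    [0.10, 0.80, 0.45],
    [0.12, 0.75, 0.50],
    [0.08, 0.82, 0.40],
    [0.11, 0.78, 0.48],
    [0.15, 0.70, 0.55],
])

# Weighted average across folds -> final ensemble prediction per test row
final_pred = weights @ fold_preds
print(weights.round(3), final_pred.round(3))
```

The best-scoring fold (loss 0.20) receives the largest weight, and every final prediction stays within the range spanned by the individual folds.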
final_valid_predictions = pd.DataFrame.from_dict(final_valid_predictions, orient="index").reset_index()
final_valid_predictions.columns = ['Id', 'class_0', 'class_1']
final_valid_predictions.to_csv(r"oof.csv", index=False)
# Weighted average of the per-fold test predictions
# (weights normalised so they sum to 1)
weights = np.array(weights) / np.sum(weights)
test_preds = np.sum([w * p for w, p in zip(weights, test_preds)], axis=0)

test_dict = {}
test_dict.update(dict(zip(test.Id.values.tolist(), test_preds)))
submission = pd.DataFrame.from_dict(test_dict, orient="index").reset_index()
submission.columns = ['Id', 'class_0', 'class_1']
submission.to_csv(r"submission.csv", index=False)
submission