# [Tips] How to define a custom eval_metric (feval) in xgboost?
Please credit the original article when reposting: http://blog.csdn.net/weixin_38100489/article/details/78714251
Where the problem comes from: xgboost's built-in eval_metric options do not cover everything you might need. For example RMSLE, which is used in some Kaggle competitions, is not among them, so you have to define the metric yourself.
http://xgboost.readthedocs.io/en/latest/parameter.html
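For reference, RMSLE (root mean squared logarithmic error) is defined as

$$\mathrm{RMSLE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\bigl(\log(1+\hat{y}_i)-\log(1+y_i)\bigr)^2}$$

where y_i are the true labels and \hat{y}_i the predictions. sklearn's mean_squared_log_error computes exactly the quantity under the square root, which is why the code below only has to wrap it in math.sqrt.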
Here is the code (an example that turns the eval metric into RMSLE):
import math
import xgboost as xgb
from sklearn.metrics import mean_squared_log_error

dtrain = xgb.DMatrix(train_x, train_y)
dtest = xgb.DMatrix(test_x, test_y_real)
watchlist = [(dtrain, 'train'), (dtest, 'test')]
#############
def evalerror(preds, dtrain):  # written by myself
    # feval receives the current predictions and the DMatrix being evaluated;
    # it must return a pair (metric_name, result)
    labels = dtrain.get_label()
    return 'rmsle', math.sqrt(mean_squared_log_error(labels, preds))
#############
params = {
    'objective': 'reg:gamma',
    'eta': 0.1,
    'seed': 0,
    # 'eval_metric': 'rmse',  # not needed once feval is passed; can be removed
    'missing': -999,
    'silent': 1,
    'gamma': 1,
    'subsample': 0.5,
    'alpha': 1,
    'max_depth': 10,
    'min_child_weight': 1
}
num_rounds = 500
clf = xgb.train(params, dtrain, num_rounds, watchlist, feval=evalerror)
That's all it takes. The key is 'feval=': inside xgb.train the argument is named feval, not eval_metric!
(I've tweaked the parameter values above quite a bit while experimenting, so don't read too much into them.)
xgb.train(params, dtrain, num_boost_round, evals, obj, feval, maximize, early_stopping_rounds, evals_result, verbose_eval, learning_rates, xgb_model, callbacks)
feval is the sixth argument in this signature.
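A side note of my own (not from the original post): when a custom feval is combined with early stopping, xgb.train's maximize flag tells xgboost whether a larger value of your metric is better. RMSLE should be minimized, so a rough sketch, assuming the same dtrain/watchlist/params/evalerror as above, would look like this:

# Sketch only: early stopping watches the last entry in watchlist,
# and maximize=False because a smaller RMSLE is better.
clf = xgb.train(params, dtrain, num_rounds, watchlist,
                feval=evalerror,
                maximize=False,
                early_stopping_rounds=30)

If I remember correctly, xgb.cv accepts the same feval (and maximize) keywords, so the same function can be reused for cross-validation.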
The idea comes from lau phunter's answer on Zhihu (https://www.zhihu.com/question/39496618) and the official demo: https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_objective.py
———————————————————————————————————————————————————————
~ Getting this to work took quite a bit of sweat and tears ~
My Zhihu: https://www.zhihu.com/people/xu-jian-zhi/posts