Setting the number of CPU cores for XGBoost

By default XGBoost runs on all available cores. The relevant documentation reads:

nthread [default to maximum number of threads available if not set]

Number of parallel threads used to run XGBoost

We can also set this explicitly through the nthread parameter.
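As an aside (this is my addition, not from the original post): XGBoost's CPU parallelism is built on OpenMP, so the standard OMP_NUM_THREADS environment variable should also cap the thread count, provided it is set before the library is imported. A minimal sketch:

```python
import os

# Assumption: XGBoost uses OpenMP for CPU parallelism, so setting
# OMP_NUM_THREADS before importing xgboost caps its thread count.
os.environ["OMP_NUM_THREADS"] = "4"

# import xgboost as xgb  # the import must happen AFTER the variable is set

print(os.environ["OMP_NUM_THREADS"])
```

This is handy when you cannot touch the model code (e.g. a third-party script) but can control its environment.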

I gave it a try: timing runs with different core counts shows that more cores really is much faster.

import datetime
from multiprocessing import cpu_count

import xgboost as xgb
from sklearn.datasets import make_hastie_10_2

print('total cores is %d' % cpu_count())
NTHREAD = 8  # edit this value between runs to compare core counts

if __name__ == "__main__":
    starttime = datetime.datetime.now()
    X, y = make_hastie_10_2(n_samples=200000, random_state=42)
    # limit XGBoost to NTHREAD parallel threads
    xgb_model = xgb.XGBRegressor(nthread=NTHREAD)
    #xgb_model = xgb.XGBRegressor()  # default: use every core
    xgb_model.fit(X, y)
    endtime = datetime.datetime.now()
    print('used %d cores' % NTHREAD)
    print('spend  time----:%d s' % (endtime - starttime).seconds)

The results:

[root@localhost opt]# python tt.py
total cores is 8
used 1 cores
spend  time----:24 s
[root@localhost opt]# python tt.py
total cores is 8
used 2 cores
spend  time----:13 s
[root@localhost opt]# python tt.py
total cores is 8
used 3 cores
spend  time----:10 s
[root@localhost opt]# python tt.py
total cores is 8
used 4 cores
spend  time----:8 s
[root@localhost opt]# python tt.py
total cores is 8
used 5 cores
spend  time----:5 s
[root@localhost opt]# python tt.py
total cores is 8
used 6 cores
spend  time----:6 s
[root@localhost opt]# python tt.py
total cores is 8
used 7 cores
spend  time----:6 s
[root@localhost opt]# python tt.py
total cores is 8
used 8 cores
spend  time----:6 s
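For context (my own arithmetic on the timings above, not part of the original run), we can compute the speedup relative to the single-core run and the per-core efficiency:

```python
# Wall-clock seconds measured above, keyed by core count
times = {1: 24, 2: 13, 3: 10, 4: 8, 5: 5, 6: 6, 7: 6, 8: 6}

for n in sorted(times):
    speedup = times[1] / times[n]  # relative to the 1-core run
    efficiency = speedup / n       # speedup per core
    print('%d cores: %.1fx speedup, %.0f%% efficiency'
          % (n, speedup, efficiency * 100))
```

The speedup flattens out at around 5 cores (6, 7 and 8 cores all land near 6 seconds; 5 cores beating 6 is most likely timing noise), so on this workload adding cores beyond that point buys little.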

 
