sklearn.model_selection.train_test_split(
*arrays,
test_size=None,
train_size=None,
random_state=None,
shuffle=True,
stratify=None,
)
The train_test_split function splits a dataset into a training set and a test set. The *arrays parameter is the data to be split; it accepts lists, numpy arrays, scipy sparse matrices, or pandas DataFrames.
If test_size is a float, it is the fraction of the dataset placed in the test set; if it is an integer, it is the absolute number of test samples. Likewise, if train_size is a float, it is the fraction of the dataset placed in the training set; if it is an integer, it is the absolute number of training samples.
random_state seeds the random number generator, and shuffle controls whether the data is shuffled before splitting.
The stratify parameter preserves the class distribution of the original data in the split. For example, with 100 samples, 80 of class A and 20 of class B, calling train_test_split(…, test_size=0.25, stratify=y) yields:
training: 75 samples, 60 of class A and 15 of class B.
testing: 25 samples, 20 of class A and 5 of class B.
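That 80/20 example can be reproduced directly (a minimal sketch; the X and y arrays here are made up for illustration, and np.bincount is only used to count samples per class — since 80 and 20 are both divisible by 4, the stratified counts come out exact):
>>> import numpy as np
>>> from sklearn import model_selection
>>> X = np.arange(100).reshape(100, 1)    # 100 dummy samples
>>> y = np.array([0] * 80 + [1] * 20)     # 80 of class A (0), 20 of class B (1)
>>> X_train, X_test, y_train, y_test = model_selection.train_test_split(
...     X, y, test_size=0.25, stratify=y, random_state=0)
>>> np.bincount(y_train)
array([60, 15])
>>> np.bincount(y_test)
array([20,  5])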
>>> from sklearn import model_selection, datasets
>>> data, labels = datasets.load_iris(return_X_y=True)
>>> train_data, test_data = model_selection.train_test_split(data, test_size=0.4)
>>> len(train_data)
90
>>> len(test_data)
60
>>> train_labels, test_labels = model_selection.train_test_split(labels, test_size=0.4)
>>> train_labels
array([0, 1, 0, 0, 0, 0, 1, 2, 0, 0, 2, 2, 2, 2, 1, 0, 0, 1, 1, 2, 0, 0,
2, 0, 1, 1, 1, 1, 0, 1, 2, 2, 1, 1, 0, 1, 2, 2, 2, 0, 2, 2, 1, 1,
1, 1, 2, 1, 2, 0, 0, 2, 0, 0, 1, 0, 2, 2, 0, 1, 2, 0, 1, 0, 2, 2,
0, 1, 0, 1, 1, 2, 1, 1, 1, 0, 2, 0, 1, 2, 1, 0, 0, 0, 1, 2, 0, 0,
0, 0])
>>> train_labels, test_labels = model_selection.train_test_split(labels, test_size=0.4, random_state=10)
>>> train_labels
array([0, 0, 2, 1, 2, 0, 2, 0, 1, 1, 0, 2, 2, 2, 2, 2, 0, 1, 2, 1, 0, 2,
1, 1, 0, 0, 0, 1, 2, 2, 1, 0, 0, 0, 2, 2, 1, 1, 2, 2, 2, 2, 1, 0,
0, 1, 0, 0, 2, 1, 0, 0, 0, 1, 0, 1, 0, 1, 2, 0, 1, 1, 2, 0, 2, 0,
1, 1, 2, 2, 0, 1, 2, 2, 1, 1, 2, 0, 2, 0, 0, 1, 0, 2, 2, 2, 1, 0,
2, 0])
>>> train_labels, test_labels = model_selection.train_test_split(labels, test_size=0.4, random_state=10, shuffle=False)
>>> train_labels
array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1])
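Note that splitting data and labels in two separate calls, as above, only keeps samples and labels paired if both calls use the same random_state. Because *arrays accepts any number of arrays, the usual pattern is to pass them together so they are split with identical indices (a minimal sketch continuing the session above):
>>> train_data, test_data, train_labels, test_labels = model_selection.train_test_split(
...     data, labels, test_size=0.4, random_state=10)
>>> len(train_data), len(train_labels)
(90, 90)
>>> len(test_data), len(test_labels)
(60, 60)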