Original title: Text classification with sklearn (feature extraction, KNN/SVM, clustering)
The workflow breaks down into the following steps:
Load the dataset
Extract features
Classification
Naive Bayes
KNN
SVM
Clustering
http://qwone.com/~jason/20Newsgroups/
provides three versions of the dataset; here we use the original one.
1. Loading the dataset
Download the dataset, extract it into the scikit_learn_data folder, and load it; see the code comments for details.
[python]
#first extract the 20 news_group dataset to /scikit_learn_data
from sklearn.datasets import fetch_20newsgroups
#all categories
#newsgroup_train = fetch_20newsgroups(subset='train')
#part categories
categories = ['comp.graphics',
              'comp.os.ms-windows.misc',
              'comp.sys.ibm.pc.hardware',
              'comp.sys.mac.hardware',
              'comp.windows.x']
newsgroup_train = fetch_20newsgroups(subset='train', categories=categories)
You can check that the data loaded correctly:
[python]
#print category names
from pprint import pprint
pprint(list(newsgroup_train.target_names))
Result:
['comp.graphics',
'comp.os.ms-windows.misc',
'comp.sys.ibm.pc.hardware',
'comp.sys.mac.hardware',
'comp.windows.x']
2. Feature extraction
The newsgroup_train we just loaded is a set of documents. We need to extract features from them, such as term frequencies, using fit_transform.
Method 1. HashingVectorizer with a fixed number of features
[python]
#newsgroup_train.data is the original documents, but we need to extract the
#feature vectors in order to model the text data
from sklearn.feature_extraction.text import HashingVectorizer
#the test set is used below, so load it here as well
newsgroups_test = fetch_20newsgroups(subset='test', categories=categories)
#note: on older sklearn this option was spelled non_negative=True
vectorizer = HashingVectorizer(stop_words='english', alternate_sign=False,
                               n_features=10000)
fea_train = vectorizer.fit_transform(newsgroup_train.data)
#HashingVectorizer is stateless, so transform() gives consistent test features
fea_test = vectorizer.transform(newsgroups_test.data)
#return feature vector 'fea_train' [n_samples,n_features]
print('Size of fea_train:' + repr(fea_train.shape))
print('Size of fea_test:' + repr(fea_test.shape))
#11314 documents, 130107 vectors for all categories
print('The average feature sparsity is {0:.3f}%'.format(
    fea_train.nnz / float(fea_train.shape[0] * fea_train.shape[1]) * 100))
Result:
Size of fea_train:(2936, 10000)
Size of fea_test:(1955, 10000)
The average feature sparsity is 1.002%
Since we kept only 10,000 words, i.e. 10,000 feature dimensions, the sparsity is not that low. In fact, with TfidfVectorizer the full vocabulary yields tens of thousands of dimensions; over all samples I counted more than 130,000 dimensions, which makes for a very sparse matrix.
**************************************************************************************************************************
As noted in the code comments above, TF-IDF extraction can produce different feature dimensions on the train and test sets. How do we make them match? There are two approaches:
Method 2. CountVectorizer+TfidfTransformer
Have the two CountVectorizers share a vocabulary:
[python]
#----------------------------------------------------
#method 2: CountVectorizer + TfidfTransformer
print('*************************\nCountVectorizer+TfidfTransformer\n*************************')
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
count_v1 = CountVectorizer(stop_words='english', max_df=0.5)
counts_train = count_v1.fit_transform(newsgroup_train.data)
print("the shape of train is " + repr(counts_train.shape))
#reuse the train vocabulary so test gets the same feature dimensions
count_v2 = CountVectorizer(vocabulary=count_v1.vocabulary_)
counts_test = count_v2.fit_transform(newsgroups_test.data)
print("the shape of test is " + repr(counts_test.shape))
tfidftransformer = TfidfTransformer()
#fit the IDF weights on the training counts only, then apply them to both sets
tfidf_train = tfidftransformer.fit_transform(counts_train)
tfidf_test = tfidftransformer.transform(counts_test)
Result:
*************************
CountVectorizer+TfidfTransformer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
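Equivalently, the CountVectorizer + TfidfTransformer pair can be chained with sklearn's Pipeline, which keeps the fit/transform bookkeeping in one place. A minimal sketch (variable names are my own):
[python]
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
pipe = Pipeline([('count', CountVectorizer(stop_words='english', max_df=0.5)),
                 ('tfidf', TfidfTransformer())])
#fit on train; transform reuses the train vocabulary and IDF weights on test
tfidf_train_p = pipe.fit_transform(newsgroup_train.data)
tfidf_test_p = pipe.transform(newsgroups_test.data)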
Method 3. TfidfVectorizer
Have the two TfidfVectorizers share a vocabulary:
[python]
#method 3: TfidfVectorizer
print('*************************\nTfidfVectorizer\n*************************')
from sklearn.feature_extraction.text import TfidfVectorizer
tv = TfidfVectorizer(sublinear_tf=True,
                     max_df=0.5,
                     stop_words='english')
tfidf_train_2 = tv.fit_transform(newsgroup_train.data)
#reuse the train vocabulary on the test set
tv2 = TfidfVectorizer(vocabulary=tv.vocabulary_)
tfidf_test_2 = tv2.fit_transform(newsgroups_test.data)
print("the shape of train is " + repr(tfidf_train_2.shape))
print("the shape of test is " + repr(tfidf_test_2.shape))
analyze = tv.build_analyzer()
tv.get_feature_names_out()  #list the feature terms (get_feature_names() on older sklearn)
Result:
*************************
TfidfVectorizer
*************************
the shape of train is (2936, 66433)
the shape of test is (1955, 66433)
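Sharing a vocabulary works, but in the two-vectorizer setup the test set's IDF weights are recomputed from the test corpus. A more common pattern is to fit a single TfidfVectorizer on the training data and only call transform on the test data, so both the vocabulary and the IDF weights come from the training set. A minimal sketch (variable names are my own):
[python]
from sklearn.feature_extraction.text import TfidfVectorizer
tv_single = TfidfVectorizer(sublinear_tf=True, max_df=0.5, stop_words='english')
#fit vocabulary and IDF weights on train, then reuse both on test
tfidf_train_s = tv_single.fit_transform(newsgroup_train.data)
tfidf_test_s = tv_single.transform(newsgroups_test.data)
#the train and test dimensions now match by construction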
In addition, sklearn ships a ready-made feature loader, fetch_20newsgroups_vectorized.
Method 4. fetch_20newsgroups_vectorized
However, this method cannot extract features for just a few categories; it returns features for all 20 categories at once:
[python]
print('*************************\nfetch_20newsgroups_vectorized\n*************************')
from sklearn.datasets import fetch_20newsgroups_vectorized
tfidf_train_3 = fetch_20newsgroups_vectorized(subset='train')
tfidf_test_3 = fetch_20newsgroups_vectorized(subset='test')
print("the shape of train is " + repr(tfidf_train_3.data.shape))
print("the shape of test is " + repr(tfidf_test_3.data.shape))
Result:
*************************
fetch_20newsgroups_vectorized
*************************
the shape of train is (11314, 130107)
the shape of test is (7532, 130107)
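If you still want only a few categories from the pre-vectorized data, one workaround (a sketch under my own assumptions, not part of the original article) is to mask the rows whose targets fall in the chosen categories:
[python]
import numpy as np
#indices of the desired categories within the full 20-class target_names list
wanted = [tfidf_train_3.target_names.index(c) for c in categories]
mask = np.isin(tfidf_train_3.target, wanted)
sub_data = tfidf_train_3.data[mask]    #sparse row slicing keeps all 130107 columns
sub_target = tfidf_train_3.target[mask]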
3. Classification
3.1 Multinomial Naive Bayes Classifier
[python]
######################################################
#Multinomial Naive Bayes Classifier
print('*************************\nNaive Bayes\n*************************')
from sklearn.naive_bayes import MultinomialNB
from sklearn import metrics
newsgroups_test = fetch_20newsgroups(subset='test',
                                     categories=categories)
fea_test = vectorizer.transform(newsgroups_test.data)
#create the Multinomial Naive Bayes Classifier
clf = MultinomialNB(alpha=0.01)
clf.fit(fea_train, newsgroup_train.target)
pred = clf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
#notice here we can see that f1_score is not equal to 2*precision*recall/(precision+recall)
#because the m_precision and m_recall we get are averaged, whereas metrics.f1_score()
#averages the per-class f1 scores, i.e., takes the size of each class into account
Note the last three comment lines: why is f1 ≠ 2*precision*recall/(precision+recall)?
The helper function calculate_result computes precision, recall, and f1:
[python]
def calculate_result(actual, pred):
    #average='weighted' reproduces the class-weighted averages described above
    #(this was the default behavior in the old sklearn this article targeted)
    m_precision = metrics.precision_score(actual, pred, average='weighted')
    m_recall = metrics.recall_score(actual, pred, average='weighted')
    print('predict info:')
    print('precision:{0:.3f}'.format(m_precision))
    print('recall:{0:0.3f}'.format(m_recall))
    print('f1-score:{0:.3f}'.format(metrics.f1_score(actual, pred, average='weighted')))
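To see why the two quantities differ, here is a toy example (data made up for illustration). With actual = [0, 0, 1, 1] and pred = [0, 1, 1, 1], class 0 has precision 1.0 and recall 0.5 (f1 ≈ 0.667), while class 1 has precision ≈ 0.667 and recall 1.0 (f1 = 0.8). Averaging the per-class f1 values gives (0.667 + 0.8)/2 ≈ 0.733, whereas combining the averaged precision 0.833 and averaged recall 0.75 harmonically gives ≈ 0.789:
[python]
from sklearn import metrics
actual = [0, 0, 1, 1]
pred = [0, 1, 1, 1]
p = metrics.precision_score(actual, pred, average='weighted')
r = metrics.recall_score(actual, pred, average='weighted')
#average of per-class f1 vs. f1 of the averages: ~0.733 vs. ~0.789
print(metrics.f1_score(actual, pred, average='weighted'), 2 * p * r / (p + r))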
3.2 KNN:
[python]
######################################################
#KNN Classifier
from sklearn.neighbors import KNeighborsClassifier
print('*************************\nKNN\n*************************')
knnclf = KNeighborsClassifier()  #default with k=5
knnclf.fit(fea_train, newsgroup_train.target)
pred = knnclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
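The default k=5 is not necessarily the best choice for this data. One way to tune it, sketched below under the assumption that the vectorized training data fits in memory, is a cross-validated grid search over n_neighbors:
[python]
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
#try a few candidate values of k with 5-fold cross-validation
grid = GridSearchCV(KNeighborsClassifier(),
                    param_grid={'n_neighbors': [1, 3, 5, 7, 9, 15]},
                    cv=5)
grid.fit(fea_train, newsgroup_train.target)
print(grid.best_params_, grid.best_score_)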
3.3 SVM:
[python]
######################################################
#SVM Classifier
from sklearn.svm import SVC
print('*************************\nSVM\n*************************')
svclf = SVC(kernel='linear')  #default is 'rbf'
svclf.fit(fea_train, newsgroup_train.target)
pred = svclf.predict(fea_test)
calculate_result(newsgroups_test.target, pred)
Results:
*************************
Naive Bayes
*************************
predict info:
precision:0.764
recall:0.759
f1-score:0.760
*************************
KNN
*************************
predict info:
precision:0.642
recall:0.635
f1-score:0.636
*************************
SVM
*************************
predict info:
precision:0.777
recall:0.774
f1-score:0.774
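On larger corpora, SVC(kernel='linear') can be slow because it trains via libsvm. sklearn also provides LinearSVC (liblinear-based), which usually trains much faster on high-dimensional sparse text features; a minimal drop-in sketch, not from the original article:
[python]
from sklearn.svm import LinearSVC
svclf2 = LinearSVC()  #linear SVM trained with liblinear; scales better on sparse text
svclf2.fit(fea_train, newsgroup_train.target)
pred2 = svclf2.predict(fea_test)
calculate_result(newsgroups_test.target, pred2)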
4. Clustering
[python]
######################################################
#KMeans Cluster
from sklearn.cluster import KMeans
print('*************************\nKMeans\n*************************')
pred = KMeans(n_clusters=5)
pred.fit(fea_test)
calculate_result(newsgroups_test.target, pred.labels_)
Result:
*************************
KMeans
*************************
predict info:
precision:0.264
recall:0.226
f1-score:0.213
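The low scores are partly an artifact of the evaluation: KMeans assigns arbitrary cluster IDs, so comparing pred.labels_ against the true class labels with precision/recall only makes sense if the IDs happen to line up. Label-permutation-invariant measures such as the adjusted Rand index or normalized mutual information are the usual choice for clustering; a brief sketch:
[python]
from sklearn import metrics
#both scores are invariant to how the cluster IDs are numbered
print('ARI: {0:.3f}'.format(metrics.adjusted_rand_score(newsgroups_test.target, pred.labels_)))
print('NMI: {0:.3f}'.format(metrics.normalized_mutual_info_score(newsgroups_test.target, pred.labels_)))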