Support Vector Machines, Naive Bayes, and k-Nearest Neighbors (Classification): Python Implementations

Python Machine Learning

      • (II) Support Vector Machines, Naive Bayes, and k-Nearest Neighbors (Classification) in Python
      • 1. Support Vector Machine (Classification)
          • step1: Loading the handwritten digits data
          • step2: Splitting the handwritten digits data
          • step3: Recognizing handwritten digit images with a support vector machine (classification)
          • step4: Evaluating the SVM classifier's recognition performance
      • 2. Naive Bayes
          • step1: Loading the 20 Newsgroups text data
          • step2: Splitting the 20 Newsgroups text data
          • step3: Predicting news categories with a Naive Bayes classifier
          • step4: Performance evaluation
      • 3. k-Nearest Neighbors (Classification)
          • step1: Loading the Iris dataset
          • step2: Splitting the Iris dataset
          • step3: Classifying the data with a k-nearest neighbors classifier
          • step4: Evaluating the k-NN classifier's predictions

(II) Support Vector Machines, Naive Bayes, and k-Nearest Neighbors (Classification) in Python

1. Support Vector Machine (Classification)

step1: Loading the handwritten digits data
# Import the handwritten digits loader from sklearn.datasets.
from sklearn.datasets import load_digits
# Load the digit image data through the loader and store it in the variable digits.
digits = load_digits()
# Inspect the data size and feature dimensionality.
digits.data.shape

(1797, 64)
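
Each of the 64 features is one pixel of an 8x8 grayscale image. As a minimal sketch (assuming matplotlib is installed; this is an addition, not part of the original walkthrough), you can view a sample like this:

# digits.images holds the same data as digits.data, but as unflattened 8x8 arrays.
import matplotlib.pyplot as plt
plt.imshow(digits.images[0], cmap='gray')
plt.title('Label: %d' % digits.target[0])
plt.show()
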
step2: Splitting the handwritten digits data
# Import train_test_split for data splitting. In older scikit-learn this lived in
# sklearn.cross_validation; it has since moved to sklearn.model_selection.
from sklearn.model_selection import train_test_split

# Randomly take 75% of the data as training samples; the remaining 25% become test samples.
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
                                                    test_size=0.25, random_state=33)

y_train.shape
(1347,)
y_test.shape
(450,)
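
As an optional sanity check (an addition for illustration), you can confirm that the random split left all ten digit classes represented; if exact class proportions matter, train_test_split also accepts a stratify argument (e.g. stratify=digits.target).

# Count how many training examples each digit class received.
import numpy as np
print(np.bincount(y_train))
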
step3: Recognizing handwritten digit images with a support vector machine (classification)
# Import the standardization module from sklearn.preprocessing.
from sklearn.preprocessing import StandardScaler

# Import the linear support vector classifier LinearSVC from sklearn.svm.
from sklearn.svm import LinearSVC

# The training and testing feature data still need to be standardized.
ss = StandardScaler()
X_train = ss.fit_transform(X_train)
X_test = ss.transform(X_test)

# Initialize the linear support vector classifier LinearSVC.
lsvc = LinearSVC()
# Train the model.
lsvc.fit(X_train, y_train)
# Use the trained model to predict the digit classes of the test samples; store the results in y_predict.
y_predict = lsvc.predict(X_test)
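
As a side note (an addition, not from the original text), LinearSVC handles this ten-class problem with a one-vs-rest scheme, so after fitting it holds one weight vector and one intercept per digit class:

# One 64-dimensional weight vector and one intercept per class.
print(lsvc.coef_.shape)       # (10, 64)
print(lsvc.intercept_.shape)  # (10,)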

step4: Evaluating the SVM classifier's recognition performance
# Use the model's built-in score function to measure accuracy.
print('The Accuracy of Linear SVC is', lsvc.score(X_test, y_test))
The Accuracy of Linear SVC is 0.953333333333
# Use the classification_report module from sklearn.metrics for a more detailed analysis of the predictions.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predict, target_names=digits.target_names.astype(str)))

             precision    recall  f1-score   support

          0       0.92      1.00      0.96        35
          1       0.96      0.98      0.97        54
          2       0.98      1.00      0.99        44
          3       0.93      0.93      0.93        46
          4       0.97      1.00      0.99        35
          5       0.94      0.94      0.94        48
          6       0.96      0.98      0.97        51
          7       0.92      1.00      0.96        35
          8       0.98      0.84      0.91        58
          9       0.95      0.91      0.93        44

avg / total       0.95      0.95      0.95       450
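
Beyond the per-class report, a confusion matrix shows which digits are mistaken for which. A minimal sketch (an addition to the original code):

# Rows are true classes, columns are predicted classes; off-diagonal entries are errors.
from sklearn.metrics import confusion_matrix
print(confusion_matrix(y_test, y_predict))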

2. Naive Bayes

step1: Loading the 20 Newsgroups text data
# Import the news data fetcher fetch_20newsgroups from sklearn.datasets.
from sklearn.datasets import fetch_20newsgroups
# Unlike the bundled datasets used above, fetch_20newsgroups downloads the data from the Internet on demand.
news = fetch_20newsgroups(subset='all')
# Inspect the data size and details.
print(len(news.data))
print(news.data[0])

WARNING:sklearn.datasets.twenty_newsgroups:Downloading dataset from 
http://people.csail.mit.edu/jrennie/20Newsgroups/20news-bydate.tar.gz (14 MB)


18846
From: Mamatha Devineni Ratnam 
Subject: Pens fans reactions
Organization: Post Office, Carnegie Mellon, Pittsburgh, PA
Lines: 12
NNTP-Posting-Host: po4.andrew.cmu.edu



I am sure some bashers of Pens fans are pretty confused about the lack
of any kind of posts about the recent Pens massacre of the Devils. Actually,
I am  bit puzzled too and a bit relieved. However, I am going to put an end
to non-PIttsburghers' relief with a bit of praise for the Pens. Man, they
are killing those Devils worse than I thought. Jagr just showed you why
he is much better than his regular season stats. He is also a lot
fo fun to watch in the playoffs. Bowman should let JAgr have a lot of
fun in the next couple of games since the Pens are going to beat t
he pulp out of Jersey anyway.  I was very disappointed not to see the
 Islanders lose the final regular season game.         PENS RULE!!!
step2: Splitting the 20 Newsgroups text data
# Import train_test_split (from sklearn.model_selection; formerly sklearn.cross_validation).
from sklearn.model_selection import train_test_split
# Randomly sample 25% of the data as the test set.
X_train, X_test, y_train, y_test = train_test_split(news.data, news.target,
                                                    test_size=0.25, random_state=33)

step3: Predicting news categories with a Naive Bayes classifier
# Import the text feature vectorization module from sklearn.feature_extraction.text.
# See Section 3.1.1.1 on feature extraction for details.
from sklearn.feature_extraction.text import CountVectorizer

vec = CountVectorizer()
X_train = vec.fit_transform(X_train)
X_test = vec.transform(X_test)

# Import the Naive Bayes model from sklearn.naive_bayes.
from sklearn.naive_bayes import MultinomialNB

# Initialize the Naive Bayes model with the default configuration.
mnb = MultinomialNB()
# Estimate the model parameters from the training data.
mnb.fit(X_train, y_train)
# Predict the categories of the test samples; store the results in y_predict.
y_predict = mnb.predict(X_test)
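
To make the vectorization step concrete, here is a minimal sketch on a toy two-document corpus (an illustration, not part of the original code; get_feature_names_out requires a recent scikit-learn, while older versions use get_feature_names):

from sklearn.feature_extraction.text import CountVectorizer

toy = CountVectorizer()
m = toy.fit_transform(['the cat sat', 'the cat and the dog'])
# The vocabulary learned from the corpus, e.g. ['and' 'cat' 'dog' 'sat' 'the'].
print(toy.get_feature_names_out())
# Raw word counts per document, one row per document.
print(m.toarray())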

step4: Performance evaluation
# Import classification_report from sklearn.metrics for a detailed classification performance report.
from sklearn.metrics import classification_report
print('The accuracy of Naive Bayes Classifier is', mnb.score(X_test, y_test))
print(classification_report(y_test, y_predict, target_names=news.target_names))

    The accuracy of Naive Bayes Classifier is 0.839770797963
                              precision    recall  f1-score   support
    
                 alt.atheism       0.86      0.86      0.86       201
               comp.graphics       0.59      0.86      0.70       250
     comp.os.ms-windows.misc       0.89      0.10      0.17       248
    comp.sys.ibm.pc.hardware       0.60      0.88      0.72       240
       comp.sys.mac.hardware       0.93      0.78      0.85       242
              comp.windows.x       0.82      0.84      0.83       263
                misc.forsale       0.91      0.70      0.79       257
                   rec.autos       0.89      0.89      0.89       238
             rec.motorcycles       0.98      0.92      0.95       276
          rec.sport.baseball       0.98      0.91      0.95       251
            rec.sport.hockey       0.93      0.99      0.96       233
                   sci.crypt       0.86      0.98      0.91       238
             sci.electronics       0.85      0.88      0.86       249
                     sci.med       0.92      0.94      0.93       245
                   sci.space       0.89      0.96      0.92       221
      soc.religion.christian       0.78      0.96      0.86       232
          talk.politics.guns       0.88      0.96      0.92       251
       talk.politics.mideast       0.90      0.98      0.94       231
          talk.politics.misc       0.79      0.89      0.84       188
          talk.religion.misc       0.93      0.44      0.60       158
    
                 avg / total       0.86      0.84      0.82      4712
    
    
   

Naive Bayes models are widely used for classifying the vast volume of text on the Internet. Thanks to their fairly strong assumption of conditional independence among features, the number of parameters the model needs to estimate drops from exponential to linear in the number of features, which saves a great deal of memory and computation time. The price of this strong assumption, however, is that the model cannot take relationships between features into account during training, so it tends to underperform on classification tasks where the data's features are strongly correlated.
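
To see the independence assumption at work, the following sketch (toy data and a hypothetical two-class corpus; an addition to the original text) reproduces MultinomialNB's class scores by hand as a log prior plus a sum of per-word log likelihoods, which is exactly why the parameter count grows only linearly with the vocabulary:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

toy_corpus = ['good game great match', 'bad loss poor defense', 'great win good team']
toy_labels = [1, 0, 1]

toy_vec = CountVectorizer()
X_toy = toy_vec.fit_transform(toy_corpus)
clf = MultinomialNB().fit(X_toy, toy_labels)

# Score each class as log P(c) + sum over words of count(w) * log P(w | c).
x = toy_vec.transform(['good great win'])
scores = x @ clf.feature_log_prob_.T + clf.class_log_prior_
print(scores)          # unnormalized log joint probability per class
print(clf.predict(x))  # argmax of the same scores -> [1]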

3. k-Nearest Neighbors (Classification)

step1: Loading the Iris dataset
# Import the Iris data loader from sklearn.datasets.
from sklearn.datasets import load_iris
# Read the data with the loader and store it in the variable iris.
iris = load_iris()
# Inspect the data size.
iris.data.shape

(150, 4)
# Check the data description. For a machine learning practitioner, this is a good habit.
print(iris.DESCR)

Iris Plants Database

Notes
-----
Data Set Characteristics:
    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20  0.76     0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris

The famous Iris database, first used by Sir R.A Fisher

This is perhaps the best known database to be found in the
pattern recognition literature.  Fisher's paper is a classic in the field and
is referenced frequently to this day.  (See Duda & Hart, for example.)  The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant.  One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.

References
----------
   - Fisher,R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments".  IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...
step2: Splitting the Iris dataset
# Import train_test_split (from sklearn.model_selection; formerly sklearn.cross_validation) for data splitting.
from sklearn.model_selection import train_test_split
# Use train_test_split with the random seed random_state to sample 25% of the data as the test set.
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target,
                                                    test_size=0.25, random_state=33)

step3: Classifying the data with a k-nearest neighbors classifier
# Import the standardization module from sklearn.preprocessing.
from sklearn.preprocessing import StandardScaler
# Import KNeighborsClassifier, the k-nearest neighbors classifier, from sklearn.neighbors.
from sklearn.neighbors import KNeighborsClassifier

# Standardize the training and testing feature data.
ss = StandardScaler()
X_train = ss.fit_transform(X_train)
X_test = ss.transform(X_test)

# Use the k-nearest neighbors classifier to predict the classes of the test data; store the results in y_predict.
knc = KNeighborsClassifier()
knc.fit(X_train, y_train)
y_predict = knc.predict(X_test)

step4: Evaluating the k-NN classifier's predictions
# Use the model's built-in score function to measure accuracy.
print('The accuracy of K-Nearest Neighbor Classifier is', knc.score(X_test, y_test))

The accuracy of K-Nearest Neighbor Classifier is 0.894736842105
# Use the classification_report module from sklearn.metrics for a more detailed analysis of the predictions.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predict, target_names=iris.target_names))

             precision    recall  f1-score   support

     setosa       1.00      1.00      1.00         8
 versicolor       0.73      1.00      0.85        11
  virginica       1.00      0.79      0.88        19

avg / total       0.92      0.89      0.90        38
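
KNeighborsClassifier defaults to k=5 neighbors, and k is a hyperparameter worth tuning. A minimal sketch (an addition, not from the original text) of picking k with 5-fold cross-validation on the standardized training set:

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Compare a few values of k by mean cross-validated accuracy.
for k in [1, 3, 5, 7, 9]:
    model = KNeighborsClassifier(n_neighbors=k)
    scores = cross_val_score(model, X_train, y_train, cv=5)
    print('k=%d  mean CV accuracy=%.3f' % (k, scores.mean()))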
