03_Classification_02_confusion matrix _.reshape([-1])_score(average="macro")_interpolation.shift_regular expressions

03_Classification_cannot import name 'fetch_mldata'_cross_val_predict()_ML_Project Checklist : https://blog.csdn.net/Linli522362242/article/details/103786116

Error Analysis

Of course, if this were a real project, you would follow the steps in your Machine Learning project checklist: exploring data preparation options, trying out multiple models, shortlisting the best ones and fine-tuning their hyperparameters using GridSearchCV, and automating as much as possible, as you did in the previous chapter (https://blog.csdn.net/Linli522362242/article/details/103387527). Here, we will assume that you have found a promising model and you want to find ways to improve it. One way to do this is to analyze the types of errors it makes.
First, you can look at the confusion matrix. You need to make predictions using the cross_val_predict() function, then call the confusion_matrix() function, just like you did earlier:

y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)

conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx


That’s a lot of numbers. It’s often more convenient to look at an image representation of the confusion matrix, using Matplotlib’s matshow() function:

def plot_confusion_matrix(matrix):
    #if you prefer color and colorbar
    fig = plt.figure(figsize=(8,8))
    ax = fig.add_subplot(111)
    cax = ax.matshow(matrix)
    fig.colorbar(cax)

plot_confusion_matrix(conf_mx)


plt.matshow(conf_mx, cmap=plt.cm.gray)
plt.show()


This confusion matrix looks fairly good, since most images are on the main diagonal, which means that they were classified correctly. The 5s look slightly darker than the other digits, which could mean that there are fewer images of 5s in the dataset or that the classifier does not perform as well on 5s as on other digits. In fact, you can verify that both are the case.
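A quick, illustrative way to check both claims (this check is not in the original text; exact numbers depend on your run):

import numpy as np

print( (y_train == 5).sum() )  # 5421: the 5s are the least frequent digit in the MNIST training set
per_class_recall = conf_mx.diagonal() / conf_mx.sum(axis=1)
print( per_class_recall )      # the entry for class 5 is among the lowest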

Let’s focus the plot on the errors. First, you need to divide each value in the confusion matrix by the number of images in the corresponding class, so you can compare error rates instead of absolute number of errors (which would make abundant classes look unfairly bad):

row_sums = conf_mx.sum(axis=1, keepdims= True)
norm_conf_mx = conf_mx / row_sums

row_sums


import pandas as pd
pd.DataFrame(norm_conf_mx)   # 22/5923=0.003714


Now let’s fill the diagonal with zeros to keep only the errors, and let’s plot the result:

np.fill_diagonal(norm_conf_mx, 0)

import pandas as pd
pd.DataFrame(norm_conf_mx)


plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
plt.show()


plot_confusion_matrix(norm_conf_mx)

Now you can clearly see the kinds of errors the classifier makes. Remember that rows represent actual classes, while columns represent predicted classes. The column for class 8 is quite bright, which tells you that many images get misclassified as 8s (false positives). Similarly, the rows for classes 8 and 9 are also quite bright, telling you that 8s and 9s are often confused with other digits (false negatives). Conversely, some rows are pretty dark, such as row 1: this means that most 1s are classified correctly (a few are confused with 8s, but that's about it). Notice that the errors are not perfectly symmetrical; for example, there are more 5s misclassified as 8s than the reverse.

Analyzing the confusion matrix can often give you insights on ways to improve your classifier. Looking at this plot, it seems that your efforts should be spent on improving classification of 8s and 9s, as well as fixing the specific 3/5 confusion. For example, you could try to gather more training data for these digits. Or you could engineer new features that would help the classifier — for example, writing an algorithm to count the number of closed loops (e.g., 8 has two, 6 has one, 5 has none). Or you could preprocess the images (e.g., using Scikit-Image, Pillow, or OpenCV) to make some patterns stand out more, such as closed loops.
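To make the closed-loop idea concrete, here is a rough sketch of such a feature (not from the book; the function name, the binarization threshold, and the use of scipy.ndimage are assumptions):

from scipy import ndimage

def count_closed_loops(image, threshold=50):
    # Rough sketch: binarize the 784-pixel digit, then count background regions fully enclosed by ink.
    img = image.reshape(28, 28) > threshold          # ink pixels
    background = ~img                                # background pixels
    labeled, n_regions = ndimage.label(background)   # connected background regions
    # one region is the outer background; the remaining ones are holes (closed loops)
    return max(n_regions - 1, 0)

#count_closed_loops(X_train[0])  # e.g. 0 for a typical 5, 1 for a 6 or 0, 2 for an 8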

Analyzing individual errors can also be a good way to gain insights on what your classifier is doing and why it is failing, but it is more difficult and time-consuming. For example, let's plot examples of 3s (the non-target class) and 5s (the target class); the plot_digits() function just uses Matplotlib's imshow() function:
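If plot_digits() is not already defined in your session (it comes from the earlier part of this series), here is a minimal sketch that tiles 784-pixel rows into a grid of 28×28 images (the exact layout is an assumption):

import numpy as np
import matplotlib.pyplot as plt

def plot_digits(instances, images_per_row=10, **options):
    size = 28
    images_per_row = min(len(instances), images_per_row)
    n_rows = (len(instances) - 1) // images_per_row + 1
    n_empty = n_rows * images_per_row - len(instances)    # pad so the grid is rectangular
    padded = np.concatenate([instances, np.zeros((n_empty, size * size))], axis=0)
    image_grid = padded.reshape((n_rows, images_per_row, size, size))
    big_image = image_grid.transpose(0, 2, 1, 3).reshape(n_rows * size, images_per_row * size)
    plt.imshow(big_image, cmap="binary", **options)
    plt.axis("off")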

cl_3, cl_5 = 3,5
X_33 = X_train[ (y_train==cl_3) & (y_train_pred == cl_3) ]
X_35 = X_train[ (y_train==cl_3) & (y_train_pred == cl_5) ]
X_53 = X_train[ (y_train==cl_5) & (y_train_pred == cl_3) ]
X_55 = X_train[ (y_train==cl_5) & (y_train_pred == cl_5) ]

plt.figure( figsize=(8,8) )
plt.subplot(221); plot_digits(X_33[:25], images_per_row=5)
plt.subplot(222); plot_digits(X_35[:25], images_per_row=5)
plt.subplot(223); plot_digits(X_53[:25], images_per_row=5)
plt.subplot(224); plot_digits(X_55[:25], images_per_row=5)
plt.show()

The two 5×5 blocks on the left show digits classified as 3s, and the two 5×5 blocks on the right show images classified as 5s. Some of the digits that the classifier gets wrong (i.e., in the bottom-left and top-right blocks) are so badly written that even a human would have trouble classifying them (e.g., the 5 on the 6th row and 2nd column truly looks like a 3). However, most misclassified images seem like obvious errors to us, and it's hard to understand why the classifier made the mistakes it did. The reason is that we used a simple SGDClassifier, which is a linear model. All it does is assign a weight per class to each pixel, and when it sees a new image it just sums up the weighted pixel intensities to get a score for each class. So since 3s and 5s differ only by a few pixels, this model will easily confuse them.

The main difference between 3s and 5s is the position of the small line that joins the top line to the bottom arc. If you draw a 3 with the junction slightly shifted to the left, the classifier might classify it as a 5, and vice versa. In other words, this classifier is quite sensitive to image shifting and rotation. So one way to reduce the 3/5 confusion would be to preprocess the images to ensure that they are well centered and not too rotated. This will probably help reduce other errors as well.
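For instance, a very simple centering step (an illustrative sketch, not the book's code; it handles shifting only, not rotation) could move each digit's center of mass to the middle of the 28×28 frame:

from scipy import ndimage

def center_digit(image):
    img = image.reshape(28, 28).astype(float)
    cy, cx = ndimage.center_of_mass(img)           # current center of mass
    # shift the center of mass to the geometric center (13.5, 13.5) of the 28x28 frame
    return ndimage.shift(img, [13.5 - cy, 13.5 - cx], cval=0).reshape(-1)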

Multilabel Classification

Until now each instance has always been assigned to just one class. In some cases you may want your classifier to output multiple classes for each instance. For example, consider a face-recognition classifier: what should it do if it recognizes several people on the same picture? Of course it should attach one label per person it recognizes. Say the classifier has been trained to recognize three faces, Alice, Bob, and Charlie; then when it is shown a picture of Alice and Charlie, it should output [1, 0, 1]
(meaning “Alice yes, Bob no, Charlie yes”). Such a classification system that outputs multiple binary labels is called a multilabel classification system.

We won’t go into face recognition just yet, but let’s look at a simpler example, just for illustration purposes:

from sklearn.neighbors import KNeighborsClassifier

y_train_large = (y_train>=7) #class 7,8,9 #whether or not the digit is large (7, 8, or 9)
y_train_odd = (y_train%2==1) #whether or not it is odd.
               #Translates slice objects to concatenation along the second axis
y_multilabel = np.c_[y_train_large, y_train_odd]#containing two target labels for each digit image

#The next lines create a KNeighborsClassifier instance (which supports multilabel classification, 
#but not all classifiers do) and we train it using the multiple targets array
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)

#Now you can make a prediction, and notice that it outputs two labels

knn_clf.predict([some_digit])   #some_digit=X_train[0] y_train[0]==5

#And it gets it right! The digit 5 is indeed not large (False) and odd (True).

There are many ways to evaluate a multilabel classifier, and selecting the right metric really depends on your project. For example, one approach is to measure the F1 score for each individual label (or any other binary classifier metric discussed earlier), then simply compute the average score.

This code computes the average F1 score across all labels:

from sklearn.metrics import f1_score

y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)
f1_score(y_multilabel, y_train_knn_pred, average="macro")

This assumes that all labels are equally important (average="macro"), which may not be the case. In particular, if you have many more pictures of Alice than of Bob or Charlie, you may want to give more weight to the classifier’s score on pictures of Alice. One simple option is to give each label a weight equal to its support (i.e., the number of instances with that target label). To do this, simply set average="weighted" in the preceding code.
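For example (a one-line variation of the preceding code):

f1_score(y_multilabel, y_train_knn_pred, average="weighted")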
##############################################################

from sklearn import metrics

y_true = [2, 0, 2, 2, 0, 1]
y_pred = [0, 0, 2, 2, 0, 2]

confusion_matrix(y_true, y_pred)


target_names = ['class 0', 'class 1', 'class 2']

from sklearn.metrics import classification_report

print( classification_report(y_true, y_pred, target_names=target_names) )
              precision    recall  f1-score   support

     class 0       0.67      1.00      0.80         2
     class 1       0.00      0.00      0.00         1
     class 2       0.67      0.67      0.67         3

    accuracy                           0.67         6      accuracy = correct predictions / total = (2+0+2)/6 = 0.67
   macro avg       0.44      0.56      0.49         6      simple (unweighted) arithmetic mean; all classes are equally important
weighted avg       0.56      0.67      0.60         6      each class weighted by its share (support) in y_true

macro avg: the unweighted mean over the three classes:
0.44 = (0.67+0.00+0.67)/3    0.56 = (1.00+0.00+0.67)/3    0.49 = (0.80+0.00+0.67)/3

weighted avg: each class weighted by its support (2, 1, 3):
0.56 = (0.67*2 + 0.00*1 + 0.67*3)/(2+1+3)

0.67 = (1.00*2 + 0.00*1 + 0.67*3)/(2+1+3)

0.60 = (0.80*2 + 0.00*1 + 0.67*3)/(2+1+3)
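You can reproduce these averages directly with scikit-learn (an illustrative check; values are rounded):

from sklearn.metrics import precision_recall_fscore_support

precision_recall_fscore_support(y_true, y_pred, average="macro")     # (0.44, 0.56, 0.49, None)
precision_recall_fscore_support(y_true, y_pred, average="weighted")  # (0.56, 0.67, 0.60, None)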

http://www.freesion.com/article/776116265/

##############################################################

Multioutput Classification

The last type of classification task we are going to discuss here is called multioutput-multiclass classification (or simply multioutput classification). It is simply a generalization of multilabel classification where each label can be multiclass (i.e., it can have more than two possible values).

To illustrate this, let’s build a system that removes noise from images. It will take as input a noisy digit image, and it will (hopefully) output a clean digit image, represented as an array of pixel intensities, just like the MNIST images. Notice that the
classifier’s output is multilabel (one label per pixel) 28*28 and each label can have multiple values (pixel intensity ranges from 0 to 255). It is thus an example of a multioutput classification system.

NOTE
The line between classification and regression is sometimes blurry, such as in this example. Arguably, predicting pixel intensity is more akin to regression than to classification. Moreover, multioutput systems are not limited to classification tasks; you could even have a system that outputs multiple labels per instance (e.g., one label per person recognized in the same picture), including both class labels and value labels.

Let’s start by creating the training and test sets by taking the MNIST images and adding noise to their pixel intensities using NumPy’s randint() function. The target images will be the original images:

noise = np.random.randint( 0,100, (len(X_train), 784) )
X_train_mod = X_train + noise
noise = np.random.randint( 0,100, (len(X_test), 784) )
X_test_mod = X_test + noise

y_train_mod = X_train #new labels
y_test_mod = X_test   #new labels

X_train_mod

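The next cell uses plot_digit() to display a single image. It was defined in the earlier part of this series; if it is missing from your session, here is a minimal sketch (the name and behavior are assumed from its usage here):

import matplotlib.pyplot as plt

def plot_digit(data):
    image = data.reshape(28, 28)
    plt.imshow(image, cmap="binary", interpolation="nearest")
    plt.axis("off")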

some_index =0

plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index]) #label
plt.show()


On the left is the noisy input image, and on the right is the clean target image. Now let’s train the classifier and make it clean this image:

knn_clf.fit(X_train_mod, y_train_mod) #learning
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)


Looks close enough to the target! This concludes our tour of classification. Hopefully you should now know how to select good metrics for classification tasks, pick the appropriate precision/recall tradeoff, compare classifiers, and more generally build good classification systems for a variety of tasks.

4  Extra material

Dummy (i.e., random) classifier

from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.datasets import fetch_openml
from sklearn.neighbors import KNeighborsClassifier
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

mnist = fetch_openml('mnist_784', version=1)

X, y = mnist["data"], mnist["target"]
y=y.astype(np.uint8) #very important to openCV
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:] #for tracking the steps

y_train_5 = (y_train==5)
y_test_5 = (y_test==5)
dmy_clf = DummyClassifier()
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_probas_dmy


#column 0: the probability of belonging to the non-5 class

#column 1: the probability of belonging to the 5 class

The predict_proba() method returns an array containing a row per instance and a column per class, each containing the probability that the given instance belongs to the given class (e.g., 70% chance that the image represents a 5):

def plot_roc_curve(fpr, tpr, label=None):
    plt.plot(fpr, tpr, lw=2, label=label)
    plt.plot([0,1], [0,1], 'b--') #dashed diagonal
    plt.axis([0,1, 0,1])
    plt.xlabel("False Positive Rate (Fall-Out)", fontsize=16)
    plt.ylabel("True Positive Rate (Recall)", fontsize=16)
    plt.grid(True)

y_scores_dmy = y_probas_dmy[:,1]

from sklearn.metrics import roc_curve

fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
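As a sanity check (not in the original text), the ROC curve of a purely random classifier hugs the diagonal, so its area under the curve should come out close to 0.5:

from sklearn.metrics import roc_auc_score

roc_auc_score(y_train_5, y_scores_dmy)  # ~0.5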


Exercises


1. Try to build a classifier for the MNIST dataset that achieves over 97% accuracy on the test set. Hint: the KNeighborsClassifier works quite well for this task; you just need to find good hyperparameter values (try a grid search on the weights and n_neighbors hyperparameters).
https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html#sklearn.neighbors.KNeighborsClassifier

class sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=None, **kwargs)

weights: str or callable, optional (default = ‘uniform’)

weight function used in prediction. Possible values:

  • ‘uniform’ : uniform weights. All points in each neighborhood are weighted equally.

  • ‘distance’ : weight points by the inverse of their distance. in this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.

from sklearn.model_selection import GridSearchCV

param_grid = [{'weights':["uniform", "distance"], 
               "n_neighbors":[3,4,5]
              }
             ]

knn_clf = KNeighborsClassifier()
grid_search = GridSearchCV(knn_clf, param_grid, cv=5, verbose=3)
grid_search.fit(X_train, y_train)


find good hyperparameter values

grid_search.best_params_

grid_search.best_score_  #over 97% accuracy(default) on the train set. 

over 97% accuracy on the test set.

from sklearn.metrics import accuracy_score

y_pred = grid_search.predict(X_test)
accuracy_score(y_test, y_pred)

2. Write a function that can shift an MNIST image in any direction (left, right, up, or down) by one pixel. Then, for each image in the training set, create four shifted copies (one per direction) and add them to the training set. Finally, train your best model on this expanded training set and measure its accuracy on the test set. You should observe that your model performs even better now! This technique of artificially growing the training set is called data augmentation or training set expansion.

Write a function that can shift an MNIST image in any direction (left, right, up, or down) by one pixel

https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.shift.html#scipy.ndimage.shift

https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.reshape.html

from scipy.ndimage.interpolation import shift

def shift_image(image, dx, dy):
    image = image.reshape((28,28)) #(784,) --> (28,28)
                   #[dy,dx]  #shifts the image dy pixels down and dx pixel to the right. 
    #The input is extended by filling all values beyond the edge with the same "constant" value,
    #defined by the "cval" parameter(here is 0).    
    shifted_image = shift(image, [dy, dx], cval=0, mode="constant")
    return shifted_image.reshape([-1]) #(28*28) --> (784,)


image = X_train[1000] #digit 0
#image.shape==(784,)
shifted_image_down = shift_image(image, 0, 5)  # use 5 instead of 1 so the shift is easy to see
shifted_image_left = shift_image(image, -5, 0) # use -5 instead of -1 so the shift is easy to see

plt.figure(figsize=(12,3))

plt.subplot(131)
plt.title("Original", fontsize=14
#https://matplotlib.org/gallery/images_contours_and_fields/interpolation_methods.html?highlight=interpolation
plt.imshow(image.reshape((28,28)), interpolation="nearest", cmap="Greys")

plt.subplot(132)
plt.title("Shifted down", fontsize=14)
plt.imshow(shifted_image_down.reshape((28,28)), interpolation="nearest", cmap="Greys" )

plt.subplot(133)
plt.title("Shifted down", fontsize=14)
plt.imshow(shifted_image_left.reshape((28,28)), interpolation="nearest", cmap="Greys" )
plt.show()


X_train = X_train.astype(np.uint8) # cast to uint8 to save RAM (important if your machine runs out of memory)

X_train_augmented = [image for image in X_train] #image represents each row or every instance
y_train_augmented = [label for label in y_train]
         #shift   left, right,     up,  down
for dx, dy in ( (-1,0), (1,0), (0,-1), (0,1) ): #, (0,-1), (0,1)
    for image, label in zip(X_train, y_train):
        X_train_augmented.append( shift_image(image, dx,dy) )
        y_train_augmented.append( label )
        
X_train_augmented = np.array(X_train_augmented)
y_train_augmented = np.array(y_train_augmented)

                                    # 300000
shuffle_idx = np.random.permutation(len(X_train_augmented))
X_train_augmented = X_train_augmented[shuffle_idx]
y_train_augmented = y_train_augmented[shuffle_idx]

knn_clf = KNeighborsClassifier(n_neighbors= 4, weights= 'distance')
knn_clf.fit(X_train_augmented, y_train_augmented)
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
                     metric_params=None, n_jobs=None, n_neighbors=4, p=2,
                     weights='distance')
y_pred = knn_clf.predict(X_test)

from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
0.9763

By simply augmenting the data, we got about a 0.5% accuracy boost. :)

3. Tackle the Titanic dataset

The goal is to predict whether or not a passenger survived based on attributes such as their age, sex, passenger class, where they embarked and so on.

First, log in to Kaggle and go to the Titanic challenge to download train.csv and test.csv. Save them to the datasets/titanic directory.

Next, let's load the data:

import os

Titanic_Path = os.path.join("../datasets", "titanic")

import pandas as pd

def load_titanic_data(filename, titanic_path = Titanic_Path):
    csv_path = os.path.join(titanic_path, filename)
    return pd.read_csv(csv_path)

train_data = load_titanic_data("train.csv")
test_data = load_titanic_data("test.csv")

The data is already split into a training set and a test set. However, the test data does not contain the labels: your goal is to train the best model you can using the training data, then make your predictions on the test data and upload them to Kaggle to see your final score.

Let's take a peek at the top few rows of the training set and the test set:

train_data.head() 


test_data.head()

The attributes have the following meaning:

Survived: that's the target, 0 means the passenger did not survive, while 1 means he/she survived.

Pclass: passenger class. 1 = 1st = Upper, 2 = 2nd = Middle, 3 = 3rd = Lower

Name, Sex, Age: self-explanatory

SibSp: how many siblings & spouses of the passenger aboard the Titanic.

Parch: how many children & parents of the passenger aboard the Titanic.

Ticket: ticket id

Fare: price paid (in pounds)

Cabin: passenger's cabin number

Embarked: where the passenger embarked the Titanic; C = Cherbourg, Q = Queenstown, S = Southampton

train_data.info()

Okay, the Age, Cabin and Embarked attributes are sometimes null (less than 891 non-null), especially the Cabin (77% are null). We will ignore the Cabin for now and focus on the rest. The Age attribute has about 19% null values, so we will need to decide what to do with them. Replacing null values with the median age seems reasonable.

The Name and Ticket attributes may have some value, but they will be a bit tricky to convert into useful numbers that a model can consume. So for now, we will ignore them.

################Let's take a look at the numerical attributes:################

train_data.describe()


When interpreting these statistics, ignore ID-like columns (e.g. PassengerId) and categorical attributes that happen to be encoded as numbers (e.g. Pclass). The meaningful numerical attributes:

Survived: Yikes, only 38% (mean = 0.383838) survived. :( That's close enough to 40%, so accuracy will be a reasonable metric to evaluate our model.

Fare: The mean Fare was £32.20, which does not seem so expensive (but it was probably a lot of money back then).

Age: The mean Age was less than 30 years old.

Let's check that the target ("Survived")is indeed 0 or 1:

train_data['Survived'].value_counts()

################Now let's take a quick look at all the categorical attributes:################

train_data['Pclass'].value_counts()

train_data['Sex'].value_counts()

train_data['Embarked'].value_counts()

The Embarked attribute tells us where the passenger embarked: C=Cherbourg, Q=Queenstown, S=Southampton.

Note: the code below uses a mix of Pipeline, FeatureUnion and a custom DataFrameSelector to preprocess some columns differently. Since Scikit-Learn 0.20, it is preferable to use a ColumnTransformer.

Now let's build our preprocessing pipelines. We will reuse the DataFrameSelector to select specific attributes from the DataFrame:

from sklearn.base import BaseEstimator, TransformerMixin

class DataFrameSelector(BaseEstimator, TransformerMixin):
    def __init__(self, attribute_names):
        self.attribute_names = attribute_names
        
    def fit(self, X, y=None):
        return self
    
    def transform(self, X):
        return X[self.attribute_names]

Let's build the pipeline for the numerical attributes:

from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer

num_pipeline = Pipeline([
    ("select_numeric", DataFrameSelector(["Age", "SibSp", "Parch", "Fare"])),
    ("imputer", SimpleImputer(strategy="median"))#to estimate
])

num_pipeline.fit_transform(train_data) 


We will also need an imputer for the string categorical columns (the regular SimpleImputer does not work on those):

# Inspired from stackoverflow.com/questions/25239958
class MostFrequentImputer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        self.most_frequent_ = pd.Series([X[col].value_counts().index[0] for col in X],
                                        index=X.columns
                                       )
        return self
    
    def transform(self, X, y=None):
        return X.fillna(self.most_frequent_)

from sklearn.preprocessing import OneHotEncoder

cat_pipeline = Pipeline([
    ("select_cat", DataFrameSelector(["Pclass", "Sex", "Embarked"])),
    ("imputer", MostFrequentImputer()),
    ("cat_encoder", OneHotEncoder(sparse=False))
])

cat_pipeline.fit_transform(train_data)


Finally, let's join the numerical and categorical pipelines:

from sklearn.pipeline import FeatureUnion

preprocess_pipeline = FeatureUnion(transformer_list=[
                                    ('num_pipeline', num_pipeline),
                                    ('cat_pipeline', cat_pipeline),
                                  ])
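As noted above, since Scikit-Learn 0.20 you could express the same preprocessing with a ColumnTransformer instead of DataFrameSelector + FeatureUnion. A sketch (not used in the rest of this post; it reuses the MostFrequentImputer defined above):

from sklearn.compose import ColumnTransformer

preprocess_pipeline_ct = ColumnTransformer([
    ("num", SimpleImputer(strategy="median"), ["Age", "SibSp", "Parch", "Fare"]),
    ("cat", Pipeline([("imputer", MostFrequentImputer()),
                      ("cat_encoder", OneHotEncoder(sparse=False))]),
            ["Pclass", "Sex", "Embarked"]),
])
#preprocess_pipeline_ct.fit_transform(train_data)  # same 12 output columns as the FeatureUnion below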

Cool! Now we have a nice preprocessing pipeline that takes the raw data and outputs numerical input features that we can feed to any Machine Learning model we want.

X_train = preprocess_pipeline.fit_transform(train_data)
X_train  
#"Age", "SibSp", "Parch", "Fare", "Pclass #1","Pclass #2","Pclass #3", "Sex Female", 
#"Sex male", "Embarked C=Cherbourg", "Embarked Q=Queenstown", "Embarked S=Southampton"


Let's not forget to get the labels:

y_train = train_data['Survived']  #Survived: 0_No, 1_Yes
y_train.head()

We are now ready to train a classifier. Let's start with an SVC:

from sklearn.svm import SVC

svm_clf = SVC(gamma="auto") # gamma = 1/n_features
svm_clf.fit(X_train, y_train)


Great, our model is trained, let's use it to make predictions on the test set:

X_test = preprocess_pipeline.transform(test_data)
y_pred = svm_clf.predict(X_test)

And now we could just build a CSV file with these predictions (respecting the format excepted by Kaggle), then upload it and hope for the best. But wait! We can do better than hope. Why don't we use cross-validation to have an idea of how good our model is?

from sklearn.model_selection import cross_val_score

svm_scores = cross_val_score(svm_clf, X_train, y_train, cv=10)
svm_scores

svm_scores.mean()


Okay, over 73% accuracy, clearly better than random chance, but it's not a great score. Looking at the leaderboard for the Titanic competition on Kaggle, you can see that you need to reach above 80% accuracy to be within the top 10% Kagglers. Some reached 100%, but since you can easily find the list of victims of the Titanic, it seems likely that there was little Machine Learning involved in their performance! ;-) So let's try to build a model that reaches 80% accuracy.

Let's try a RandomForestClassifier:

from sklearn.ensemble import RandomForestClassifier

forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
forest_scores = cross_val_score(forest_clf, X_train, y_train, cv=10)
forest_scores.mean()

That's much better!

Instead of just looking at the mean accuracy across the 10 cross-validation folds, let's plot all 10 scores for each model, along with a box plot highlighting the lower and upper quartiles, and "whiskers" showing the extent of the scores (thanks to Nevin Yilmaz for suggesting this visualization). Note that the boxplot() function detects outliers (called "fliers") and does not include them within the whiskers. Specifically, if the lower quartile is Q1 and the upper quartile is Q3, then the interquartile range is IQR = Q3 - Q1 (this is the box's height), and any score lower than Q1 - 1.5 × IQR is a flier, and so is any score greater than Q3 + 1.5 × IQR.
https://blog.csdn.net/Linli522362242/article/details/91037961

plt.figure(figsize=(8,6))
plt.plot([1]*10, svm_scores, "x", c='b')
plt.plot([2]*10, forest_scores, "x", c='b')
plt.boxplot( [svm_scores, forest_scores], labels=("SVM", "Random Forest") )
plt.ylabel("Accuracy", fontsize=14)
plt.show()


To improve this result further, you could:

  • Compare many more models and tune hyperparameters using cross validation and grid search,
  • Do more feature engineering, for example:
    • replace SibSp and Parch with their sum,
    • try to identify parts of names that correlate well with the Survived attribute (e.g. if the name contains "Countess", then survival seems more likely),
    • try to convert numerical attributes to categorical attributes: for example, different age groups had very different survival rates (see below), so it may help to create an age bucket category and use it instead of the age. Similarly, it may be useful to have a special category for people traveling alone since only 30% of them survived (see below).
train_data["AgeBucket"] = train_data["Age"] //15*15
train_data[["AgeBucket", "Survived"]].groupby(["AgeBucket"]).mean()


train_data["RelativesOnboard"] = train_data["SibSp"] + train_data["Parch"]
train_data[["RelativesOnboard", "Survived"]].groupby(['RelativesOnboard']).mean()

4. Build a spam classifier (a more challenging exercise):

  • Download examples of spam and ham from Apache SpamAssassin’s public datasets.
    • 20030228_spam.tar: https://pan.baidu.com/s/1RstRNDz0WEMxbiaTITDB-Q
    • 20030228_easy_ham.tar: https://pan.baidu.com/s/1LBcNQWhnivDFLkQKiRnqtQ
  • Unzip the datasets and familiarize yourself with the data format.
  • import os
    import tarfile
    import urllib
    
    Download_Root = "http://spamassassin.apache.org/old/publiccorpus/"
    HAM_URL = Download_Root + "20030228_easy_ham.tar.bz2"
    SPAM_URL = Download_Root + "20030228_spam.tar.bz2"
    SPAM_PATH = os.path.join("../datasets", "spam") #'../datasets\\spam'

    ...

    def fetch_spam_data(spam_url = SPAM_URL, spam_path=SPAM_PATH):
        if not os.path.isdir(spam_path):
            os.makedirs(spam_path)#create a folder "spam" under the current computer's directory #../datasets/spam
        # 20030228_spam.tar.bz2/20030228_spam.tar/spam/...datafiles...
        # 20030228_easy_ham.tar.bz2/20030228_easy_ham.tar/easy_ham/...datafiles...
        # e.g. after unziped file, we got a folder named spam or easy_ham 
    
        for filename, url in( ("ham.tar.bz2", HAM_URL), ("spam.tar.bz2", SPAM_URL) ):
            path = os.path.join(spam_path, filename)    #current computer's directory+specified filename
            if not os.path.isfile(path):  #../datasets/spam/filename in ["ham.tar.bz2", "spam.tar.bz2"]
                urllib.request.urlretrieve(url, path)#download file from url then save it to specified path
            tar_bz2_file = tarfile.open(path)    #specified path: current computer's directory+specified filename
            tar_bz2_file.extractall(path=SPAM_PATH) #Save Path['../datasets/spam'] of Unziped File 
            tar_bz2_file.close()   
    
    fetch_spam_data()


  • Next, let's load all the emails:

    HAM_DIR = os.path.join(SPAM_PATH, "easy_ham") #'../datasets\\spam\\easy_ham'
    SPAM_DIR = os.path.join(SPAM_PATH, "spam")  #'../datasets\\spam\\spam'
    ham_filenames = [name for name in sorted( os.listdir(HAM_DIR) ) if len(name) > 20 ]
    spam_filenames = [name for name in sorted( os.listdir(SPAM_DIR) ) if len(name) >20 ]
    
    #len(ham_filenames)  #2500
    #len(spam_filenames) #500

    We can use Python's email module to parse these emails (this handles headers, encoding, and so on):

    import email
    import email.policy
                                                #'../datasets\\spam'
    def load_email(is_spam, filename, spam_path = SPAM_PATH):
        directory = "spam" if is_spam else "easy_ham"
        with open( os.path.join(spam_path, directory, filename), "rb" ) as f:
            return email.parser.BytesParser(policy=email.policy.default).parse(f)
    
    ham_emails = [ load_email(is_spam=False, filename=name) for name in ham_filenames ]
    spam_emails = [ load_email(is_spam=True, filename=name) for name in spam_filenames ]



  • Let's look at one example of ham and one example of spam, to get a feel of what the data looks like:

    print(ham_emails[1].get_content().strip())



print(spam_emails[6].get_content().strip())



Some emails are actually multipart, with images and attachments (which can have their own attachments). Let's look at the various types of structures we have:     https://docs.python.org/3/library/email.message.html

def get_email_structure(email):
    if isinstance(email, str):
        return email
    
    #if email is an object then
    payload = email.get_payload()
    if isinstance(payload, list):
        return "multipart({})".format(", ".join([get_email_structure(sub_email)
                                                 for sub_email in payload
                                                ]
                                               )
                                     )
    else:
        return email.get_content_type()


from collections import Counter

def structures_counter(emails):
    structures = Counter()
    for email in emails:
        structure = get_email_structure(email)
        structures[structure] += 1
    return structures
#spam_emails[13] is just a single email object with multiple (header: value) items, e.g. ('X-Info2': 'SGO')
structures_counter(spam_emails[13]).most_common() #most_common including sort


#spam_emails[13:14]: a list containing a single email object (index 13)
structures_counter(spam_emails[13:14]).most_common()

print(spam_emails[13].get_content().strip())


(Screenshots omitted: the raw data file of this email, and its HTML content rendered in Google Chrome.)

#################################### count the structure of all emails with python dictionary format

structures_counter(ham_emails).most_common()


structures_counter(spam_emails).most_common()


It seems that the ham emails are more often plain text, while spam has quite a lot of HTML. Moreover, quite a few ham emails are signed using PGP, while no spam is. In short, it seems that the email structure is useful information to have.

Now let's take a look at the email headers:

#items(): Return a list of 2-tuples containing all the message’s field headers and values.
for header, value in spam_emails[13].items():
    print(header, ":", value)


There's probably a lot of useful information in there, such as the sender's email address (Great Offers [email protected] looks fishy), but we will just focus on the Subject header(ad info):

spam_emails[13]["Subject"]

  • Okay, before we learn too much about the data, let's not forget to split it into a training set and a test set:
import numpy as np
from sklearn.model_selection import train_test_split

X = np.array(ham_emails + spam_emails)
                        #Label 0             #Label 1
y = np.array([0] * len(ham_emails) + [1]*len(spam_emails))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

Okay, let's start writing the preprocessing functions.

First, we will need a function to convert HTML to plain text. Arguably the best way to do this would be to use the great BeautifulSoup library, but I would like to avoid adding another dependency to this project, so let's hack a quick & dirty solution using regular expressions (at the risk of un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment). The following function first drops the <head> section, then converts all <a> tags to the word HYPERLINK, then it gets rid of all HTML tags, leaving only the plain text. For readability, it also replaces multiple newlines with single newlines, and finally it unescapes html entities (such as &gt; or &nbsp;):
#########################################Extra Materials ( Regular Expression) extract content from html

https://docs.python.org/2/library/re.html

import re
inputStr = "hello python,ni hao c,zai jian python"
# the entire input matches the pattern, so the whole string is replaced:
replaceStr = re.sub(r"hello (\w+),ni hao (\w+),zai jian python", "PHP", inputStr, flags=re.M | re.S)
print(replaceStr)  # PHP

                                                                            #Label 1: spam email  
html_spam_emails = [email for email in X_train[y_train==1] if get_email_structure(email) =="text/html"
                   ]
sample_html_spam = html_spam_emails[7]
sample_html_spam_content=sample_html_spam.get_content().strip()[:1000]
print(sample_html_spam_content, "...")


first drops the <head> section

import re
# '<head.*?>.*?</head>' matches one <head ...> ... </head> block:
# it starts at "<head", spans any number of characters (non-greedy), and ends at "</head>"
text = re.sub('<head.*?>.*?</head>', '', sample_html_spam_content, flags=re.M | re.S | re.I)
print(text)


then converts all <a> tags to the word HYPERLINK

# \s matches any whitespace character
# <a\s.*?> starts with "<a", then one whitespace character, then any number of characters (non-greedy) up to ">"
# i.e. it matches an opening anchor tag such as <a href="...">
text = re.sub('<a\s.*?>', " HYPERLINK ", text, flags=re.M|re.S|re.I)
print(text) #no visible effect here since this sample contains no hyperlink


then it gets rid of all HTML tags, leaving only the plain text

#<.*?> matches anything that starts with "<" and ends with ">" (non-greedy, so it stops at the first ">")
#In other words, it matches any remaining HTML tag
text = re.sub('<.*?>', '', text, flags = re.M | re.S) #extract string from tags
print(text)


For readability, it also replaces multiple newlines with single newlines

#(...) Matches whatever regular expression is inside the parentheses
#r'...' raw string notation for regular expression patterns(...)
#\s  matches any whitespace character, this is equivalent to the set [ \t\n\r\f\v]
#matches the expression "\s*\n" (white space character plus \n) at least one time(1+)
text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S )
print(text)

Finally, it unescapes html entities (such as &gt; or &nbsp;):

from html import unescape

unescape(text)

from html import unescape
print(unescape(text))

#########################################

import re
from html import unescape

def html_to_plain_text(html):
    # re.I (ignore case;  re.M (multi-line); re.S (let dot or "." matches all including "a newline")
    text = re.sub('<head.*?>.*?</head>', '', html, flags=re.M | re.S | re.I)
    text = re.sub('<a\s.*?>', " HYPERLINK ", text, flags=re.M | re.S | re.I)
    text = re.sub('<.*?>', '', text, flags = re.M | re.S) #extract string from tags
    text = re.sub(r'(\s*\n)+', '\n', text, flags=re.M | re.S )
    return unescape(text)


And this is the resulting plain text:

print( html_to_plain_text( sample_html_spam.get_content())[:1000], "..." )


Great! Now let's write a function that takes an email as input and returns its content as plain text, whatever its format is:

https://docs.python.org/3/library/email.message.html


def email_to_text(email):
        html = None
        #The walk() method is an all-purpose generator which can be used to iterate over all the parts and subparts of 
        #a message object tree, in depth-first traversal order. You will typically use walk() as the iterator in a for
        #loop; each iteration returns the next subpart.
        for part in email.walk():
            ctype = part.get_content_type()
            if not ctype in ("text/plain", "text/html"):
                continue #go to next for loop
            try:
                content = part.get_content()
            except:# in case of encoding issues
                content = str(part.get_payload())
                
            if ctype=="text/plain":
                return content
            else:
                html=content
        if html:
            return html_to_plain_text(html)

print( email_to_text(sample_html_spam)[:100], "..." )


Let's throw in some stemming! For this to work, you need to install the Natural Language Toolkit (NLTK http://www.nltk.org/ ). It's as simple as running the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the --user option):

$ pip3 install nltk

https://blog.csdn.net/qq_34504481/article/details/82150889

try:
    import nltk
    
    stemmer = nltk.PorterStemmer()
    for word in ("Computations", "Computation", "Computing", "Computed", "Compute", "Compulsive"):
        print( word, "=>", stemmer.stem(word) )
except ImportError:
    print("Error: stemming requires the NLTK module.")
    stemmer = None


We will also need a way to replace URLs with the word "URL". For this, we could use hard core regular expressions (https://mathiasbynens.be/demo/url-regex) but we will just use the urlextract (https://github.com/lipoja/URLExtract )library. You can install it with the following command (don't forget to activate your virtualenv first; if you don't have one, you will likely need administrator rights, or use the --user option):

$ pip3 install urlextract

try:
    import urlextract # may require an Internet connection to download root domain names
    
    url_extractor = urlextract.URLExtract()
    print(url_extractor.find_urls("Will it detect github.com and https://youtu.be/7Pq-S557XQU?t=3m32s")
         )
except ImportError:
    print("Error:  replacing URLs requires the urlextract module.")
    url_extractor = None

We are ready to put all this together into a transformer that we will use to convert emails to word counters. Note that we split sentences into words using Python's split() method, which uses whitespaces for word boundaries. This works for many written languages, but not all. For example, Chinese and Japanese scripts generally don't use spaces between words, and Vietnamese often uses spaces even between syllables. It's okay in this exercise, because the dataset is (mostly) in English.

https://www.cnblogs.com/jin-zhe/p/9773081.html

https://docs.python.org/3/library/re.html

from sklearn.base import BaseEstimator, TransformerMixin

class EmailToWordCounterTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, strip_headers=True, lower_case=True, remove_punctuation=True, 
                 replace_urls=True, replace_numbers=True, stemming=True):
        self.strip_headers = strip_headers
        self.lower_case = lower_case
        self.remove_punctuation = remove_punctuation
        self.replace_urls = replace_urls
        self.replace_numbers = replace_numbers
        self.stemming = stemming
        
    def fit(self, X, y=None):
        return self
    
    def transform(self, X, y=None):
        X_transformed = []
        for email in X:
            text = email_to_text(email) or ""
            if self.lower_case:
                text = text.lower()
            if self.replace_urls and url_extractor is not None:
                urls = list( set(url_extractor.find_urls(text)) )
                #sort by the length of each url
                urls.sort(key=lambda eachUrl: len(eachUrl), reverse=True)
                for url in urls:
                    text = text.replace(url, " URL ")
                    
            if self.replace_numbers: #e.g. 1.99714E13=19971400000000, or e.g. 123
                #?  matches the preceding sub-expression zero or one time; e.g. "do(es)?" matches the "do" in "do" or "does". ? is equivalent to {0,1}
                #?: matches the pattern but does not capture it (a non-capturing group: nothing is stored for later use).
                #   This is useful when combining alternatives of a pattern with "|", e.g.
                #   'industr(?:y|ies)' is a more concise expression than 'industry|industries'.
                text = re.sub(r'\d+(?:\.\d*(?:[eE]\d+))?', 'NUMBER', text)

            if self.remove_punctuation: # \W == [^A-Za-z0-9_], i.e. matches any non-word character
                text = re.sub(r'\W+', ' ', text, flags=re.M)
            
            #words split and count
            word_counts = Counter(text.split())#text.split() return a word list then be converted to word_counts Dict
            if self.stemming and stemmer is not None: # stemmer: the word-stem analyzer
                stemmed_word_counts = Counter() #from collections import  Counter Dict
                for word, count in word_counts.items():
                    stemmed_word = stemmer.stem(word) # stemmed_word: the word stem
                    stemmed_word_counts[stemmed_word] += count # stemmed_word_counts Dict
                word_counts = stemmed_word_counts
            X_transformed.append(word_counts)
        return np.array(X_transformed)

Let's try this transformer on a few emails:

X_few = X_train[:3]
X_few_wordcounts = EmailToWordCounterTransformer().fit_transform(X_few)
X_few_wordcounts


This looks about right!

Now we have the word counts, and we need to convert them to vectors. For this, we will build another transformer whose fit()method will build the vocabulary (an ordered list of the most common words) and whose transform() method will use the vocabulary to convert word counts to vectors. The output is a sparse matrix.

from scipy.sparse import csr_matrix

class WordCounterToVectorTransformer(BaseEstimator, TransformerMixin):
    def __init__(self, vocabulary_size=1000):
        self.vocabulary_size = vocabulary_size
        
    def fit(self, X, y=None):
        total_count = Counter() #create a dict for all emails' words{ word: min(amount,10) }
        for word_count in X:
            for word, count in word_count.items():
                total_count[word] += min(count, 10)
                                 #sort   
        most_common = total_count.most_common()[:self.vocabulary_size]
        self.most_common =most_common
        #most_common:  
        #[('the',10),('of',10),('and',10),('to',6),('url',5),('all',4),('in',3),('christian',3),('on',3),('by',3)]
        #{word: index+1}
        self.vocabulary_ = {word: index+1 for index, (word, count) in enumerate(most_common)}
        #{'the': 1, 'of': 2, 'and': 3, 'to': 4, 'url': 5, 'all': 6, 'in': 7, 'christian': 8, 'on': 9, 'by': 10}
        return self
    
    def transform(self, X, y=None):
        rows = []
        cols = []
        data = []
        #print(self.vocabulary_)
        #{'the': 1, 'of': 2, 'and': 3, 'to': 4, 'url': 5, 'all': 6, 'in': 7, 'christian': 8, 'on': 9, 'by': 10}
        for row, word_count in enumerate(X):
            for word, count in word_count.items():
                rows.append(row)
                cols.append(self.vocabulary_.get(word,0)) #put the data on 0th column if we don't find the word 
                                #put the data on specified column=index if we find the word and return the index
                data.append(count)
                #print("count",count)
                #print("self.vocabulary_.get(word,0)",word,self.vocabulary_.get(word,0))
            print("rows: ", rows)    
            print("cols: ", cols)
            print("data: ", data)
        return csr_matrix( (data, (rows, cols)), shape=(len(X), self.vocabulary_size+1) ) #0+10 ==11 columns


#Counter({'chuck': 1, 'murcko': 1, 'wrote': 1, 'stuff': 1, 'yawn': 1, 'r': 1}), ==>
vocabulary_: {'the': 1, 'of': 2, 'and': 3, 'to': 4, 'url': 5, 'all': 6, 'in': 7, 'christian': 8, 'on': 9, 'by': 10}
#none of the words in this Counter appear in vocabulary_, so all of its counts land in column 0 ==>
X_few_vectors.toarray()

csr_matrix sums duplicate values at the same position (same row and same column), so here 6 = 1+1+1+1+1+1 (six duplicate entries)
########################### Extra ###########################
rows=[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
cols=[0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, 6, 1, 0, 0, 2, 0, 3, 0, 0, 0, 7, 0, 0, 8, 0, 0, 0, 0, 0, 0, 0, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
data=[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 1, 3, 11, 1, 2, 9, 1, 8, 1, 1, 1, 1, 1, 1, 3, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 3, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print( csr_matrix( (data, (rows, cols)), shape=(3, 11) ).toarray() )

###########################################################

vocab_transformer = WordCounterToVectorTransformer(vocabulary_size=10)
X_few_vectors = vocab_transformer.fit_transform(X_few_wordcounts)

#print(self.vocabulary_)
#{'the': 1, 'of': 2, 'and': 3, 'to': 4, 'url': 5, 'all': 6, 'in': 7, 'christian': 8, 'on': 9, 'by': 10}

What does this matrix mean? The first column holds, for each email, the number of words that are not part of self.vocabulary_.

The 11 in the second row, second column (right next to that out-of-vocabulary count), means that the first word in the vocabulary ('the': 1) appears 11 times in the second email.

The 67 in the third row, first column, means that the third email contains 67 words that are not part of self.vocabulary_. The 1 next to it means that the first word in the vocabulary ('the': 1) is present once in this email. The 2 next to it means that the second word ('of': 2) is present twice, and so on. You can look at the vocabulary to know which words we are talking about.

vocab_transformer.vocabulary_


Write a data preparation pipeline to convert each email into a feature vector. Your preparation pipeline should transform an email into a (sparse) vector indicating the presence or absence of each possible word. For example, if all emails only ever contain four words, “Hello,” “how,” “are,” “you,” then the email “Hello you Hello Hello you” would be converted into a vector [1,
0, 0, 1] (meaning [“Hello” is present, “how” is absent, “are” is absent, “you” is present]), or [3, 0, 0, 2] if you prefer to count the number of occurrences of each word. You may want to add hyperparameters to your preparation pipeline to control whether or not to strip off email headers, convert each email to lowercase, remove punctuation, replace all URLs with “URL,” replace all numbers with “NUMBER,” or even perform stemming (i.e., trim off word endings; there are Python libraries available to do this).
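As a toy illustration of this mapping, you can feed a single hand-made word counter through the transformer defined above (note that it reserves column 0 for out-of-vocabulary words and builds its vocabulary from the data, so the layout differs slightly from the [3, 0, 0, 2] example):

from collections import Counter
import numpy as np

toy_counts = np.array([ Counter("hello you hello hello you".split()) ])
toy_vectorizer = WordCounterToVectorTransformer(vocabulary_size=4).fit(toy_counts)
toy_vectorizer.transform(toy_counts).toarray()  # array([[0, 3, 2, 0, 0]]): 'hello' x3, 'you' x2, no unknown words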

We are now ready to train our first spam classifier! Let's transform the whole dataset:

from sklearn.pipeline import Pipeline

preprocess_pipeline = Pipeline([
    ("email_to_wordcount", EmailToWordCounterTransformer()),
    ("wordcount_to_vector", WordCounterToVectorTransformer()),
])
X_train_transformed = preprocess_pipeline.fit_transform(X_train)

 

Then try out several classifiers and see if you can build a great spam classifier, with both high
recall and high precision.

Note: to be future-proof, we set solver="lbfgs" since this will be the default value in Scikit-Learn 0.22.

#################STOP: TOTAL NO. of ITERATIONS EXCEEDS LIMIT.##################

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

log_clf = LogisticRegression(solver="lbfgs", random_state=42)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()


##################################################################set max_iter=1000

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

log_clf = LogisticRegression(solver="lbfgs", random_state=42, max_iter=1000)
score = cross_val_score(log_clf, X_train_transformed, y_train, cv=3, verbose=3)
score.mean()


Over 98.6%, not bad for a first try! :) However, remember that we are using the "easy" dataset. You can try with the harder datasets, the results won't be so amazing. You would have to try multiple models, select the best ones and fine-tune them using cross-validation, and so on.

But you get the picture, so let's stop now, and just print out the precision/recall we get on the test set:

from sklearn.metrics import precision_score, recall_score

X_test_transformed = preprocess_pipeline.transform(X_test)

log_clf = LogisticRegression(solver="lbfgs", random_state=42, max_iter=1000)
log_clf.fit(X_train_transformed, y_train)

y_pred = log_clf.predict(X_test_transformed)

print( "Precision: {:.2f}%".format(100*precision_score(y_test, y_pred)) )
print( "Recall: {:.2f}%".format(100*recall_score(y_test, y_pred)))
