Assignment 14: Spam Classification (Part 2)

1. Reading the data

# 1. Load the data (`csv` and `preprocessing` are defined in section 2 below)
import csv

sms = open(r'D:\Download\SMSSpamCollection', 'r', encoding='utf-8')  # open the data file
sms_data = []
sms_label = []
csv_reader = csv.reader(sms, delimiter='\t')
for r in csv_reader:
    sms_label.append(r[0])                # label: "ham" or "spam"
    sms_data.append(preprocessing(r[1]))  # preprocess each message
sms.close()

2. Data preprocessing

import csv

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

sms = open(r'D:\Download\SMSSpamCollection', 'r', encoding='utf-8')  # open the data file
csv_reader = csv.reader(sms, delimiter='\t')
sms_data = []   # preprocessed message texts
sms_label = []  # message labels ("ham" / "spam")
stops = stopwords.words('english')  # English stop-word list
lemmatizer = WordNetLemmatizer()    # lemmatizer

def preprocessing(text):
    # tokenize: split into sentences, then into words
    tokens = [word for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
    tokens = [token for token in tokens if token not in stops]  # drop stop words
    nltk.pos_tag(tokens)  # POS tagging (result not used further in this pipeline)
    tokens = [lemmatizer.lemmatize(token, pos='n') for token in tokens]  # lemmatize as nouns
    tokens = [lemmatizer.lemmatize(token, pos='a') for token in tokens]  # lemmatize as adjectives
    tokens = [lemmatizer.lemmatize(token, pos='v') for token in tokens]  # lemmatize as verbs
    return ' '.join(tokens)  # join back into one string so the vectorizers in section 4 can consume it

for line in csv_reader:
    sms_label.append(line[0])                # message label
    sms_data.append(preprocessing(line[1]))  # preprocessed message text
sms.close()       # close the file
print(sms_label)  # labels
print(sms_data)   # preprocessed messages

3. Data split: training set and test set

import numpy as np
from sklearn.model_selection import train_test_split

sms_data = np.array(sms_data)
sms_label = np.array(sms_label)
x_train, x_test, y_train, y_test = train_test_split(sms_data, sms_label, test_size=0.2, random_state=0, stratify=sms_label)  # stratified 80/20 train/test split
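`stratify=sms_label` keeps the ham/spam ratio identical in both splits, which matters because the SMS collection is heavily imbalanced (mostly ham). A minimal sketch with hypothetical labels, not the real data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical imbalanced labels: 80% ham, 20% spam
y = np.array(["ham"] * 80 + ["spam"] * 20)
X = np.arange(100).reshape(-1, 1)

_, _, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

# Both splits preserve the 20% spam ratio exactly
print(np.mean(y_tr == "spam"))  # 0.2
print(np.mean(y_te == "spam"))  # 0.2
```

Without `stratify`, a random split of a skewed dataset can leave the test set with noticeably more or less spam than the training set.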

4. Text feature extraction

sklearn.feature_extraction.text.CountVectorizer

https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html?highlight=sklearn%20feature_extraction%20text%20tfidfvectorizer

sklearn.feature_extraction.text.TfidfVectorizer

https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html?highlight=sklearn%20feature_extraction%20text%20tfidfvectorizer#sklearn.feature_extraction.text.TfidfVectorizer

from sklearn.feature_extraction.text import TfidfVectorizer

tfidf2 = TfidfVectorizer()

Observing the relationship between emails and their vectors

Recovering an email from its vector

from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(x_train)
X_test = vectorizer.transform(x_test)

print(X_train.toarray().shape)
print(X_test.toarray().shape)

5. Model selection

from sklearn.naive_bayes import GaussianNB

from sklearn.naive_bayes import MultinomialNB

Explain why this model was chosen.

m_model = MultinomialNB()
m_model.fit(X_train, y_train)      # train on the tf-idf features from section 4
y_m_pre = m_model.predict(X_test)  # predict on the test features

Because the features of the spam data are discrete (word counts and tf-idf weights), the multinomial naive Bayes classifier is the appropriate choice; GaussianNB instead assumes continuous, normally distributed features.
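The contrast can be sketched on a few hypothetical messages (toy stand-ins, not the real SMS data): MultinomialNB consumes the sparse, discrete count features directly, while GaussianNB requires a dense array and models each feature as a Gaussian.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import GaussianNB, MultinomialNB

# Hypothetical labeled messages standing in for the preprocessed SMS data
texts = ["win free prize now", "free cash win win", "lunch at noon today", "see you at lunch"]
labels = ["spam", "spam", "ham", "ham"]

vec = CountVectorizer()
X = vec.fit_transform(texts)  # sparse matrix of discrete word counts

# MultinomialNB works on the sparse count features directly
clf = MultinomialNB().fit(X, labels)
print(clf.predict(vec.transform(["free prize"])))  # -> ['spam']

# GaussianNB assumes continuous features and needs a dense array,
# which is wasteful and a poor fit for sparse word-count data
GaussianNB().fit(X.toarray(), labels)
```

On a real vocabulary of thousands of terms, the `toarray()` call alone makes GaussianNB impractical, independent of its modeling mismatch.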

6. Model evaluation: confusion matrix and classification report

from sklearn.metrics import confusion_matrix

cm = confusion_matrix(y_test, y_m_pre)  # don't shadow the imported function name

Explain the meaning of the confusion matrix.

from sklearn.metrics import classification_report

Explain what accuracy, precision, recall, and the F-score each measure.

The confusion matrix is a contingency table used in data science and machine learning to summarize the predictions of a classification model: it cross-tabulates the records of a dataset along two axes, their true class and the class the model predicted.

Accuracy = correctly predicted samples / all samples = (TP + TN) / (TP + TN + FP + FN)

Precision = positives predicted as positive / all samples predicted positive = TP / (TP + FP)

Recall = positives predicted as positive / all truly positive samples = TP / (TP + FN)

F-score = 2 * Precision * Recall / (Precision + Recall)  (the harmonic mean of precision and recall)
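The four formulas above can be checked by hand on a tiny hypothetical test set and compared against sklearn's metric functions, taking "spam" as the positive class:

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# Tiny hypothetical test set; "spam" is the positive class
y_true = ["spam", "spam", "ham", "ham", "spam"]
y_pred = ["spam", "ham", "ham", "spam", "spam"]

# Counting by hand: TP = 2, FN = 1, FP = 1, TN = 1
tp, fn, fp, tn = 2, 1, 1, 1

acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy  = 3/5 = 0.6
p = tp / (tp + fp)                     # precision = 2/3
r = tp / (tp + fn)                     # recall    = 2/3
f = 2 * p * r / (p + r)                # F-score   = 2/3

# sklearn reproduces the hand calculation
print(acc, accuracy_score(y_true, y_pred))
print(p, precision_score(y_true, y_pred, pos_label="spam"))
print(r, recall_score(y_true, y_pred, pos_label="spam"))
print(f, f1_score(y_true, y_pred, pos_label="spam"))
```

For multi-class or per-class results, `classification_report` prints these same numbers for each label at once.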
 


from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report

# Confusion matrix
cm = confusion_matrix(y_test, y_m_pre)
print('nb_confusion_matrix:')
print(cm)
# Text report of the main classification metrics
cr = classification_report(y_test, y_m_pre)
print('nb_classification_report:')
print(cr)

7. Comparison and summary

If CountVectorizer is used for feature extraction instead of TfidfVectorizer, how do the results compare?

CountVectorizer yields a higher per-sample prediction error than TfidfVectorizer, because TfidfVectorizer down-weights words that are common but uninformative while preserving the weight of the words that characterize a given text, which makes it better suited to spam classification.


Reposted from www.cnblogs.com/codekid/p/12941124.html