Feature extraction in machine learning

Feature extraction converts arbitrary data (such as text or images) into numerical features that can be used for machine learning. Its purpose is to turn raw data into a representation that algorithms can work with.

Feature extraction can be roughly divided into three categories:

  • Dictionary feature extraction (feature discretization)
  • Text feature extraction
  • Image feature extraction (deep learning)

In this article, we only discuss the first two feature extraction methods.

1. Dictionary feature extraction

Purpose: convert dictionary data into feature values (vectorization).

API

sklearn.feature_extraction.DictVectorizer(sparse=True,…)

  • DictVectorizer.fit_transform(X)
    • X: a dictionary or an iterable of dictionaries
    • Returns a sparse matrix (or a dense array if sparse=False)
  • DictVectorizer.get_feature_names(): returns the list of feature names

Example

Suppose we have the following list of dictionaries and want to extract features from it:

        data = [{"job": "算法工程师", "salary": 10000}, {"job": "前端", "salary": 8000},
                {"job": "数据库工程师", "salary": 8500}, {"job": "数据分析", "salary": 9500},
                {"job": "架构师", "salary": 15000}]

from sklearn.feature_extraction import DictVectorizer


def dict_f():
    """
    Extract features from dict-type data
    """
    # Get the data
    data = [{"job": "算法工程师", "salary": 10000}, {"job": "前端", "salary": 8000}, 
            {"job": "数据库工程师", "salary": 8500}, {"job": "数据分析", "salary": 9500},
            {"job": "架构师", "salary": 15000}]
    # Instantiate the dict feature extractor, returning a dense array
    transfer = DictVectorizer(sparse=False)
    # Extract the features
    new_data = transfer.fit_transform(data)
    # Convert the float values to int for readability
    new_data = new_data.astype(int)
    print(type(new_data))
    print(transfer.feature_names_)
    print("Extracted features:\n", new_data)


dict_f()

from sklearn.feature_extraction import DictVectorizer


def dict_f():
    """
    Extract features from dict-type data
    """
    # Get the data
    data = [{"job": "算法工程师", "salary": 10000}, {"job": "前端", "salary": 8000}, 
            {"job": "数据库工程师", "salary": 8500}, {"job": "数据分析", "salary": 9500},
            {"job": "架构师", "salary": 15000}]
    # Instantiate the dict feature extractor, this time keeping the sparse output
    transfer = DictVectorizer(sparse=True)
    # Extract the features
    new_data = transfer.fit_transform(data)
    new_data = new_data.astype(int)
    print(type(new_data))
    print(transfer.feature_names_)
    print("Extracted features:\n", new_data)


dict_f()

Comparing sparse=False with sparse=True shows the difference between the two return types. With sparse=True, a sparse matrix is returned, which stores only the non-zero entries and therefore saves memory. With sparse=False, the result is a dense array in one-hot form: each distinct string value (here, each job title) becomes its own 0/1 column, while the numeric salary column is passed through unchanged. The dense values are floats by default, which is why the example converts them to int for readability.
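If you already have the sparse result, there is no need to re-fit with sparse=False: a scipy sparse matrix can be densified directly with toarray(). A minimal sketch, reusing the transfer and data objects from the examples above:

# Densify the sparse result; the layout is identical to the sparse=False output
sparse_result = transfer.fit_transform(data)        # scipy sparse matrix
dense_result = sparse_result.toarray().astype(int)  # dense ndarray of ints
print(dense_result)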

2. Text feature extraction

Purpose: convert text data into feature values (vectorization).

API

  • sklearn.feature_extraction.text.CountVectorizer(stop_words=[])

    • Returns a word-frequency matrix
    • CountVectorizer.fit_transform(X)
      • X: an iterable of text strings
      • Return value: a sparse matrix
    • CountVectorizer.get_feature_names(): returns the vocabulary list
  • sklearn.feature_extraction.text.TfidfVectorizer (covered in section 3 below)

English example

from sklearn.feature_extraction.text import CountVectorizer

def english_CountVectorizer_f():
    """
    Extract features from English text
    """
    # Get the data
    data = ["There are moments in life when you miss someone so much that you just want to pick them from your dreams and hug them for real! Dream what you want to dream;go where you want to go;be what you want to be,because you have only one life and one chance to do all the things you want to do."]
    # Instantiate the text feature extractor
    transfer = CountVectorizer()
    # Extract the features
    new_data = transfer.fit_transform(data)
    print("Feature names:", transfer.get_feature_names())
    print("Extracted features:\n", new_data)


english_CountVectorizer_f()
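Note that the print above shows a sparse matrix: unlike DictVectorizer, CountVectorizer has no sparse parameter, and fit_transform always returns a sparse result. To inspect the word-frequency matrix itself, densify it with toarray(). A minimal self-contained sketch (the two sample sentences here are illustrative, not from the article):

from sklearn.feature_extraction.text import CountVectorizer

# Illustrative sentences for demonstration purposes
data = ["life is short, i like python",
        "life is too long, i dislike python"]
transfer = CountVectorizer()
new_data = transfer.fit_transform(data)
print(transfer.get_feature_names())  # vocabulary, one entry per column
print(new_data.toarray())            # dense word-frequency matrix, one row per sentence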

Chinese example

from sklearn.feature_extraction.text import CountVectorizer

# jieba is a Chinese word-segmentation (tokenization) tool
import jieba

def cut_word(text):
    """
    Tokenize Chinese text with jieba
    """
    text = " ".join(list(jieba.cut(text)))
    return text
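# Usage example (the sample sentence comes from the cut_word docstring later in this article):
# cut_word("我爱北京天安门") -> "我 爱 北京 天安门"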


def chinese_CountVectorizer_f():
    """
    Extract features from Chinese text
    """
    # Get the data
    data = ['人生永没有终点。”只有等到你瞑目的那一刻,才能说你走完了人生路,在此之前,新的第一次始终有,新的挑战依然在,新的感悟不断涌现',
           '母爱是一种无私的感情,母爱像温暖的阳光,洒落在我们心田,虽然悄声无息,但它让一棵棵生命的幼苗感受到了雨后的温暖。']
  
    # Tokenize each sentence so that CountVectorizer can split on spaces
    text_list = []
    for sent in data:
        text_list.append(cut_word(sent))
    print(text_list)
    
    # Instantiate the text feature extractor
    transfer = CountVectorizer()
    # Extract the features
    new_data = transfer.fit_transform(text_list)
    print("Feature names:", transfer.get_feature_names())
    print("Extracted features:\n", new_data)


chinese_CountVectorizer_f()
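The stop_words parameter listed in the API section above removes unwanted tokens from the vocabulary. A brief sketch, reusing the tokenized text_list from the function above (the chosen stop words are only illustrative):

# Illustrative: exclude selected tokens from the vocabulary via stop_words
transfer = CountVectorizer(stop_words=["我们", "温暖"])
new_data = transfer.fit_transform(text_list)
print(transfer.get_feature_names())  # the stop words no longer appear here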

 

3. TF-IDF text feature extraction

  • The main idea of TF-IDF: if a word or phrase appears frequently in one article but rarely in other articles, it is considered to have good discriminating power and is well suited for classification.
  • Purpose of TF-IDF: to evaluate how important a word is to a document in a corpus or document collection.

Formula

  • Term frequency (tf): the frequency with which a given word appears in a document.
  • Inverse document frequency (idf): a measure of how much discriminating information a word carries across the corpus. The idf of a word is obtained by dividing the total number of documents by the number of documents containing the word, and then taking the base-10 logarithm of the quotient.
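Putting the two definitions together, with N the total number of documents and df(t) the number of documents containing the term t:

tf-idf(t, d) = tf(t, d) × idf(t),  where  idf(t) = lg(N / df(t))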

Example:
Suppose an article contains 100 words in total and the word "非常" ("very") appears 5 times; the term frequency of "非常" in this document is then 5/100 = 0.05.
The inverse document frequency is computed by dividing the total number of documents in the collection by the number of documents in which "非常" appears.
So if "非常" appears in 10,000 documents and the collection contains 10,000,000 documents in total,
the inverse document frequency is lg(10,000,000 / 10,000) = 3.
The tf-idf score of "非常" for this document is therefore 0.05 × 3 = 0.15.

Example

from sklearn.feature_extraction.text import TfidfVectorizer
import jieba

def cut_word(text):
    """
    Tokenize Chinese text:
    "我爱北京天安门" -> "我 爱 北京 天安门"
    :param text:
    :return: text
    """
    # Tokenize the Chinese string with jieba
    text = " ".join(list(jieba.cut(text)))

    return text

def text_chinese_tfidf_demo():
    """
    Extract TF-IDF features from Chinese text
    :return: None
    """
    data = ["一种还是一种今天很残酷,明天更残酷,后天很美好,但绝对大部分是死在明天晚上,所以每个人不要放弃今天。",
            "我们看到的从很远星系来的光是在几百万年之前发出的,这样当我们看到宇宙时,我们是在看它的过去。",
            "如果只用一种方式了解某样事物,你就不会真正了解它。了解事物真正含义的秘密取决于如何将其与我们所了解的事物相联系。"]
    # Tokenize the raw sentences
    text_list = []
    for sent in data:
        text_list.append(cut_word(sent))
    print(text_list)

    # 1. Instantiate a transformer, filtering out a few stop words
    transfer = TfidfVectorizer(stop_words=['一种', '不会', '不要'])
    # 2. Call fit_transform
    data = transfer.fit_transform(text_list)
    print("Text feature extraction result:\n", data.toarray())
    print("Feature names:\n", transfer.get_feature_names())

    return None


text_chinese_tfidf_demo()

Output (sklearn's TfidfVectorizer applies idf smoothing and L2 normalization by default, so the scores are not the bare tf × lg(N/df) values from the formula above):

['一种 还是 一种 今天 很 残酷 , 明天 更 残酷 , 后天 很 美好 , 但 绝对 大部分 是 死 在 明天 晚上 , 所以 每个 人 不要 放弃 今天 。', '我们 看到 的 从 很 远 星系 来 的 光是在 几百万年 之前 发出 的 , 这样 当 我们 看到 宇宙 时 , 我们 是 在 看 它 的 过去 。', '如果 只用 一种 方式 了解 某样 事物 , 你 就 不会 真正 了解 它 。 了解 事物 真正 含义 的 秘密 取决于 如何 将 其 与 我们 所 了解 的 事物 相 联系 。']
Text feature extraction result:
 [[ 0.          0.          0.          0.43643578  0.          0.          0.
   0.          0.          0.21821789  0.          0.21821789  0.          0.
   0.          0.          0.21821789  0.21821789  0.          0.43643578
   0.          0.21821789  0.          0.43643578  0.21821789  0.          0.
   0.          0.21821789  0.21821789  0.          0.          0.21821789
   0.        ]
 [ 0.2410822   0.          0.          0.          0.2410822   0.2410822
   0.2410822   0.          0.          0.          0.          0.          0.
   0.          0.2410822   0.55004769  0.          0.          0.          0.
   0.2410822   0.          0.          0.          0.          0.48216441
   0.          0.          0.          0.          0.          0.2410822
   0.          0.2410822 ]
 [ 0.          0.644003    0.48300225  0.          0.          0.          0.
   0.16100075  0.16100075  0.          0.16100075  0.          0.16100075
   0.16100075  0.          0.12244522  0.          0.          0.16100075
   0.          0.          0.          0.16100075  0.          0.          0.
   0.3220015   0.16100075  0.          0.          0.16100075  0.          0.
   0.        ]]
Feature names:
 ['之前', '了解', '事物', '今天', '光是在', '几百万年', '发出', '取决于', '只用', '后天', '含义', '大部分', '如何', '如果', '宇宙', '我们', '所以', '放弃', '方式', '明天', '星系', '晚上', '某样', '残酷', '每个', '看到', '真正', '秘密', '绝对', '美好', '联系', '过去', '还是', '这样']

 
