Using the jieba Tokenizer with the Haystack Full-Text Search Framework

1. Install jieba

pip install jieba


2. Add a jieba-based analyzer

cd into the backends directory of your haystack installation, create a new file named ChineseAnalyzer.py, and enter the following:

import jieba
from whoosh.analysis import Tokenizer, Token

class ChineseTokenizer(Tokenizer):
    def __call__(self, value, positions=False, chars=False,
                 keeporiginal=False, removestops=True,
                 start_pos=0, start_char=0, mode='', **kwargs):
        t = Token(positions, chars, removestops=removestops, mode=mode, **kwargs)
        # cut_all=True is jieba's full mode: it emits every possible word,
        # which favors recall when building a search index
        seglist = jieba.cut(value, cut_all=True)
        for w in seglist:
            t.original = t.text = w
            t.boost = 1.0
            if positions:
                t.pos = start_pos + value.find(w)
            if chars:
                t.startchar = start_char + value.find(w)
                t.endchar = start_char + value.find(w) + len(w)
            yield t

def ChineseAnalyzer():
    return ChineseTokenizer()
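
As a quick sanity check, you can run the analyzer from a Python shell in the same directory; the exact segments it prints depend on your jieba version and dictionary:

from ChineseAnalyzer import ChineseAnalyzer

# full mode typically yields overlapping segments, e.g. 全文 / 检索 / 框架
for token in ChineseAnalyzer()('全文检索框架'):
    print(token.text)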


3. Modify the haystack backend file

In the same backends folder, cp whoosh_backend.py and modify the copy to use jieba:

cp whoosh_backend.py whoosh_cn_backend.py


# the file name is just a convention; pick your own if you prefer

Then edit whoosh_cn_backend.py.

# import the new analyzer

from .ChineseAnalyzer import ChineseAnalyzer

Find

analyzer=StemmingAnalyzer()

and change it to

analyzer=ChineseAnalyzer()
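
For context, that line lives in the build_schema() method, where haystack assigns an analyzer to every indexed text field. After the edit the field definition looks roughly like the sketch below; the exact keyword arguments differ between haystack versions, so match it against your own copy rather than pasting verbatim:

# inside build_schema() of whoosh_cn_backend.py (sketch; arguments
# vary by haystack version)
schema_fields[field_class.index_fieldname] = TEXT(
    stored=True,
    analyzer=ChineseAnalyzer(),  # was analyzer=StemmingAnalyzer()
    field_boost=field_class.boost,
)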



4. In Django's settings, point HAYSTACK_CONNECTIONS at the new backend file.

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.whoosh_cn_backend.WhooshEngine',
        'PATH': os.path.join(BASE_DIR, 'whoosh_index'),
    },
}
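
Note that this snippet assumes settings.py already imports os and defines a BASE_DIR. On Django 3.1+, where the default BASE_DIR is a pathlib.Path, an equivalent form is:

HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.whoosh_cn_backend.WhooshEngine',
        'PATH': str(BASE_DIR / 'whoosh_index'),
    },
}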

Once the settings are in place, rebuild the index and jieba segmentation takes effect:

python manage.py rebuild_index
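
For subsequent incremental updates a full rebuild is unnecessary; haystack also provides an update_index management command:

python manage.py update_index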

Reposted from www.cnblogs.com/jrri/p/11613993.html