Implementing TextCNN in Keras

https://github.com/MoyanZitto/keras-cn/blob/master/docs/legacy/blog/word_embedding.md  This link explains very clearly how to build and train a CNN with an embedding layer.
After building a TextCNN model with an embedding layer, the x_train passed to model.fit is a 2D array of integer indices for the words to be trained on. The code below converts words into these indices.

import keras.preprocessing.text as T
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences

text1 = 'some/thing to eat'
text2 = 'some thing to drink'
texts = [text1, text2]
print(' '.join(text1.split('/')))
tokenizer = Tokenizer(num_words=None) # num_words: None or an integer, the maximum number of words to keep; only the most frequent words are retained and less frequent ones are dropped
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
print(sequences)
word_index = tokenizer.word_index
data = pad_sequences(sequences, maxlen=10)
print(data)
print('Found %s unique tokens.' % len(word_index))
print(tokenizer.word_counts) # [('some', 2), ('thing', 2), ('to', 2), ('eat', 1), ('drink', 1)]
print(tokenizer.word_index) # {'some': 1, 'thing': 2, 'to': 3, 'eat': 4, 'drink': 5}
print(tokenizer.word_docs) # {'some': 2, 'thing': 2, 'to': 2, 'drink': 1, 'eat': 1}
print(tokenizer.index_docs) # number of documents each word index appears in
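
With the padded index matrix data in hand, a TextCNN with an embedding layer can be built roughly as sketched below. This is only a minimal sketch, not the exact model from the linked post; the embedding dimension, filter sizes, filter count, and number of classes are illustrative assumptions, and y_train is assumed to be one-hot encoded labels.

from keras.models import Model
from keras.layers import Input, Embedding, Conv1D, GlobalMaxPooling1D, Concatenate, Dense, Dropout

maxlen = 10                       # same as pad_sequences(maxlen=10) above
vocab_size = len(word_index) + 1  # +1 because Tokenizer indices start at 1
embedding_dim = 50                # assumed embedding dimension
num_classes = 2                   # assumed number of label classes

inp = Input(shape=(maxlen,))      # input is a 2D batch of word indices, e.g. data above
emb = Embedding(vocab_size, embedding_dim, input_length=maxlen)(inp)

# one Conv1D + global max-pooling branch per kernel size, then concatenate
pooled = []
for size in [2, 3, 4]:
    conv = Conv1D(filters=64, kernel_size=size, activation='relu')(emb)
    pooled.append(GlobalMaxPooling1D()(conv))
merged = Concatenate()(pooled)

x = Dropout(0.5)(merged)
out = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(data, y_train, batch_size=32, epochs=5)  # y_train: one-hot labels (assumed)

The parallel branches with different kernel sizes capture n-gram features of different lengths, which is the core idea of TextCNN.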


Reposted from www.cnblogs.com/kjkj/p/10528244.html