Pitfalls converting a Hugging Face seq2seq model from PyTorch to TorchScript

The conversion demo from the official docs:

from transformers import BertModel, BertTokenizer, BertConfig
import torch

enc = BertTokenizer.from_pretrained("bert-base-uncased")

# Tokenizing input text
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = enc.tokenize(text)

# Masking one of the input tokens
masked_index = 8
tokenized_text[masked_index] = "[MASK]"
indexed_tokens = enc.convert_tokens_to_ids(tokenized_text)
segments_ids = [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]

# Creating a dummy input
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
dummy_input = [tokens_tensor, segments_tensors]

# If you are instantiating the model with *from_pretrained* you can also easily set the TorchScript flag
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.save_pretrained("./bert-base-uncased")
# Creating the trace
traced_model = torch.jit.trace(model, [tokens_tensor, segments_tensors])
torch.jit.save(traced_model, "traced_bert.pt")

loaded_model = torch.jit.load("traced_bert.pt")
loaded_model.eval()

all_encoder_layers, pooled_output = loaded_model(*dummy_input)
print(all_encoder_layers, pooled_output)
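Note that with torchscript=True the loaded model returns a plain tuple rather than a ModelOutput: for bert-base-uncased the first element is the last-layer hidden states with shape [1, sequence_length, 768] and the second is the pooled [CLS] output with shape [1, 768] (the name all_encoder_layers is a holdover from an older API).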

The model used there is BertModel, whereas the class we need is

AutoModelForSeq2SeqLM

The translation model I used is

Helsinki-NLP/opus-mt-en-zh

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

# Convert the .bin checkpoint into a JIT-traced model to speed up inference
text = " Who was Jim Henson ? "
model_name = 'Helsinki-NLP/opus-mt-en-zh'
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoded = tokenizer(text, return_tensors="pt")
inputs = encoded.input_ids
attention_mask = encoded.attention_mask
# Not sure what string belongs here as the decoder input
decoder_input_ids = tokenizer('吉姆・亨森是谁', return_tensors='pt').input_ids
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, torchscript=True)
model.save_pretrained("./opus-mt-en-zh")

traced_model = torch.jit.trace(model, (inputs, attention_mask, decoder_input_ids))
torch.jit.save(traced_model, "opus-mt-en-zh.pt")

# Test: translate text into Chinese
loaded_model = torch.jit.load("opus-mt-en-zh.pt")
loaded_model.eval()
pooled_output = loaded_model(inputs, attention_mask, decoder_input_ids)
print("pooled_output:", pooled_output)
print(tokenizer.decode(pooled_output))  # fails: this is a tuple of raw tensors, not token ids

Imitating the official demo like this, you will find that the final pooled_output is wrong: it looks as if the model's raw tensors were dumped out, it cannot be decoded, and the final print fails. The underlying reason is that torch.jit.trace only records a single forward pass, and for a seq2seq model that single pass returns logits rather than the autoregressively generated token ids that generate() produces.
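For contrast, here is a minimal sketch (same model name, no tracing; the rest is my own illustration) of how the translation is normally obtained. generate() runs the decoding loop in Python outside of forward(), which is exactly the part a single jit.trace call cannot capture:

from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

model_name = 'Helsinki-NLP/opus-mt-en-zh'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()

batch = tokenizer("Who was Jim Henson ?", return_tensors="pt")
with torch.no_grad():
    # generate() loops over decoder steps in Python, so jit.trace never sees it
    generated_ids = model.generate(batch.input_ids, attention_mask=batch.attention_mask)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))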

Helsinki-NLP/opus-mt-en-zh

Its architecture is MarianMT:

MarianMT — transformers 2.9.1 documentation

In general, a seq2seq model cannot be converted to TorchScript by following the official Hugging Face recipe shown above.

You should instead refer to the official PyTorch tutorial:

Deploying a Seq2Seq Model with TorchScript — PyTorch Tutorials 2.0.0+cu117 documentation

In the official example the encoder and the decoder are each traced, and the surrounding decoding loop is then wrapped with torch.jit.script; a rough sketch of the pattern is below.
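A minimal sketch of that pattern, assuming hypothetical EncoderWrapper/DecoderWrapper adapter modules (not real library classes) that give the Marian encoder and decoder a trace-friendly forward returning hidden states and logits respectively; this shows only the shape of the approach, not a working implementation:

import torch
import torch.nn as nn

class GreedyDecoder(nn.Module):
    # The traced submodules are stored as attributes; the Python decoding
    # loop itself is compiled with torch.jit.script, which trace alone
    # cannot capture.
    def __init__(self, encoder, decoder, start_id: int, eos_id: int, max_len: int):
        super().__init__()
        self.encoder = encoder  # traced: (input_ids, attention_mask) -> encoder hidden states
        self.decoder = decoder  # traced: (decoder_ids, enc_out, attention_mask) -> logits
        self.start_id = start_id
        self.eos_id = eos_id
        self.max_len = max_len

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        enc_out = self.encoder(input_ids, attention_mask)
        # Start from the decoder start token and append one greedy token per step
        ys = torch.full((input_ids.size(0), 1), self.start_id, dtype=torch.long)
        for _ in range(self.max_len):
            logits = self.decoder(ys, enc_out, attention_mask)
            next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
            ys = torch.cat([ys, next_token], dim=1)
            if bool((next_token == self.eos_id).all()):
                break
        return ys

# Hypothetical usage: trace each half once, then script the loop around them.
# traced_encoder = torch.jit.trace(EncoderWrapper(model), (inputs, attention_mask))
# traced_decoder = torch.jit.trace(DecoderWrapper(model), (ys, enc_out, attention_mask))
# scripted = torch.jit.script(GreedyDecoder(traced_encoder, traced_decoder, start_id, eos_id, 64))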

Implementing this concretely is fairly involved, so I did not dig deeper and gave up on this approach.

Reposted from blog.csdn.net/lishijie258/article/details/129805664