Chinese-to-English Translation Model

Helsinki-NLP/opus-mt-zh-en · Hugging Face: https://huggingface.co/Helsinki-NLP/opus-mt-zh-en

This article shows how to use a translation model from Hugging Face. Hugging Face is a well-known organization in the NLP field; it has done outstanding work on pretrained models and has open-sourced many of them, including ready-to-use models already trained for specific NLP tasks. This article uses one of Hugging Face's directly usable translation models.

Hugging Face's translation models are listed at https://huggingface.co/models?pipeline_tag=translation ; the vast majority of them were published by Helsinki-NLP.

Reference: "NLP (41): A First Attempt at Using HuggingFace Translation Models" (CSDN blog): https://blog.csdn.net/jclian91/article/details/114647084
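Before writing a custom loading script like the one below, the quickest way to try the model is the `pipeline` API. A minimal sketch (this downloads the model from the Hugging Face Hub on first use, so it assumes network access; the example sentence is arbitrary):

```python
from transformers import pipeline

# Build a translation pipeline backed by the opus-mt-zh-en model
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")

# Translate a single Chinese sentence; the result is a list of dicts
result = translator("我叫沃尔夫冈,我住在柏林。")
print(result[0]["translation_text"])
```

The script in this article does the same thing manually with `AutoTokenizer` and `AutoModelForSeq2SeqLM`, which is useful when the model weights are stored on a local disk.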

# -*- coding: utf-8 -*-
import sys
sys.path.append("/home/sniss/local_disk/stable_diffusion_api/")

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("/home/sniss/local_disk/stable_diffusion_api/models/opus-mt-zh-en")
model = AutoModelForSeq2SeqLM.from_pretrained("/home/sniss/local_disk/stable_diffusion_api/models/opus-mt-zh-en")

def translation_zh_en(text):
    # Tokenize the input, truncating so it does not exceed the model's
    # maximum input length of 512 tokens.
    # Note: the older prepare_seq2seq_batch() helper is deprecated and has
    # been removed in recent transformers versions; calling the tokenizer
    # directly is the current API.
    batch = tokenizer([text], return_tensors='pt', max_length=512, truncation=True)

    # Perform the translation and decode the output
    translation = model.generate(**batch)
    result = tokenizer.batch_decode(translation, skip_special_tokens=True)
    return result

if __name__ == "__main__":
    text = "从时间上看,中国空间站的建造比国际空间站晚20多年。"
    result = translation_zh_en(text)
    print(result)
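
Because the model caps inputs at 512 tokens, long documents are best split into sentences and translated chunk by chunk rather than truncated. A minimal sentence splitter for Chinese text (a hypothetical helper, not part of the original script) might look like:

```python
import re

def split_sentences(text):
    # Split on Chinese sentence-ending punctuation (。!?), keeping the
    # punctuation attached to the sentence it ends, and drop empty pieces.
    parts = re.split(r'(?<=[。!?])', text)
    return [p for p in parts if p]
```

Each chunk returned by `split_sentences` can then be passed to `translation_zh_en` in turn, and the English outputs joined back together.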


Reposted from blog.csdn.net/u012193416/article/details/130315292