Exploring LangChain: Architecture, Components, and Applications

An introduction to each component and its purpose:

1. Model

The Model component is the core of LangChain: it abstracts away provider differences and exposes a uniform interface to large language models (LLMs).
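
Because every model wrapper implements the same call interface, switching providers is essentially a one-line change. The snippet below is a minimal sketch of that idea, assuming a LangChain version in which model wrappers expose the Runnable invoke method (as the chain example later in this section already does):

from langchain.llms import OpenAI
from langchain_ollama.llms import OllamaLLM

# Any wrapper is invoked the same way; only the constructor changes.
llm = OpenAI(model_name="gpt-3.5-turbo")  # or: OllamaLLM(model="gemma2")
print(llm.invoke("Say hello in French."))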

LLM Models
  • Purpose: provides interfaces to many LLM providers, such as OpenAI, Google PaLM 2, Ollama, and others.
  • OpenAI example:
from langchain.llms import OpenAI

# Initialize the OpenAI wrapper with the chosen model
openai_llm = OpenAI(model_name="gpt-3.5-turbo")

# The wrapper takes a plain prompt string and returns the completion text
response = openai_llm("What is the capital of France?")
print(response)  # Output: The capital of France is Paris.
  • Ollama example:
from langchain_ollama.llms import OllamaLLM
from langchain_core.prompts import ChatPromptTemplate

# Prompt template with a single {question} placeholder
template = """Question: {question}

Answer: Let's think step by step.
"""
prompt = ChatPromptTemplate.from_template(template)

# Point the wrapper at a running Ollama server and choose a local model
model = OllamaLLM(base_url="http://132.148.160.94:11434", model="gemma2")

# Compose prompt and model into a chain with the LCEL pipe operator
chain = prompt | model

output = chain.invoke({"question": "what is langchain"})
print(f"output: {output}")
ChatModel
  • Purpose: designed specifically for chat scenarios; it accepts a multi-turn conversation history as input.
  • Example:
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage, HumanMessage, AIMessage

# Multi-turn conversation history expressed as typed message objects
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="How can I learn Python?"),
    AIMessage(content="To learn Python, start with online tutorials like Codecademy or DataCamp. Practice regularly and work on small projects.")
]

chat_model = ChatOpenAI(model_name="gpt-3.5-turbo")
response = chat_model.invoke(messages + [HumanMessage(content="What's next after mastering basics?")])
print(response.content)  # Output: After mastering the basics, focus on advanced topics like decorators, generators, and asynchronous programming.
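
The same message-based interface works with other chat backends. As a hedged sketch (assuming the langchain_ollama package and the Ollama server used earlier), a local chat model can be dropped in without changing the message list:

from langchain_ollama import ChatOllama
from langchain.schema import SystemMessage, HumanMessage

# Local Ollama chat model; the message list format stays the same
chat = ChatOllama(base_url="http://132.148.160.94:11434", model="gemma2")
reply = chat.invoke([
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="How can I learn Python?"),
])
print(reply.content)
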
Setting Parameters
  • Purpose: allows tuning LLM behavior, such as the sampling temperature and the maximum output length.
  • Example:
from langchain.llms import OpenAI

# temperature controls randomness; max_tokens caps the completion length
openai_llm = OpenAI(model_name="gpt-3.5-turbo", temperature=0.7, max_tokens=100)
response = openai_llm("What is the capital of France?")
print(response)
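
A comparable sketch for the Ollama wrapper (parameter names as exposed by langchain_ollama, where num_predict plays the role of an output-token limit):

from langchain_ollama.llms import OllamaLLM

ollama_llm = OllamaLLM(
    base_url="http://132.148.160.94:11434",
    model="gemma2",
    temperature=0.7,   # higher values give more varied output
    num_predict=100,   # cap the number of generated tokens
)
print(ollama_llm.invoke("What is the capital of France?"))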


Reposted from blog.csdn.net/canduecho/article/details/142632044