3 ways to run LangChain | JD Cloud technical team

When developing LLM applications with LangChain, you normally need machines capable of hosting an LLM deployment (such as GLM). Many learners give up at this first step. So how can you skip it, learn LLM application development first, and get started with LangChain quickly? This article explains 3 ways to run LangChain; corrections are welcome.

LangChain official documentation: https://python.langchain.com/

Basic functions

LLM calls

  • Supports multiple model interfaces, such as OpenAI, HuggingFace, AzureOpenAI…
  • Fake LLMs for testing
  • Cache support, such as in-memory, SQLite, Redis, and SQL caches
  • Usage tracking
  • Streaming mode (responses are returned token by token, similar to a typing effect)
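The cache support listed above boils down to memoizing completions keyed by the prompt, so repeated prompts skip the (slow, paid) model call. A minimal illustrative sketch of the idea — this is not the LangChain API, and `CachedLLM`/`llm_fn` are hypothetical names:

```python
class CachedLLM:
    """Illustrative sketch (not the LangChain API): memoize LLM calls in memory."""

    def __init__(self, llm_fn):
        self.llm_fn = llm_fn   # the underlying (expensive) completion function
        self.cache = {}        # prompt -> cached completion
        self.calls = 0         # how many real LLM calls were made

    def __call__(self, prompt):
        if prompt not in self.cache:
            self.calls += 1
            self.cache[prompt] = self.llm_fn(prompt)
        return self.cache[prompt]
```

Calling the wrapper twice with the same prompt triggers only one real model call; a SQLite or Redis cache applies the same idea with a persistent store.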

Prompt management, with support for various custom templates

A large number of document loaders, such as Email, Markdown, PDF, YouTube…

Support for indexes

  • Document splitters
  • Vectorization
  • Integration with vector stores and search, such as Chroma, Pinecone, Qdrant
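Vectorization plus search means: embed each document as a vector, embed the query the same way, and return the document whose vector is most similar. A toy sketch of that pipeline — real vector stores like Chroma or Pinecone use model embeddings instead of the bag-of-words counting used here, and `embed`/`search` are hypothetical names:

```python
from collections import Counter
import math

def embed(text):
    # Toy "embedding": bag-of-words term counts (real stores use model embeddings).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    # Return the document most similar to the query.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))
```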

Chains

  • LLMChain
  • Various tool chains
  • LangChainHub

For details, see:
https://www.langchain.cn/t/topic/35

3 ways to test a LangChain project:

1 Use the FakeListLLM provided by LangChain

To save time, let's go straight to the code.

import os
from decouple import config
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from langchain.agents import load_tools

Here we mock ChatGPT with a fake LLM:

#from langchain.llms import OpenAI
from langchain.llms.fake import FakeListLLM
os.environ["OPENAI_API_KEY"] = config('OPENAI_API_KEY')

REPL, short for "Read–Eval–Print Loop", is a simple, interactive programming environment.

In the REPL environment, the user can enter one or more programming statements, and the system will immediately execute these statements and output the results. This approach is great for quick code experimentation and debugging.
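The read-eval-print cycle can be sketched in a few lines of Python. This is only an illustration of the concept (`mini_repl` is a hypothetical name, and it evaluates a fixed list of inputs instead of reading interactively):

```python
def mini_repl(statements):
    """Illustrative sketch of a read-eval-print loop over a list of inputs."""
    outputs = []
    for statement in statements:   # read
        result = eval(statement)   # eval
        outputs.append(result)     # "print" (collected here for inspection)
    return outputs
```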

tools = load_tools(["python_repl"])
responses = [
    "Action: Python REPL\nAction Input: how ChatGPT works",
    "Final Answer: mock answer"
]
llm = FakeListLLM(responses=responses)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("how ChatGPT works")
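Why do the canned responses contain lines like "Action: …" and "Final Answer: …"? A ZERO_SHOT_REACT_DESCRIPTION agent parses each LLM reply for exactly these markers to decide whether to call a tool or stop. A simplified sketch of that parsing — the format is assumed from the example above, and `parse_react_step` is a hypothetical name, not LangChain's actual parser:

```python
def parse_react_step(text):
    """Sketch: split one ReAct-style LLM reply into a tool call
    ('action', tool, tool_input) or a final result ('finish', answer)."""
    tool = tool_input = None
    for line in text.splitlines():
        if line.startswith("Final Answer:"):
            return ("finish", line[len("Final Answer:"):].strip())
        if line.startswith("Action:"):
            tool = line[len("Action:"):].strip()
        if line.startswith("Action Input:"):
            tool_input = line[len("Action Input:"):].strip()
    return ("action", tool, tool_input)
```

This is why FakeListLLM works as a mock: as long as the canned strings follow the expected format, the agent loop runs exactly as it would with a real model.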

2 Use the HumanInputLLM provided by LangChain to query Wikipedia

from langchain.llms.human import HumanInputLLM
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.agents import AgentType
from wikipedia import set_lang

Use the Wikipedia tool

tools = load_tools(["wikipedia"])

The language must be set to Chinese here, otherwise the Chinese entries queried below cannot be found

set_lang("zh")

Initialize the LLM

llm = HumanInputLLM(prompt_func=lambda prompt: print(f"\n===PROMPT====\n{prompt}\n=====END OF PROMPT======"))

Initialize the agent

agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent.run("喜羊羊")
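The trick here is that the "LLM" is you: HumanInputLLM displays each prompt via `prompt_func`, then waits for a typed reply and uses it as the model's completion. A minimal sketch of that idea — `HumanLLM` is a hypothetical stand-in, with the input function injectable so it can be tested without a keyboard:

```python
class HumanLLM:
    """Illustrative sketch of HumanInputLLM's idea: the 'model' is a human."""

    def __init__(self, prompt_func, input_func=input):
        self.prompt_func = prompt_func  # displays the prompt to the human
        self.input_func = input_func    # collects the human's typed reply

    def __call__(self, prompt):
        self.prompt_func(prompt)        # show the agent's prompt
        return self.input_func()        # the typed text becomes the "completion"
```

Running an agent on top of this lets you step through the ReAct loop by hand, which is a good way to learn what prompts the agent actually sends.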

3 Use Hugging Face

https://huggingface.co/docs

1. Register an account

2. Create Access Tokens

Demo: summarizing a document using a model

from langchain.document_loaders import UnstructuredFileLoader
from langchain.chains.summarize import load_summarize_chain
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain import HuggingFaceHub
import os
from decouple import config

from langchain.agents import load_tools

Here we replace ChatGPT with a model from the Hugging Face Hub:

os.environ["HUGGINGFACEHUB_API_TOKEN"] = config('HUGGINGFACEHUB_API_TOKEN')

Load the text

loader = UnstructuredFileLoader("docment_store\helloLangChain.txt")

Convert text to Document object

document = loader.load()
print(f'documents:{len(document)}')

Initialize the text splitter

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size = 500,
    chunk_overlap = 0
)

Split the text

split_documents = text_splitter.split_documents(document)
print(f'documents:{len(split_documents)}')
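The splitter's job is to cut the document into pieces of at most `chunk_size` characters, with adjacent pieces sharing `chunk_overlap` characters. A naive fixed-size sketch of that arithmetic — the real RecursiveCharacterTextSplitter additionally prefers to cut at natural boundaries like paragraphs and sentences, and `split_text` here is a hypothetical simplification:

```python
def split_text(text, chunk_size=500, chunk_overlap=0):
    """Sketch: fixed-size character chunks; each chunk starts
    chunk_size - chunk_overlap characters after the previous one."""
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```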

Load the LLM model

overall_temperature = 0.1
flan_t5xxl = HuggingFaceHub(
    repo_id="google/flan-t5-xxl",
    model_kwargs={"temperature": overall_temperature,
                  "max_new_tokens": 200}
)

llm = flan_t5xxl
tools = load_tools(["llm-math"], llm=llm)

Create summary chain

chain = load_summarize_chain(llm, chain_type="refine", verbose=True)

Execute the summary chain

chain.run(split_documents)
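The `chain_type="refine"` used above works incrementally: summarize the first chunk, then fold each subsequent chunk into the running summary with a second prompt. A sketch of that control flow — `refine_summarize`, `first_pass`, and `refine` are hypothetical names standing in for the chain's two prompts:

```python
def refine_summarize(chunks, first_pass, refine):
    """Sketch of the 'refine' chain type: summarize the first chunk,
    then fold each later chunk into the running summary one at a time."""
    summary = first_pass(chunks[0])
    for chunk in chunks[1:]:
        summary = refine(summary, chunk)
    return summary
```

Because each step only sees the running summary plus one chunk, this handles documents far larger than the model's context window, at the cost of one model call per chunk.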

Author: JD Technology Yang Jian

Source: JD Cloud Developer Community


Origin my.oschina.net/u/4090830/blog/10086279