Advanced Prompt Techniques in LlamaIndex (Variable Mappings, Functions)

In this notebook we show some advanced prompt techniques. These features allow you to define more custom/expressive prompts, reuse existing prompts, and express certain operations in fewer lines of code.

We show the following features:

  • Partial formatting
  • Prompt template variable mappings
  • Prompt function mappings
pip install llama-index-core
pip install llama-index-llms-openai
from llama_index.core import PromptTemplate
from llama_index.llms.openai import OpenAI

1. Partial Formatting

Partial formatting (partial_format) allows you to format a prompt partially, filling in some variables while leaving the others to be filled in later.

This is a nice convenience function, so you don't have to maintain all the required prompt variables up until format time; you can partially format variables as they come in.

This creates a copy of the prompt template.

qa_prompt_tmpl_str = """\
Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the query.
Please write the answer in the style of {tone_name}
Query: {query_str}
Answer: \
"""

prompt_tmpl = PromptTemplate(qa_prompt_tmpl_str)
partial_prompt_tmpl = prompt_tmpl.partial_format(tone_name="Shakespeare")
partial_prompt_tmpl.kwargs

{'tone_name': 'Shakespeare'}

fmt_prompt = partial_prompt_tmpl.format(
    context_str="In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters",
    query_str="How many params does llama 2 have",
)
print(fmt_prompt)

Context information is below.
---------------------
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters
---------------------
Given the context information and not prior knowledge, answer the query.
Please write the answer in the style of Shakespeare
Query: How many params does llama 2 have
Answer:
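
Since partial_format returns a copy, the same base template can be partially formatted several times without the calls interfering with each other. A minimal sketch (the pirate tone is just an illustrative value):

# partial_format returns a copy, so the original template stays untouched
another_tmpl = prompt_tmpl.partial_format(tone_name="a pirate")

print(prompt_tmpl.kwargs)  # {} -- the original template is unchanged
print(partial_prompt_tmpl.kwargs)  # {'tone_name': 'Shakespeare'}
print(another_tmpl.kwargs)  # {'tone_name': 'a pirate'}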

2. Prompt Template Variable Mappings

Template variable mappings allow you to specify a mapping from the "expected" prompt keys (e.g. context_str and query_str for response synthesis) to the keys actually present in your template.

This allows you to reuse existing string templates without having to annoyingly change the template variables.

# NOTE: here notice we use `my_context` and `my_query` as template variables

qa_prompt_tmpl_str = """\
Context information is below.
---------------------
{my_context}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {my_query}
Answer: \
"""

template_var_mappings = {"context_str": "my_context", "query_str": "my_query"}

prompt_tmpl = PromptTemplate(
    qa_prompt_tmpl_str, template_var_mappings=template_var_mappings
)
fmt_prompt = prompt_tmpl.format(
    context_str="In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters",
    query_str="How many params does llama 2 have",
)
print(fmt_prompt)

Context information is below.
---------------------
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters
---------------------
Given the context information and not prior knowledge, answer the query.
Query: How many params does llama 2 have
Answer:
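
Variable mappings also appear to compose with partial_format from the previous section: the partial kwargs are merged with the remaining kwargs at format time and then mapped onto the template variables. A short sketch under that assumption:

# sketch: pre-fill the mapped context_str, supply query_str later
partial_tmpl = prompt_tmpl.partial_format(
    context_str="Llama 2 ranges in scale from 7 billion to 70 billion parameters."
)
fmt_prompt = partial_tmpl.format(query_str="How many params does llama 2 have")
print(fmt_prompt)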

3. Prompt Function Mappings

You can also pass in functions as template variables instead of fixed values.

This allows you to dynamically inject certain values, dependent on other values, during query-time.

Here are some basic examples. We show more advanced examples (e.g. few-shot examples) in our Prompt Engineering for RAG guide.

qa_prompt_tmpl_str = """\
Context information is below.
---------------------
{context_str}
---------------------
Given the context information and not prior knowledge, answer the query.
Query: {query_str}
Answer: \
"""


def format_context_fn(**kwargs):
    # format context with bullet points
    context_list = kwargs["context_str"].split("\n\n")
    fmtted_context = "\n\n".join([f"- {c}" for c in context_list])
    return fmtted_context


prompt_tmpl = PromptTemplate(
    qa_prompt_tmpl_str, function_mappings={"context_str": format_context_fn}
)
context_str = """\
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.

Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases.

Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models.
"""

fmt_prompt = prompt_tmpl.format(
    context_str=context_str, query_str="How many params does llama 2 have"
)
print(fmt_prompt)

Context information is below.
---------------------
- In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters.

- Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases.

- Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models.

---------------------
Given the context information and not prior knowledge, answer the query.
Query: How many params does llama 2 have
Answer:
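
Because the mapped function receives all of the format kwargs (as format_context_fn above does via **kwargs), its output can depend on other template variables such as the query. Below is a hypothetical sketch of query-dependent few-shot injection; few_shot_fn, example_store, and the example text are all made up for illustration:

# hypothetical example store keyed by a keyword to look for in the query
example_store = {
    "params": "Q: How many params does GPT-3 have?\nA: 175 billion.",
}


def few_shot_fn(**kwargs):
    # pick the stored examples whose keyword appears in the query
    query = kwargs["query_str"].lower()
    matches = [ex for key, ex in example_store.items() if key in query]
    return "\n\n".join(matches) or "(no examples found)"


few_shot_tmpl_str = """\
Here are some examples:
{few_shot_examples}

Query: {query_str}
Answer: \
"""

few_shot_tmpl = PromptTemplate(
    few_shot_tmpl_str, function_mappings={"few_shot_examples": few_shot_fn}
)
print(few_shot_tmpl.format(query_str="How many params does llama 2 have"))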


Reposted from blog.csdn.net/weixin_40986713/article/details/143205064