OpenAI Function Calls: From Basics to Advanced Techniques in Python


OpenAI recently introduced a new feature called "function calling". This article discusses how the feature can improve your existing products by letting users interact with them through natural language, and walks through a basic application built in Python using it.

What is an "OpenAI function call"?

OpenAI function calling is a new feature that lets OpenAI models (such as GPT-3.5 Turbo and GPT-4) respond to natural language input by calling functions written in your own code. It works by telling the model which functions are available on your system, along with their purpose and parameters. The model can then recognize when the user's input matches one of these descriptions and return an appropriate function call. This adds a new level of interactivity and user-friendliness, allowing non-technical users to drive technical features with simple prompts.

GPT-4 and GPT-3.5-turbo updates

Below is the mapping between the stable, old, and new models:

- gpt-3.5-turbo: old gpt-3.5-turbo-0301 -> new gpt-3.5-turbo-0613
- gpt-4: old gpt-4-0314 -> new gpt-4-0613
- gpt-4-32k: old gpt-4-32k-0314 -> new gpt-4-32k-0613

Each new model has been released with the suffix -0613.

These versions can also be compared and evaluated using Evals, a framework for evaluating models.

gpt-3.5-turbo-16k, which had no counterpart among the old models, has also appeared. It quadruples the maximum number of tokens, as shown below:

- Maximum tokens for gpt-3.5-turbo: 4,096 tokens
- Maximum tokens for gpt-3.5-turbo-16k: 16,384 tokens

As discussed below, the 16k model costs twice as much per token but quadruples the context length, so it may suit applications that were previously difficult to build without a separate framework.

The official announcement states that a 16k context is enough to process approximately 20 pages of text in one request.
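As a rough sanity check of that claim, the arithmetic below uses my own assumed figures (about 500 words per page and about 0.75 words per token for English text), not numbers from the announcement:

```python
# Back-of-the-envelope check: how many pages of text fit in 16,384 tokens?
# Assumed figures (mine, not OpenAI's): ~500 words per page, ~0.75 words per token.
WORDS_PER_PAGE = 500
WORDS_PER_TOKEN = 0.75

tokens_per_page = WORDS_PER_PAGE / WORDS_PER_TOKEN  # ~667 tokens per page
pages_per_request = 16384 / tokens_per_page

print(round(pages_per_request, 1))  # 24.6 -- the same order as "about 20 pages"
```

The exact figure depends heavily on the text, but it lands in the same ballpark as the announcement.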

As stated in the API documentation, the training data cutoff is still "Up to Sep 2021", the same as before.

The planned life cycle for each model looks like this:

- Access via the stable model names will be automatically upgraded to the new models above on June 27.
- After that, the older models can still be accessed by naming them explicitly, but access to them ends on September 13.
- From that point on, requests to the old models will fail.

The official announcement also stated that OpenAI hopes to clear the waiting list for the new versions in the next few weeks, so more people should be able to use them.

Pricing

The good news is that pricing has also been revised, and some features are now cheaper.

First, regarding the revised price of gpt-3.5-turbo, the cost is now split between input tokens and output tokens:

- Input: $0.0015 per 1K tokens (previously $0.002)
- Output: $0.002 per 1K tokens (unchanged)

Although this makes the calculation slightly more complicated, input tokens are now 25% cheaper.

The price of GPT-4 has not changed, and the 16k version has been added to GPT-3.5, so the overall cost structure is as follows:

- gpt-3.5-turbo (4k): $0.0015 / 1K input tokens, $0.002 / 1K output tokens
- gpt-3.5-turbo-16k: $0.003 / 1K input tokens, $0.004 / 1K output tokens
- gpt-4 (8k): $0.03 / 1K input tokens, $0.06 / 1K output tokens
- gpt-4-32k: $0.06 / 1K input tokens, $0.12 / 1K output tokens

Additionally, pricing for the popular embedding model text-embedding-ada-002 has been reduced from $0.0004 to $0.0001 per 1K tokens.

The Embeddings API is often used to create vectorized data, so a 75% price reduction is very welcome.
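For a concrete sense of scale, here is a quick cost estimate assuming the reduced price of $0.0001 per 1K tokens quoted above:

```python
# Estimate embedding cost at text-embedding-ada-002's revised price.
PRICE_PER_1K_TOKENS = 0.0001  # USD per 1K tokens, after the 75% reduction

def embedding_cost_usd(total_tokens: int) -> float:
    """Estimated cost in USD of embedding `total_tokens` tokens."""
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# Embedding a million tokens (roughly 750k English words) is about 10 cents.
print(f"${embedding_cost_usd(1_000_000):.2f}")
```

At this price, vectorizing even a sizable document collection costs only a few dollars.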

Let's try it

1. Set up your environment

First, open a terminal on your local computer and create a virtual environment:

python -m venv venv

Then activate it:

venv\Scripts\activate

(That is the Windows syntax; on macOS/Linux, use source venv/bin/activate.) You should now see (venv) in the terminal prompt.

Now, let's install the required dependencies:

pip install python-dotenv==1.0.0 openai==0.27.7

Finally, we need to set an environment variable for the OpenAI API key:

set OPENAI_API_KEY=<YOUR_API_KEY>

(Again Windows syntax; on macOS/Linux, use export OPENAI_API_KEY=<YOUR_API_KEY>. Alternatively, put the key in a .env file, which load_dotenv will read.)

Now, everything is ready, let's get started!

Create a file called "main.py" where we will write the code that answers questions.

Let's import the required dependencies:

import openai
from dotenv import load_dotenv
import os

Load the environment variables and set the API key:

load_dotenv()
openai_key = os.getenv("OPENAI_API_KEY")
openai.api_key = openai_key

Try it with the following code:

model_name = "gpt-3.5-turbo-0613"

question = "Please tell me how to set up an environment with pyenv and pipenv."

response = openai.ChatCompletion.create(
    model=model_name,
    messages=[
        {"role": "user", "content": question},
    ],
)
print(response.choices[0]["message"]["content"].strip())

"pyenv is a Python version control tool that allows you to install multiple Python versions and switch between them. pipenv is a tool that simplifies the creation of virtual environments and package management when used in combination with pyenv."

I tried gpt-3.5-turbo-0613 and gpt-4-0613 and can confirm that both work.

Note that GPT-4 may not always be available, as access depends on the status of the waitlist.

Let's try calling the function

I'll try it based on the official API documentation.

First, define a function to get the weather. For this example, it returns fixed weather information.

import json

def get_current_weather(location, unit="fahrenheit"):
    weather_info = {
        "location": location,
        "temperature": "30",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)
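Before wiring this into the API, it is worth checking locally what the function returns. The snippet repeats the definition above so it runs on its own:

```python
import json

def get_current_weather(location, unit="fahrenheit"):
    # Same dummy implementation as above: fixed weather as a JSON string.
    weather_info = {
        "location": location,
        "temperature": "30",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }
    return json.dumps(weather_info)

# This JSON string is what we will later pass back to the model
# as the content of a "function" role message.
result = get_current_weather("New York, USA", unit="celsius")
print(result)
```

Returning a JSON string (rather than a dict) matters, because the message content sent back to the model must be a string.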

Next, to give the OpenAI API information about this function, create a description like the following.

functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather at the specified location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "A city or place name, e.g. New York, USA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    },
]
From here, using the OpenAI API takes a little more orchestration.

When using function calls, you may end up calling the OpenAI API more than once per user request.

The first call is a normal query, except that the functions parameter passes the descriptions created above, and function_call is set to "auto" so the model decides whether to call a function. (You can also force a specific function with function_call={"name": "get_current_weather"}, or disable calls with "none".)

model_name = "gpt-3.5-turbo-0613"

question = "Please tell me about the weather in New York, USA"

response = openai.ChatCompletion.create(
    model=model_name,
    messages=[
        { "role": "user", "content": question},
    ],
    functions=functions,
    function_call="auto",
)

If the first response's message contains a function_call like the following, a second request is needed.

message = response["choices"][0]["message"]
message

<OpenAIObject at 0x7f4260054c20> JSON: {
  "role": "assistant",
  "content": null,
  "function_call": {
    "name": "get_current_weather",
    "arguments": "{\n  \"location\": \"New York, USA\"\n}"
  }
}

This response also includes the arguments the model chose, so you can call your own defined function with them and send a second request.

The implementation looks like this:

function_name = message["function_call"]["name"]

arguments = json.loads(message["function_call"]["arguments"])
function_response = get_current_weather(
    location=arguments.get("location"),
    unit=arguments.get("unit"),
)

second_response = openai.ChatCompletion.create(
    model=model_name,
    messages=[
        { "role": "user", "content": question},
        message,
        {
            "role": "function",
            "name": function_name,
            "content": function_response,
        },
    ],
)

print(second_response.choices[0]["message"]["content"].strip())

The current weather in New York, USA is clear, with a temperature of 30 degrees. It's windy.

I was able to confirm that the model can respond based on my own defined function.

If the message does not contain a function_call (i.e. the model answered as usual), the answer is already in the first response. So with a conditional branch like the one below, you can also handle requests unrelated to the defined function.

if message.get("function_call"):

    function_name = message["function_call"]["name"]

    arguments = json.loads(message["function_call"]["arguments"])
    function_response = get_current_weather(
        location=arguments.get("location"),
        unit=arguments.get("unit"),
    )

    second_response = openai.ChatCompletion.create(
        model=model_name,
        messages=[
            { "role": "user", "content": question},
            message,
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            },
        ],
    )

    print(second_response.choices[0]["message"]["content"].strip())

else:
    print(response.choices[0]["message"]["content"].strip())
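The branch above hard-codes the call to get_current_weather, which does not scale once several functions are registered. One common pattern (a sketch of mine, not from the official docs; get_current_time is a made-up second function added only for illustration) is a lookup table from the name the model returns to the local callable:

```python
import json

def get_current_weather(location, unit="fahrenheit"):
    # Same dummy weather function as above.
    return json.dumps({"location": location, "temperature": "30", "unit": unit})

def get_current_time(timezone="UTC"):
    # Hypothetical second function, included only to illustrate dispatch.
    return json.dumps({"timezone": timezone, "time": "12:00"})

# Map the names declared in `functions` to the local implementations.
AVAILABLE_FUNCTIONS = {
    "get_current_weather": get_current_weather,
    "get_current_time": get_current_time,
}

def dispatch(message):
    """Run the local function named in a function_call message."""
    call = message["function_call"]
    func = AVAILABLE_FUNCTIONS[call["name"]]
    arguments = json.loads(call["arguments"])
    return func(**arguments)

# Simulated first-response message, shaped like the one shown earlier:
fake_message = {
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "New York, USA", "unit": "celsius"}',
    }
}
print(dispatch(fake_message))
```

The string returned by dispatch() can then be sent back as the content of the "function" role message in the second request, exactly as in the branch above.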

You can also use the same code for non-weather-related queries:

model_name = "gpt-3.5-turbo-0613"

question = "Please tell me how to set up an environment with pyenv and pipenv."

response = openai.ChatCompletion.create(
    model=model_name,
    messages=[
        { "role": "user", "content": question},
    ],
    functions=functions,
    function_call="auto",
)

message = response["choices"][0]["message"]

if message.get("function_call"):

    function_name = message["function_call"]["name"]

    arguments = json.loads(message["function_call"]["arguments"])
    function_response = get_current_weather(
        location=arguments.get("location"),
        unit=arguments.get("unit"),
    )

    second_response = openai.ChatCompletion.create(
        model=model_name,
        messages=[
            { "role": "user", "content": question},
            message,
            {
                "role": "function",
                "name": function_name,
                "content": function_response,
            },
        ],
    )

    print(second_response.choices[0]["message"]["content"].strip())

else:
    print(response.choices[0]["message"]["content"].strip())

Sure!
Pyenv and Pipenv are handy tools for managing Python environments. Here, we will explain how to set up an environment on macOS or Linux.

In summary:

This confirms that requests can still be handled normally when the defined function is not relevant.

For developers, this is a very good update. A wide variety of functions can be exposed this way, and the range of possible uses is very broad.


Origin blog.csdn.net/specssss/article/details/131433136#comments_28604127