The Assistants API allows you to build AI assistants within your own applications. An assistant has instructions and can leverage models, tools, and files to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, File Search, and Function calling.
You can explore the capabilities of the Assistants API in the Assistants playground, or build a step-by-step integration by following this guide.

Overview

A typical integration of the Assistants API has the following flow:

Create an Assistant by defining its custom instructions and picking a model. If needed, add files and enable tools such as Code Interpreter, File Search, and Function calling.
When a user starts a conversation, create a Thread.
When a user asks a question, add a message to the Thread.
Run the Assistant on the Thread to generate a response by calling the model and the tools.

This getting started guide walks you through the key steps of creating and running an Assistant that uses Code Interpreter. In this example, we will create a personal math tutor Assistant with the Code Interpreter tool enabled.

Calls to the Assistants API require you to pass a beta HTTP header. This is handled automatically if you use OpenAI’s official Python or Node.js SDK.

OpenAI-Beta: assistants=v2
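If you are not using an official SDK, make sure this header is included on every Assistants request. As an optional sketch with the official Python SDK, you could also set it explicitly through the client's default_headers option; this is not required, since the SDK already sends it for you:

from openai import OpenAI

# Not required with the official SDK, which adds the beta header itself;
# shown only to illustrate where the header goes when you manage headers yourself.
client = OpenAI(default_headers={"OpenAI-Beta": "assistants=v2"})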

Step 1: Create an assistant
An Assistant represents an entity that can be configured to respond to user messages using several parameters, such as the model, instructions, and tools.

from openai import OpenAI
client = OpenAI()

assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. Write and run code to answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-turbo",
)

Step 2: Create the thread
A thread represents a conversation between a user and one or more assistants. When a user (or your AI application) starts a conversation with your assistant, you can create a thread.

thread = client.beta.threads.create()

Step 3: Add a message to the thread
The content created by the user or the application is added to the Thread as a Message object. Messages can contain both text and files. There is no limit to the number of messages you can add to a Thread; anything that doesn't fit within the model's context window is intelligently truncated.

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="I need to solve the equation 3x + 11 = 14. Can you help me?"
)
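If the user's question comes with data for the assistant to analyze, you can also attach a file to the message so that Code Interpreter can read it. Below is a minimal sketch, assuming a local file named data.csv; the filename and prompt are only placeholders:

# Upload the file with the "assistants" purpose so it can be attached to messages.
file = client.files.create(
    file=open("data.csv", "rb"),
    purpose="assistants",
)

# Attach the uploaded file to the message and expose it to the Code Interpreter tool.
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Here is a dataset. Can you plot the values?",
    attachments=[
        {"file_id": file.id, "tools": [{"type": "code_interpreter"}]}
    ],
)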

Step 4: Create a run
Once all user messages have been added to the Thread, you can Run the Thread with any Assistant. Creating a Run uses the model and tools associated with the Assistant to generate a response. These responses are added to the Thread as assistant messages.
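Here is a minimal sketch of creating a Run and reading the assistant's reply, assuming a recent version of the official Python SDK that provides the create_and_poll helper (with older versions you create the Run and poll its status yourself):

# Create the run and block until it reaches a terminal status.
run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id,
    assistant_id=assistant.id,
)

if run.status == "completed":
    # Messages are returned newest first, so the first entry is the
    # assistant's latest reply.
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    print(messages.data[0].content[0].text.value)
else:
    print(f"Run ended with status: {run.status}")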
