Developer quickstart
Take your first steps with the OpenAI API.
The OpenAI API provides a simple interface to state-of-the-art AI models for text generation, natural language processing, computer vision, and more. This example generates text output from a prompt, as you might using ChatGPT.
Generate text from a model
import OpenAI from "openai";
const client = new OpenAI();
const completion = await client.chat.completions.create({
model: "gpt-4.1",
messages: [
{
role: "user",
content: "Write a one-sentence bedtime story about a unicorn.",
},
],
});
console.log(completion.choices[0].message.content);
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
model="gpt-4.1",
messages=[
{
"role": "user",
"content": "Write a one-sentence bedtime story about a unicorn."
}
]
)
print(completion.choices[0].message.content)
curl "https://api.openai.com/v1/chat/completions" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"messages": [
{
"role": "user",
"content": "Write a one-sentence bedtime story about a unicorn."
}
]
}'
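Both official SDKs read your API key from the OPENAI_API_KEY environment variable by default, so the examples above work once that variable is set. If you prefer to pass the key explicitly, here is a minimal Python sketch (the explicit api_key argument is optional and equivalent to relying on the environment variable):
import os
from openai import OpenAI

# The client reads OPENAI_API_KEY from the environment by default;
# passing api_key explicitly here is equivalent.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])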
[
Configure your development environment
Install and configure an official OpenAI SDK to run the code above.
](/docs/libraries)[
Text generation and prompting
Learn more about prompting, message roles, and building conversational apps.
](/docs/openai/guides/text)
Analyze image inputs
You can provide image inputs to the model as well. Scan receipts, analyze screenshots, or find objects in the real world with computer vision.
Analyze the content of an image
import OpenAI from "openai";
const openai = new OpenAI();
const response = await openai.chat.completions.create({
model: "gpt-4o-mini",
messages: [
{
role: "user",
content: [
{ type: "text", text: "What's in this image?" },
{
type: "image_url",
image_url: {
url: "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
},
},
],
},
],
});
console.log(response.choices[0].message.content);
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
model="gpt-4o-mini",
messages=[
{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image?"},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
},
},
],
}
],
)
print(response.choices[0].message.content)
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What is in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            }
          }
        ]
      }
    ],
    "max_tokens": 300
  }'
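The examples above point the model at a public image URL. To analyze a local file, such as a scanned receipt, you can embed the image as a base64 data URL instead. A minimal Python sketch, assuming an illustrative local file named receipt.jpg:
import base64
from openai import OpenAI

client = OpenAI()

# Encode a local image (hypothetical file name) as a base64 data URL.
with open("receipt.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)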
[
Computer vision guide
Learn to use image inputs to the model and extract meaning from images.
](/docs/openai/guides/images)
Extend the model with tools
Give the model access to new data and capabilities using tools. You can either call your own custom code, or use one of OpenAI's powerful built-in tools. This example uses web search to give the model access to the latest information on the Internet.
Get information from the Internet to use in the completion
import OpenAI from "openai";
const client = new OpenAI();
const completion = await client.chat.completions.create({
model: "gpt-4o-search-preview",
web_search_options: {},
messages: [{
"role": "user",
"content": "What was a positive news story from today?"
}],
});
console.log(completion.choices[0].message.content);
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
model="gpt-4o-search-preview",
web_search_options={},
messages=[
{
"role": "user",
"content": "What was a positive news story from today?",
}
],
)
print(completion.choices[0].message.content)
curl -X POST "https://api.openai.com/v1/chat/completions" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-type: application/json" \
-d '{
"model": "gpt-4o-search-preview",
"web_search_options": {},
"messages": [{
"role": "user",
"content": "What was a positive news story from today?"
}]
}'
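The examples above use the built-in web search tool. To call your own custom code instead, describe your function with the tools parameter; when the model decides to use it, the response contains a tool call whose JSON arguments your code executes. A minimal Python sketch, where get_weather is a hypothetical function you would implement yourself:
import json
from openai import OpenAI

client = OpenAI()

# Describe the (hypothetical) get_weather function so the model knows
# when and how to call it.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

completion = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the function, it returns the function name
# and JSON arguments for your code to run.
message = completion.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))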
[
Use built-in tools
Learn about powerful built-in tools like web search and file search.
](/docs/openai/guides/tools)[
Function calling guide
Learn to enable the model to call your own custom code.
](/docs/openai/guides/function-calling)
Deliver blazing-fast AI experiences
Using either the new Realtime API or server-sent streaming events, you can build high-performance, low-latency experiences for your users.
Stream server-sent events from the API
import OpenAI from "openai";
const openai = new OpenAI();
const stream = await openai.chat.completions.create({
model: "gpt-4.1",
messages: [
{
role: "user",
content: "Say 'double bubble bath' ten times fast." ,
}
],
stream: true,
});
for await (const chunk of stream) {
console.log(chunk);
console.log(chunk.choices[0].delta);
console.log("****************");
}
from openai import OpenAI
client = OpenAI()
stream = client.chat.completions.create(
model="gpt-4.1",
messages=[
{
"role": "user",
"content": "Say 'double bubble bath' ten times fast.",
},
],
stream=True,
)
for chunk in stream:
print(chunk)
print(chunk.choices[0].delta)
print("****************")
[
Use streaming events
Use server-sent events to stream model responses to users fast.
](/docs/openai/guides/streaming-responses)[
Get started with the Realtime API
Use WebRTC or WebSockets for super fast speech-to-speech AI apps.
](/docs/openai/guides/realtime)
Build agents
Use the OpenAI platform to build agents capable of taking action, such as controlling computers, on behalf of your users. Use the Agents SDK for Python to create orchestration logic on the backend.
from agents import Agent, Runner
import asyncio
spanish_agent = Agent(
name="Spanish agent",
instructions="You only speak Spanish.",
)
english_agent = Agent(
name="English agent",
instructions="You only speak English",
)
triage_agent = Agent(
name="Triage agent",
instructions="Handoff to the appropriate agent based on the language of the request.",
handoffs=[spanish_agent, english_agent],
)
async def main():
result = await Runner.run(triage_agent, input="Hola, ¿cómo estás?")
print(result.final_output)
if __name__ == "__main__":
asyncio.run(main())
# ¡Hola! Estoy bien, gracias por preguntar. ¿Y tú, cómo estás?
[
Build agents that can take action
Learn how to use the OpenAI platform to build powerful, capable AI agents.
](/docs/openai/guides/agents)
Explore further
We've barely scratched the surface of what's possible with the OpenAI platform. Here are some resources you might want to explore next.
[
Go deeper with prompting and text generation
Learn more about prompting, message roles, and building conversational apps like chat bots.
](/docs/openai/guides/text)[
Analyze the content of images
Learn to use image inputs to the model and extract meaning from images.
](/docs/openai/guides/images)[
Generate structured JSON data from the model
Generate JSON data from the model that conforms to a JSON schema you specify; a minimal sketch appears at the end of this page.
](/docs/openai/guides/structured-outputs)[
Call custom code to help generate a response
Empower the model to invoke your own custom code to help generate a response. Do this to give the model access to data or systems it couldn't otherwise reach.
](/docs/openai/guides/function-calling)[
Full API Reference
View the full API reference for the OpenAI platform.
](/docs/api-reference)
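As a preview of the structured outputs guide linked above, here is a minimal Python sketch that asks the model for JSON conforming to a schema via the response_format parameter (the schema name and fields are illustrative):
import json
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "Extract the name and age: 'Ada is 36 years old.'"}],
    # Ask for JSON that conforms to an illustrative schema.
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "person",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "age": {"type": "integer"},
                },
                "required": ["name", "age"],
                "additionalProperties": False,
            },
        },
    },
)
print(json.loads(completion.choices[0].message.content))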