OpenAI API Integration Tutorial | 2026 Python SDK Complete Guide from Scratch
3 Lines of Python to Call GPT
Many people think integrating AI APIs is complex.
It's not.
OpenAI's Python SDK is designed to be very clean. Install the package, set the key, call the API -- three steps and you're done. This tutorial walks through the entire process step by step, with copy-paste-ready code at every stage.
Whether you're a first-time AI API user or a developer migrating from another platform, this guide is for you.
Want to get OpenAI API? Purchase through CloudInsight -- no credit card issues, enterprise discounts and invoices included.

TL;DR
Install the openai package -> Set API Key environment variable -> Call client.chat.completions.create() and you're done. This tutorial covers text generation, multi-turn conversations, Streaming, image analysis, and Function Calling, with complete runnable code examples.
Environment Setup & OpenAI SDK Installation
Answer-First: You need Python 3.8+ and pip, plus a single pip install openai to get started.
Installation Steps
```bash
# Recommended: use a virtual environment
python -m venv openai-env
source openai-env/bin/activate   # macOS / Linux
# openai-env\Scripts\activate    # Windows

# Install OpenAI SDK
pip install openai
```
Verify installation:
```bash
python -c "import openai; print(openai.__version__)"
```
System Requirements
| Item | Requirement |
|---|---|
| Python | 3.8 or above (3.11+ recommended) |
| openai package | Latest 1.x version |
| OS | Windows / macOS / Linux |
| Network | Must be able to connect to api.openai.com |
Obtaining API Key & Security Configuration
Answer-First: Create an API Key at platform.openai.com, store it in environment variables, and never hardcode it in your source code.
Create an API Key
- Log in to platform.openai.com
- Click "API Keys" in the left sidebar
- Click "Create new secret key"
- Copy the generated key
For complete account registration steps, refer to OpenAI API Registration Complete Tutorial.
Securely Store the API Key
```bash
# macOS / Linux -- add to ~/.bashrc or ~/.zshrc
export OPENAI_API_KEY="sk-your-key-here"
```

```powershell
# Windows PowerShell
$env:OPENAI_API_KEY="sk-your-key-here"
```
In Python, the SDK automatically reads the OPENAI_API_KEY environment variable:
```python
from openai import OpenAI

# Automatically reads the API key from the environment variable
client = OpenAI()

# Or specify it manually (not recommended -- easy to leak into version control)
client = OpenAI(api_key="sk-your-key-here")
```
Security reminders:
- Never commit API Keys to Git
- If using `.env` files, add them to `.gitignore`
- For production, use a secret manager (e.g., AWS Secrets Manager, GCP Secret Manager)
For more complete API Key security management practices, refer to API Key Management & Security Best Practices.
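As a concrete safeguard, here is a minimal sketch of a startup check that fails fast when the key is missing. The helper name is ours, not part of the SDK:

```python
import os

def require_api_key() -> str:
    """Fail fast with a clear message if OPENAI_API_KEY is not set."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set. Export it in your shell or load it from a .env file."
        )
    return key
```

Call it once at startup. The SDK performs its own check when you instantiate `OpenAI()`, but catching the problem earlier gives a friendlier error message.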
Text Generation: Your First API Call
Answer-First: Use client.chat.completions.create() with a model name and messages array to get AI text responses.
The Simplest Call
```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Explain what an API is in one sentence"}
    ]
)

print(response.choices[0].message.content)
```
That's it. 5 lines of effective code.
Using System Prompt to Control AI Behavior
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a senior software engineer. Answer in a casual yet professional tone."},
        {"role": "user", "content": "What is a REST API?"}
    ],
    temperature=0.7,
    max_tokens=500
)
```
The system role message sets the AI's behavior pattern. A good System Prompt can significantly improve response quality.
Parameter Tuning Guide
| Parameter | Description | Recommended Value |
|---|---|---|
| temperature | Creativity level (0-2) | Translation 0.1, Q&A 0.7, Creative 1.2 |
| max_tokens | Maximum output length | Set based on needs; smaller saves money |
| top_p | Sampling range | Usually adjust either this or temperature, not both |
| frequency_penalty | Avoid repetition (0-2) | 0.3-0.5 reduces repetition |
Multi-Turn Conversations
```python
messages = [
    {"role": "system", "content": "You are a friendly assistant"},
    {"role": "user", "content": "Which is better for beginners, Python or JavaScript?"},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
assistant_reply = response.choices[0].message.content
print(assistant_reply)

# Continue the conversation: add the AI's response back
messages.append({"role": "assistant", "content": assistant_reply})
messages.append({"role": "user", "content": "What if I want to build websites?"})

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```
The key to multi-turn conversations: pass the complete conversation history with each call. This is how the AI understands context.
But be careful -- the longer the conversation, the more tokens consumed. When you exceed the Context Window limit, you'll need to truncate earlier messages.
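A minimal sketch of such truncation, counting messages rather than tokens (a production version would measure actual token usage, e.g. with a tokenizer like tiktoken); the helper name is ours:

```python
def trim_history(messages, max_recent=10):
    """Keep any system messages plus the most recent max_recent non-system messages."""
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"][-max_recent:]
    return system + recent
```

Preserving the system message while dropping the oldest turns keeps the AI's configured behavior intact even as the conversation grows.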
Purchase OpenAI API through CloudInsight for exclusive enterprise discounts and invoices. Learn about enterprise plans

Streaming Responses: Display AI Replies in Real Time
Answer-First: Add the stream=True parameter to display AI responses character by character in real time, significantly improving user experience.
```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Recommend 5 must-visit night markets in Taiwan"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="", flush=True)
```
Streaming is particularly useful in these scenarios:
- Chatbots: Users don't have to stare at a blank screen
- Long responses: Users can start reading early during long text generation
- Real-time feel: Makes AI responses feel more like human conversation
Downside: in streaming mode, usage info (token consumption) is not returned by default. Set stream_options={"include_usage": True} to receive it in the final chunk.
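The consumer loop can be factored into a small helper that collects both the streamed text and that final usage chunk. This assumes the chunk shape described above (delta content on intermediate chunks, `usage` set only on the final chunk when `include_usage` is enabled); the helper itself is our sketch:

```python
def consume_stream(stream):
    """Accumulate streamed text; capture usage from the final chunk if present."""
    parts, usage = [], None
    for chunk in stream:
        if getattr(chunk, "usage", None) is not None:
            usage = chunk.usage  # only set on the final chunk
        if chunk.choices and chunk.choices[0].delta.content is not None:
            parts.append(chunk.choices[0].delta.content)
    return "".join(parts), usage
```

Note the `chunk.choices` guard: when usage is included, the final chunk arrives with an empty choices list.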
Image Analysis: Vision API Usage
Answer-First: GPT-4o and GPT-5 support image input. Simply include an image URL or base64 encoding in the messages to let AI understand image content.
Sending Images via URL
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"}
                }
            ]
        }
    ]
)
```
Sending Local Images via Base64
```python
import base64

with open("receipt.jpg", "rb") as f:
    image_data = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Please identify the amount on this receipt"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_data}"}
                }
            ]
        }
    ]
)
```
Practical use cases:
- Receipt/invoice OCR recognition
- Product image classification
- UI screenshot analysis
- Chart data extraction
Note: Image analysis consumes significantly more tokens. One image is roughly equivalent to 85-1,700 tokens, depending on resolution.
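One lever for that cost is the `detail` field on `image_url`, documented in OpenAI's Vision guide: `"low"` caps the image at a fixed small token cost, while `"high"` tiles it at full resolution for more tokens. A small builder for that content part (the helper name is ours):

```python
def image_part(url: str, detail: str = "low") -> dict:
    """Build an image content part; "low" detail trades fidelity for far fewer tokens."""
    return {"type": "image_url", "image_url": {"url": url, "detail": detail}}
```

Use it inside the `content` array alongside the text part, e.g. `image_part("https://example.com/photo.jpg")` for cheap classification tasks, and `detail="high"` only when fine print matters.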
Function Calling: Let AI Use Tools
Answer-First: Function Calling lets you define a list of functions. The AI determines when to call which function with the correct parameters -- ideal for building AI Agents that can query data and operate systems.
```python
import json

# Define tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get real-time weather information for a specified city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name, e.g., Tokyo, New York"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather like in Tokyo today?"}],
    tools=tools
)

# Check if the AI wants to call a function
message = response.choices[0].message
if message.tool_calls:
    tool_call = message.tool_calls[0]
    function_name = tool_call.function.name
    arguments = json.loads(tool_call.function.arguments)
    print(f"AI wants to call: {function_name}({arguments})")
```
Function Calling workflow:
- You define a list of available functions
- User asks a question
- AI determines whether a function call is needed
- If needed, AI returns the function name and parameters
- You execute the function locally and get results
- You send results back to the AI, which responds to the user in natural language
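Steps 5 and 6 hinge on the shape of the follow-up message: the result goes back as a `tool` role message tied to the original call's id. A minimal sketch of that packaging (the helper name is ours):

```python
import json

def tool_result_message(tool_call, result) -> dict:
    """Package a local function's result as a tool message the model can read."""
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result),
    }
```

To complete the loop, append the assistant message (the one carrying `tool_calls`) and then this tool message to your `messages` list, and call `client.chat.completions.create()` again; the model will answer the user in natural language.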
Error Handling & Best Practices
Answer-First: Production environments must handle common errors like Rate Limit (429), Timeout, and Invalid Request, with exponential backoff retry mechanisms for stability.
Complete Error Handling Example
```python
import time

from openai import OpenAI, APIError, APIConnectionError, RateLimitError

client = OpenAI()

def call_openai_with_retry(messages, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=messages
            )
            return response.choices[0].message.content
        except RateLimitError:
            wait_time = 2 ** attempt  # 1, 2, 4 seconds
            print(f"Rate limit exceeded, retrying in {wait_time} seconds...")
            time.sleep(wait_time)
        except APIConnectionError:
            print("Connection failed, retrying in 2 seconds...")
            time.sleep(2)
        except APIError as e:
            print(f"API error: {e}")
            raise
    raise Exception("Max retries exceeded")
```
Common Errors
| Error | Cause | Solution |
|---|---|---|
| RateLimitError (429) | Too many requests | Exponential backoff retry |
| AuthenticationError (401) | Invalid API Key | Verify key is correct |
| BadRequestError (400) | Malformed request | Check messages format and parameters |
| APIConnectionError | Network issues | Check network, retry later |
| InternalServerError (500) | OpenAI server issue | Wait a few seconds and retry |

Next Steps: From Examples to Products
You've learned the basics of OpenAI API Python integration.
What's next?
- Try different models: Use GPT-4o-mini for simple tasks to save money, GPT-4o or GPT-5 for complex tasks
- Learn the Assistants API: Build more complete AI assistants
- Understand cost control: Set Budget Limits, monitor token usage
- Build RAG systems: Combine with Embeddings API for knowledge base Q&A
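The retrieval step in a RAG system ultimately boils down to comparing embedding vectors, usually by cosine similarity. A self-contained sketch of that comparison (in practice the vectors would come from the Embeddings API, and a vector database would do this at scale):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm
```

At query time you embed the user's question, score it against every stored chunk with this function, and feed the top-scoring chunks into the prompt as context.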
For the complete features and enterprise plans of OpenAI API, check out GPT-5 & OpenAI API Complete Guide.
If you're curious about GPT-5's capabilities and pricing, What Is GPT-5? Latest Features & Tutorial has a more detailed analysis. If you're also interested in Google's AI API, Gemini API Complete Developer Guide is a great comparison reference. For a pricing summary across providers, check out AI API Pricing Comparison Guide.
Need enterprise-grade OpenAI API plans? CloudInsight offers bulk token purchasing discounts, invoices, and technical support. Get a quote for enterprise plans, or join our LINE official account for instant technical support.
References
- OpenAI -- Python SDK Documentation (https://platform.openai.com/docs/libraries/python-library)
- OpenAI -- Chat Completions API Reference (https://platform.openai.com/docs/api-reference/chat)
- OpenAI -- Vision Guide (https://platform.openai.com/docs/guides/vision)
- OpenAI Cookbook -- GitHub (https://github.com/openai/openai-cookbook)