OpenAI API Documentation
Comprehensive documentation for integrating OpenAI features into chatbots, apps, and other AI agents.
Base URL
https://theserver-open-ai.replit.app
Available Endpoints
/chat/completions
- Chat conversations with streaming
/images/generate
- DALL-E image generation
/audio/speech
- Text-to-speech
/audio/transcriptions
- Audio transcription
/embeddings
- Text embeddings
/moderations
- Content moderation
/models
- List of all available models
Authentication
API Key Setup
All requests require an API key in the Authorization header:
Authorization: Bearer YOUR_OPENAI_API_KEY
Request Header Example
{
  "Content-Type": "application/json",
  "Authorization": "Bearer sk-proj-..."
}
Security Notes
- Never hardcode API keys in client-side code
- Store keys in environment variables
- Make API calls server-side only
- Use rate limiting to protect against abuse
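As a minimal sketch of the first two notes, a helper that reads the key from an environment variable instead of hardcoding it (the variable name OPENAI_API_KEY is an assumption; adjust it to your setup):

```javascript
// Build the request headers from an environment variable.
// Throws early if the key is missing, so misconfiguration is caught at startup.
function buildHeaders(env = process.env) {
  const key = env.OPENAI_API_KEY; // assumed variable name
  if (!key) {
    throw new Error('OPENAI_API_KEY is not set');
  }
  return {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${key}`
  };
}
```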
Error Handling
HTTP Status Codes
- 400 - Bad Request (invalid parameters)
- 401 - Unauthorized (missing or invalid API key)
- 429 - Too Many Requests (rate limit exceeded)
- 500 - Internal Server Error
Error Response Format
{
  "error": {
    "message": "Invalid API key provided",
    "type": "invalid_request_error",
    "code": "invalid_api_key"
  }
}
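A small helper (a sketch, not part of the API itself) that turns the error shape above into a single readable message for logging:

```javascript
// Format the { error: { message, type, code } } shape shown above.
function formatApiError(body) {
  const err = body && body.error;
  if (!err) return 'Unknown API error';
  return `${err.type || 'error'} (${err.code || 'unknown'}): ${err.message || ''}`;
}
```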
Chat Completions
/chat/completions
Create chat conversations with GPT models, optionally with streaming.
Request Parameters
model
REQUIRED
String - GPT model (e.g. "gpt-4o", "gpt-4o-mini", "gpt-3.5-turbo")
messages
REQUIRED
Array - List of message objects with "role" and "content"
temperature
OPTIONAL
Number (0-2) - Sampling temperature; higher values yield more creative output (default: 0.7)
stream
OPTIONAL
Boolean - Enable streaming (default: false)
max_tokens
OPTIONAL
Number - Maximum number of tokens in the response
Request Example (JavaScript)
const response = await fetch('https://theserver-open-ai.replit.app/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [
      { role: 'system', content: 'You are a helpful assistant.' },
      { role: 'user', content: 'What is machine learning?' }
    ],
    temperature: 0.7,
    stream: true
  })
});

const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // { stream: true } keeps multi-byte characters intact across chunk boundaries
  const chunk = decoder.decode(value, { stream: true });
  const lines = chunk.split('\n').filter(line => line.trim());
  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = line.slice(6);
      if (data === '[DONE]') continue;
      const parsed = JSON.parse(data);
      const content = parsed.choices[0]?.delta?.content;
      if (content) {
        console.log(content);
      }
    }
  }
}
Request Example (Python)
import requests
import json

url = "https://theserver-open-ai.replit.app/chat/completions"
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY"
}
data = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is machine learning?"}
    ],
    "temperature": 0.7,
    "stream": True
}

response = requests.post(url, headers=headers, json=data, stream=True)
for line in response.iter_lines():
    if line:
        line = line.decode('utf-8')
        if line.startswith('data: '):
            data = line[6:]
            if data == '[DONE]':
                break
            parsed = json.loads(data)
            content = parsed.get('choices', [{}])[0].get('delta', {}).get('content')
            if content:
                print(content, end='', flush=True)
Response Format (Stream)
data: {"choices":[{"delta":{"content":"Machine"},"index":0}]}
data: {"choices":[{"delta":{"content":" learning"},"index":0}]}
data: {"choices":[{"delta":{"content":" is..."},"index":0}]}
data: [DONE]
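The stream format above can be parsed with a small helper that extracts the delta content from each `data:` line and reassembles the full text (a sketch mirroring the parsing loops in the examples):

```javascript
// Reassemble streamed text from SSE lines of the form shown above.
function collectStreamContent(lines) {
  let text = '';
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const data = line.slice(6);
    if (data === '[DONE]') break;
    const parsed = JSON.parse(data);
    const content = parsed.choices[0]?.delta?.content;
    if (content) text += content;
  }
  return text;
}
```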
Image Generation
/images/generate
Generate images with DALL-E from text prompts.
Request Parameters
prompt
REQUIRED
String - Description of the image to generate
model
OPTIONAL
String - "dall-e-3" or "dall-e-2" (default: "dall-e-3")
size
OPTIONAL
String - "1024x1024", "1792x1024", "1024x1792" (default: "1024x1024")
quality
OPTIONAL
String - "standard" or "hd" (default: "standard")
Request Example
const response = await fetch('https://theserver-open-ai.replit.app/images/generate', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    prompt: 'A futuristic robot in a cyberpunk city',
    model: 'dall-e-3',
    size: '1024x1024',
    quality: 'hd'
  })
});

const data = await response.json();
console.log(data.data[0].url);
Response Format
{
  "created": 1711234567,
  "data": [
    {
      "url": "https://oaidalleapiprodscus.blob.core.windows.net/...",
      "revised_prompt": "A futuristic robot in a cyberpunk city..."
    }
  ]
}
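Since `size` and `quality` only accept the values listed above, a request can be checked client-side before it is sent (a sketch; the allowed values are taken directly from the parameter table):

```javascript
// Validate image generation parameters against the allowed values above.
const ALLOWED_SIZES = ['1024x1024', '1792x1024', '1024x1792'];
const ALLOWED_QUALITIES = ['standard', 'hd'];

function validateImageRequest({ prompt, size = '1024x1024', quality = 'standard' }) {
  const errors = [];
  if (!prompt || !prompt.trim()) errors.push('prompt is required');
  if (!ALLOWED_SIZES.includes(size)) errors.push(`invalid size: ${size}`);
  if (!ALLOWED_QUALITIES.includes(quality)) errors.push(`invalid quality: ${quality}`);
  return errors; // empty array means the request is valid
}
```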
Audio Processing
Text-to-Speech
/audio/speech
Parameters
input
REQUIRED
String - The text to be spoken
voice
REQUIRED
String - "alloy", "echo", "fable", "onyx", "nova", "shimmer"
model
OPTIONAL
String - "tts-1" or "tts-1-hd" (default: "tts-1")
Request Example
const response = await fetch('https://theserver-open-ai.replit.app/audio/speech', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    input: 'Hello, I am an AI-generated voice!',
    voice: 'alloy',
    model: 'tts-1-hd'
  })
});

// Browser only: play the returned audio via an object URL
const audioBlob = await response.blob();
const audioUrl = URL.createObjectURL(audioBlob);
const audio = new Audio(audioUrl);
audio.play();
Transcriptions
/audio/transcriptions
Parameters
file
REQUIRED
File - Audio file (mp3, mp4, wav, etc.)
model
OPTIONAL
String - "whisper-1" (default)
Request Example
const formData = new FormData();
formData.append('file', audioFile);
formData.append('model', 'whisper-1');

const response = await fetch('https://theserver-open-ai.replit.app/audio/transcriptions', {
  method: 'POST',
  headers: {
    // No Content-Type here: the browser sets the multipart boundary automatically
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: formData
});
const data = await response.json();
console.log(data.text);
Embeddings
/embeddings
Generate vector embeddings for text (for semantic search, clustering, etc.).
Request Parameters
input
REQUIRED
String or Array - Text(s) to embed
model
REQUIRED
String - "text-embedding-3-small", "text-embedding-3-large", "text-embedding-ada-002"
Request Example
const response = await fetch('https://theserver-open-ai.replit.app/embeddings', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    input: 'Machine learning is a subfield of AI',
    model: 'text-embedding-3-small'
  })
});
const data = await response.json();
console.log(data.data[0].embedding);
Response Format
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023, -0.009, 0.015, ...]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
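For semantic search, embedding vectors are typically compared by cosine similarity. A minimal implementation:

```javascript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
// Returns 1 for identical directions, 0 for orthogonal vectors.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) throw new Error('vectors must have the same length');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```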
Moderations
/moderations
Check text for potentially problematic content (hate, violence, sexual content, etc.).
Request Parameters
input
REQUIRED
String - The text to check
Request Example
const response = await fetch('https://theserver-open-ai.replit.app/moderations', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer YOUR_API_KEY'
  },
  body: JSON.stringify({
    input: 'This is a sample text for moderation'
  })
});
const data = await response.json();
console.log(data.results[0]);
Response Format
{
  "id": "modr-...",
  "model": "text-moderation-007",
  "results": [
    {
      "flagged": false,
      "categories": {
        "sexual": false,
        "hate": false,
        "violence": false,
        "self-harm": false,
        "sexual/minors": false,
        "hate/threatening": false,
        "violence/graphic": false
      },
      "category_scores": {
        "sexual": 0.00001,
        "hate": 0.00001,
        "violence": 0.00001,
        ...
      }
    }
  ]
}
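A small helper (a sketch based on the response shape above) that lists which categories were flagged in a result:

```javascript
// Return the names of all categories marked true in a moderation result.
function flaggedCategories(result) {
  return Object.entries(result.categories || {})
    .filter(([, flagged]) => flagged)
    .map(([name]) => name);
}
```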
Models
/models
Retrieve a list of all available OpenAI models.
Request Example
const response = await fetch('https://theserver-open-ai.replit.app/models', {
  headers: {
    'Authorization': 'Bearer YOUR_API_KEY'
  }
});
const data = await response.json();
console.log(data.data);
Response Format
{
  "object": "list",
  "data": [
    {
      "id": "gpt-4o",
      "object": "model",
      "created": 1687882411,
      "owned_by": "openai"
    },
    {
      "id": "gpt-4o-mini",
      "object": "model",
      "created": 1687882411,
      "owned_by": "openai"
    },
    ...
  ]
}
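The model list can be filtered client-side, e.g. to find all variants with a given prefix (a sketch over the response shape above):

```javascript
// Filter the /models response for ids starting with a given prefix.
function filterModels(response, prefix) {
  return (response.data || [])
    .map(model => model.id)
    .filter(id => id.startsWith(prefix));
}
```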
Integration Guide
Quickstart for AI Agents
This API is optimized for integration into chatbots and AI agents. Follow these steps:
1. Set up the API key
Store your OpenAI API key securely in environment variables
2. Configure the base URL
Use: https://theserver-open-ai.replit.app
3. Set the request headers
Content-Type: application/json + Authorization: Bearer {API_KEY}
4. Choose an endpoint
Pick the endpoint that matches your use case (chat, images, etc.)
5. Implement error handling
Catch errors and handle rate limits correctly
Example: Chatbot Integration
class OpenAIChatbot {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.baseURL = 'https://theserver-open-ai.replit.app';
    this.conversationHistory = [];
  }

  async sendMessage(userMessage) {
    this.conversationHistory.push({
      role: 'user',
      content: userMessage
    });
    try {
      const response = await fetch(`${this.baseURL}/chat/completions`, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${this.apiKey}`
        },
        body: JSON.stringify({
          model: 'gpt-4o',
          messages: this.conversationHistory,
          temperature: 0.7,
          stream: true
        })
      });
      if (!response.ok) {
        throw new Error(`API Error: ${response.status}`);
      }
      let fullResponse = '';
      const reader = response.body.getReader();
      const decoder = new TextDecoder();
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // { stream: true } keeps multi-byte characters intact across chunk boundaries
        const chunk = decoder.decode(value, { stream: true });
        const lines = chunk.split('\n').filter(line => line.trim());
        for (const line of lines) {
          if (line.startsWith('data: ')) {
            const data = line.slice(6);
            if (data === '[DONE]') continue;
            const parsed = JSON.parse(data);
            const content = parsed.choices[0]?.delta?.content;
            if (content) {
              fullResponse += content;
            }
          }
        }
      }
      this.conversationHistory.push({
        role: 'assistant',
        content: fullResponse
      });
      return fullResponse;
    } catch (error) {
      console.error('Chatbot Error:', error);
      throw error;
    }
  }

  clearHistory() {
    this.conversationHistory = [];
  }
}

const bot = new OpenAIChatbot('YOUR_API_KEY');
const answer = await bot.sendMessage('Hello!');
TypeScript Type Definitions
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface ChatCompletionRequest {
  model: string;
  messages: ChatMessage[];
  temperature?: number;
  stream?: boolean;
  max_tokens?: number;
}

interface ImageGenerationRequest {
  prompt: string;
  model?: 'dall-e-3' | 'dall-e-2';
  size?: '1024x1024' | '1792x1024' | '1024x1792';
  quality?: 'standard' | 'hd';
}

interface EmbeddingRequest {
  input: string | string[];
  model: 'text-embedding-3-small' | 'text-embedding-3-large' | 'text-embedding-ada-002';
}

interface ModerationRequest {
  input: string;
}
Safety & Rate Limits
Rate Limiting Best Practices
- Exponential backoff: on 429 errors, wait exponentially longer between retries (1s, 2s, 4s, 8s...)
- Request queuing: implement a queue for API requests
- Caching: cache frequent requests to save costs
- Timeout handling: set reasonable timeouts (30-60s for chat)
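The caching bullet above can be sketched as a tiny in-memory cache with a time-to-live (an illustration, not a production cache; keying on the serialized request body is an assumption):

```javascript
// Minimal in-memory cache with a time-to-live, for deduplicating repeated requests.
class TTLCache {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.store = new Map();
  }
  // `now` is injectable for testing; defaults to the current time.
  get(key, now = Date.now()) {
    const entry = this.store.get(key);
    if (!entry || now - entry.at > this.ttlMs) return undefined; // expired or absent
    return entry.value;
  }
  set(key, value, now = Date.now()) {
    this.store.set(key, { value, at: now });
  }
}
```

Before calling the API, check the cache with the stringified request as the key; on a hit, skip the network call entirely.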
Safe API Call Implementation
async function safeAPICall(endpoint, options, maxRetries = 3) {
  let retries = 0;
  let delay = 1000;
  while (retries < maxRetries) {
    try {
      const response = await fetch(endpoint, options);
      if (response.status === 429) {
        console.log(`Rate limited. Waiting ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
        delay *= 2;
        retries++;
        continue;
      }
      if (!response.ok) {
        const error = await response.json();
        throw new Error(`API Error: ${error.error.message}`);
      }
      return response;
    } catch (error) {
      if (retries === maxRetries - 1) {
        throw error;
      }
      retries++;
      await new Promise(resolve => setTimeout(resolve, delay));
      delay *= 2;
    }
  }
  // All retries were consumed by 429 responses; fail explicitly instead of returning undefined
  throw new Error('Max retries exceeded');
}

const response = await safeAPICall(
  'https://theserver-open-ai.replit.app/chat/completions',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${API_KEY}`
    },
    body: JSON.stringify({...})
  }
);