╔═══════════════════════════════════════════════════════════════════════════════╗
║                  UNIVERSAL AI & FINANCIAL DATA PROXY SERVER                   ║
║                         API INTEGRATION MASTER GUIDE                          ║
╚═══════════════════════════════════════════════════════════════════════════════╝

BASE URL: https://theserver-open-ai.replit.app
TOTAL ENDPOINTS: 101 across 7 major AI/Data providers

═══════════════════════════════════════════════════════════════════════════════
📋 TABLE OF CONTENTS
═══════════════════════════════════════════════════════════════════════════════

1. OpenAI API (18 endpoints) - Chat, Images, Audio, Embeddings, Models, Files, Assistants, Threads
2. Mistral AI (36 endpoints) - Chat, Embeddings, FIM, Files, Fine-tuning, Agents, Conversations
3. Claude/Anthropic (10 endpoints) - Messages, Batches, Files, Organization, Usage
4. Perplexity AI (3 endpoints) - Chat, Search, Cache Stats
5. Google Gemini (14 endpoints) - Generate, Stream, Embeddings, Files, Images, Video
6. RapidAPI Services (12 endpoints) - Jobs, Yahoo Finance, Google Search, Scraper, Amazon
7. EOD Historical Data (8 endpoints) - Stock Market Data, News, Dividends, Search

TOTAL: 101 ENDPOINTS

═══════════════════════════════════════════════════════════════════════════════
🤖 MASTER PROMPT FOR LLMs - HOW TO USE THIS DOCUMENT
═══════════════════════════════════════════════════════════════════════════════

PROMPT TEMPLATE:
"""
Convert my hardcoded HTML page to use the Universal AI Proxy Server.

Backend URL: https://theserver-open-ai.replit.app
Service I need: [OpenAI/Mistral/Claude/Perplexity/Gemini/RapidAPI/EOD]
Specific endpoint: [endpoint name from this document]

Requirements:
1. Replace any direct API calls with calls to the proxy server
2. Use the exact endpoint URL and request format specified below
3. Handle errors properly (display user-friendly messages)
4. Add loading states during API calls
5. Parse and display the response data correctly

[Copy the INTEGRATION CODE from the specific endpoint section below]
"""

═══════════════════════════════════════════════════════════════════════════════
1️⃣ OPENAI API - 18 ENDPOINTS
═══════════════════════════════════════════════════════════════════════════════

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1.1 CHAT COMPLETIONS (Streaming Support)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /api/chat
RATE LIMIT: 20 requests/minute

REQUEST BODY:
{
  "model": "gpt-4o",        // or "gpt-4o-mini", "gpt-4-turbo", "gpt-3.5-turbo"
  "messages": [
    { "role": "system", "content": "You are a helpful assistant" },
    { "role": "user", "content": "Hello!" }
  ],
  "stream": true,           // optional, enables streaming
  "temperature": 0.7,       // optional, 0-2
  "max_tokens": 1000        // optional
}

RESPONSE (Non-streaming):
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "gpt-4o",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! How can I help?"
    },
    "finish_reason": "stop"
  }],
  "usage": { "prompt_tokens": 10, "completion_tokens": 8, "total_tokens": 18 }
}

RESPONSE (Streaming):
data: {"id":"chatcmpl-...","object":"chat.completion.chunk","created":1234567890,"model":"gpt-4o","choices":[{"index":0,"delta":{"role":"assistant","content":"Hello"},"finish_reason":null}]}

INTEGRATION CODE:
```javascript
const BACKEND_URL = 'https://theserver-open-ai.replit.app';

// Non-streaming example
async function sendChatMessage(userMessage) {
  try {
    const response = await fetch(`${BACKEND_URL}/api/chat`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: userMessage }],
        temperature: 0.7
      })
    });
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    const data = await response.json();
    return data.choices[0].message.content;
  } catch (error) {
    console.error('Chat error:', error);
    throw error;
  }
}

// Streaming example
async function sendChatStreaming(userMessage, onChunk) {
  const response = await fetch(`${BACKEND_URL}/api/chat`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: userMessage }],
      stream: true
    })
  });

  // Check for errors before reading the stream
  if (!response.ok) {
    const error = await response.json().catch(() => ({ error: 'Unknown error' }));
    throw new Error(error.error || `HTTP ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = ''; // holds any partial SSE line between reads

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // An SSE event can be split across network chunks, so buffer partial lines
    // instead of parsing each chunk in isolation (which silently drops tokens).
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the trailing partial line for the next chunk
    for (const line of lines) {
      const trimmed = line.trim();
      if (!trimmed.startsWith('data: ')) continue;
      const data = trimmed.slice('data: '.length);
      if (data === '[DONE]') return;
      try {
        const parsed = JSON.parse(data);
        const content = parsed.choices[0]?.delta?.content;
        if (content) onChunk(content);
      } catch (e) { /* ignore malformed or keep-alive lines */ }
    }
  }
}
```

LLM PROMPT:
"""
Update
my chat interface to use the OpenAI proxy at https://theserver-open-ai.replit.app/api/chat

Current hardcoded behavior: [describe your current implementation]

Requirements:
- Replace direct OpenAI API calls with proxy calls
- Support both streaming and non-streaming modes
- Use gpt-4o model by default
- Show loading state during requests
- Display streaming responses character by character
- Handle errors gracefully

Use the INTEGRATION CODE above as reference.
"""

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1.2 EMBEDDINGS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /api/embeddings

REQUEST BODY:
{
  "input": "Your text here",           // or array of strings
  "model": "text-embedding-3-small"    // or "text-embedding-3-large", "text-embedding-ada-002"
}

RESPONSE:
{
  "object": "list",
  "data": [{
    "object": "embedding",
    "index": 0,
    "embedding": [0.123, -0.456, ...]  // vector array
  }],
  "model": "text-embedding-3-small",
  "usage": { "prompt_tokens": 8, "total_tokens": 8 }
}

INTEGRATION CODE:
```javascript
async function generateEmbeddings(text) {
  const response = await fetch(`${BACKEND_URL}/api/embeddings`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ input: text, model: 'text-embedding-3-small' })
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  return data.data[0].embedding; // Returns vector array
}
```

LLM PROMPT:
"""
Convert my embedding generation to use https://theserver-open-ai.replit.app/api/embeddings
Replace any hardcoded calls with the proxy endpoint shown above.
"""

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1.3 IMAGE GENERATION
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /api/images/generate
RATE LIMIT: 10 requests/hour

REQUEST BODY:
{
  "prompt": "A futuristic city at sunset",
  "model": "dall-e-3",       // or "dall-e-2"
  "n": 1,                    // number of images (1-10 for dall-e-2, only 1 for dall-e-3)
  "size": "1024x1024",       // "256x256", "512x512", "1024x1024", "1792x1024", "1024x1792"
  "quality": "standard",     // "standard" or "hd" (dall-e-3 only)
  "style": "vivid"           // "vivid" or "natural" (dall-e-3 only)
}

RESPONSE:
{
  "created": 1234567890,
  "data": [{
    "url": "https://oaidalleapiprodscus.blob.core.windows.net/...",
    "revised_prompt": "..."  // dall-e-3 only
  }]
}

INTEGRATION CODE:
```javascript
async function generateImage(prompt) {
  const response = await fetch(`${BACKEND_URL}/api/images/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      prompt: prompt,
      model: 'dall-e-3',
      size: '1024x1024',
      quality: 'standard'
    })
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  return data.data[0].url; // Returns image URL
}
```

LLM PROMPT:
"""
Update my image generation feature to use https://theserver-open-ai.replit.app/api/images/generate
Replace hardcoded DALL-E calls with the proxy. Display the generated image in an <img> tag.
"""

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1.4 IMAGE EDIT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /api/images/edit
RATE LIMIT: 10 requests/hour
CONTENT-TYPE: multipart/form-data

FORM DATA:
- image: (file) PNG image to edit
- mask: (file) PNG mask image (optional)
- prompt: (string) Description of the edit
- model: (string) "dall-e-2" (default)
- n: (number) 1-10
- size: (string) "256x256", "512x512", or "1024x1024"

INTEGRATION CODE:
```javascript
async function editImage(imageFile, maskFile, prompt) {
  const formData = new FormData();
  formData.append('image', imageFile);
  if (maskFile) formData.append('mask', maskFile);
  formData.append('prompt', prompt);
  formData.append('size', '1024x1024');

  const response = await fetch(`${BACKEND_URL}/api/images/edit`, {
    method: 'POST',
    body: formData // No Content-Type header needed with FormData
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  return data.data[0].url;
}
```

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
1.5 IMAGE VARIATIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /api/images/variations
RATE LIMIT: 10 requests/hour
CONTENT-TYPE: multipart/form-data

FORM DATA:
- image: (file) PNG image
- model: (string) "dall-e-2"
- n: (number) 1-10
- size: (string) "256x256", "512x512", or "1024x1024"

INTEGRATION CODE:
```javascript
async function createImageVariations(imageFile) {
  const formData = new FormData();
  formData.append('image', imageFile);
  formData.append('n', '2');
  formData.append('size', '1024x1024');

  const response = await fetch(`${BACKEND_URL}/api/images/variations`, {
    method: 'POST',
    body: formData
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  return data.data.map(img => img.url); // Returns array of image URLs
}
```
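NOTE ON RATE LIMITS: The image endpoints above share a tight 10 requests/hour limit. A small wrapper that retries on HTTP 429 with exponential backoff makes client code more forgiving. This is a minimal sketch under one assumption: that the proxy signals a hit limit with a plain 429 status (its exact rate-limit response format is not documented here).

```javascript
const BACKEND_URL = 'https://theserver-open-ai.replit.app';

// Delay (in ms) before retry attempt n: 1s, 2s, 4s, ...
function backoffDelay(attempt, baseMs = 1000) {
  return baseMs * 2 ** attempt;
}

// fetch() wrapper that retries rate-limited requests.
// Assumption: the proxy returns a bare HTTP 429 when a limit is hit.
async function proxyFetch(path, options = {}, maxRetries = 2) {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(`${BACKEND_URL}${path}`, options);
    if (response.status !== 429 || attempt >= maxRetries) {
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return response;
    }
    // Rate limited and retries remain: wait, then try again.
    await new Promise(resolve => setTimeout(resolve, backoffDelay(attempt)));
  }
}
```

Usage: swap `fetch(`${BACKEND_URL}/api/images/generate`, opts)` for `proxyFetch('/api/images/generate', opts)` in any of the integration snippets above; the error handling stays the same.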
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6 AUDIO TRANSCRIPTIONS (Whisper) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /api/audio/transcriptions CONTENT-TYPE: multipart/form-data FORM DATA: - file: (audio file) mp3, mp4, mpeg, mpga, m4a, wav, or webm - model: (string) "whisper-1" - language: (string) ISO-639-1 code (optional, e.g., "en", "de") - response_format: (string) "json", "text", "srt", "verbose_json", or "vtt" RESPONSE: { "text": "Transcribed audio content here..." } INTEGRATION CODE: ```javascript async function transcribeAudio(audioFile) { const formData = new FormData(); formData.append('file', audioFile); formData.append('model', 'whisper-1'); formData.append('language', 'en'); const response = await fetch(`${BACKEND_URL}/api/audio/transcriptions`, { method: 'POST', body: formData }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return data.text; } ``` LLM PROMPT: """ Add audio transcription using https://theserver-open-ai.replit.app/api/audio/transcriptions Allow users to upload audio files and display the transcribed text. """ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7 AUDIO TRANSLATIONS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /api/audio/translations CONTENT-TYPE: multipart/form-data FORM DATA: - file: (audio file) Non-English audio - model: (string) "whisper-1" - response_format: (string) "json" (default), "text", "srt", "verbose_json", or "vtt" RESPONSE: { "text": "Translated to English content..." 
} INTEGRATION CODE: ```javascript async function translateAudio(audioFile) { const formData = new FormData(); formData.append('file', audioFile); formData.append('model', 'whisper-1'); const response = await fetch(`${BACKEND_URL}/api/audio/translations`, { method: 'POST', body: formData }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return data.text; } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8 TEXT-TO-SPEECH (TTS) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /api/audio/speech REQUEST BODY: { "model": "tts-1", // or "tts-1-hd" "input": "Hello, this is a test", "voice": "alloy", // "alloy", "echo", "fable", "onyx", "nova", "shimmer" "response_format": "mp3", // "mp3", "opus", "aac", "flac" "speed": 1.0 // 0.25 to 4.0 } RESPONSE: Audio file (binary) INTEGRATION CODE: ```javascript async function generateSpeech(text, voice = 'alloy') { const response = await fetch(`${BACKEND_URL}/api/audio/speech`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ model: 'tts-1', input: text, voice: voice }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const audioBlob = await response.blob(); const audioUrl = URL.createObjectURL(audioBlob); // Play audio const audio = new Audio(audioUrl); audio.play(); return audioUrl; } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.9 MODERATIONS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /api/moderations REQUEST BODY: { "input": "Text to check for violations" } RESPONSE: { "id": "modr-...", "model": "text-moderation-007", "results": [{ "flagged": false, "categories": { "sexual": false, "hate": false, "harassment": false, "self-harm": false, "sexual/minors": false, "hate/threatening": false, "violence/graphic": false, "self-harm/intent": false, 
"self-harm/instructions": false, "harassment/threatening": false, "violence": false }, "category_scores": { "sexual": 0.00001, "hate": 0.00002, ... } }] } INTEGRATION CODE: ```javascript async function moderateContent(text) { const response = await fetch(`${BACKEND_URL}/api/moderations`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ input: text }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return data.results[0]; // Returns moderation result } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.10 LIST MODELS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /api/models RESPONSE: { "object": "list", "data": [ { "id": "gpt-4o", "object": "model", "created": 1234567890, "owned_by": "system" }, ... ] } INTEGRATION CODE: ```javascript async function listModels() { const response = await fetch(`${BACKEND_URL}/api/models`); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return data.data; // Returns array of models } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.11-1.13 FILES MANAGEMENT ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ UPLOAD FILE: POST /api/files (multipart/form-data) LIST FILES: GET /api/files DELETE FILE: DELETE /api/files/:file_id INTEGRATION CODE: ```javascript // Upload file async function uploadFile(file, purpose = 'assistants') { const formData = new FormData(); formData.append('file', file); formData.append('purpose', purpose); const response = await fetch(`${BACKEND_URL}/api/files`, { method: 'POST', body: formData }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } // List files async function listFiles() { const response = await fetch(`${BACKEND_URL}/api/files`); if (!response.ok) throw new Error(`HTTP 
${response.status}`); return await response.json(); } // Delete file async function deleteFile(fileId) { const response = await fetch(`${BACKEND_URL}/api/files/${fileId}`, { method: 'DELETE' }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.14-1.18 ASSISTANTS & THREADS (5 ENDPOINTS) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ CREATE ASSISTANT: POST /api/assistants LIST ASSISTANTS: GET /api/assistants CREATE THREAD: POST /api/threads ADD MESSAGE TO THREAD: POST /api/threads/:thread_id/messages RUN THREAD: POST /api/threads/:thread_id/runs INTEGRATION CODE: ```javascript // Create assistant async function createAssistant(name, instructions, model = 'gpt-4o') { const response = await fetch(`${BACKEND_URL}/api/assistants`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ name, instructions, model }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } // List assistants async function listAssistants() { const response = await fetch(`${BACKEND_URL}/api/assistants`); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } // Create thread async function createThread(messages = []) { const response = await fetch(`${BACKEND_URL}/api/threads`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ messages }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } // Add message to thread async function addMessageToThread(threadId, content) { const response = await fetch(`${BACKEND_URL}/api/threads/${threadId}/messages`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ role: 'user', content: content }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return 
await response.json(); } // Run thread with assistant async function runThread(threadId, assistantId, instructions = null) { const response = await fetch(`${BACKEND_URL}/api/threads/${threadId}/runs`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ assistant_id: assistantId, ...(instructions && { instructions }) }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } ``` LLM PROMPT: """ Add OpenAI Assistants support using the proxy at https://theserver-open-ai.replit.app Implement: 1. Create an assistant with custom instructions 2. Create a conversation thread 3. Add user messages to the thread 4. Run the assistant on the thread 5. Display the assistant's responses Use the integration code above. Handle async runs properly (poll for completion). """ ═══════════════════════════════════════════════════════════════════════════════ 2️⃣ MISTRAL AI - 36 ENDPOINTS ═══════════════════════════════════════════════════════════════════════════════ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.1 MISTRAL CHAT COMPLETIONS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /mistral/chat RATE LIMIT: 20 requests/minute REQUEST BODY: { "model": "mistral-large-latest", // or "mistral-medium-2508", "codestral-2508", "pixtral-large-latest" "messages": [ { "role": "user", "content": "Hello!" 
} ], "stream": true, // optional "temperature": 0.7, "max_tokens": 1000 } RESPONSE: Same format as OpenAI chat INTEGRATION CODE: ```javascript async function mistralChat(userMessage) { const response = await fetch(`${BACKEND_URL}/mistral/chat`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ model: 'mistral-large-latest', messages: [{ role: 'user', content: userMessage }] }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return data.choices[0].message.content; } ``` LLM PROMPT: """ Convert my chat to use Mistral AI via https://theserver-open-ai.replit.app/mistral/chat Use mistral-large-latest model. Support streaming responses. """ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.2 MISTRAL EMBEDDINGS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /mistral/embeddings REQUEST BODY: { "model": "mistral-embed", "input": ["Text to embed"] } INTEGRATION CODE: ```javascript async function mistralEmbeddings(texts) { const response = await fetch(`${BACKEND_URL}/mistral/embeddings`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ model: 'mistral-embed', input: Array.isArray(texts) ? 
texts : [texts] }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.3 MISTRAL FIM (Fill-in-Middle) COMPLETIONS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /mistral/fim/completions REQUEST BODY: { "model": "codestral-2508", "prompt": "def fibonacci(", "suffix": "\n return result", "max_tokens": 100 } INTEGRATION CODE: ```javascript async function fillInMiddle(prefix, suffix) { const response = await fetch(`${BACKEND_URL}/mistral/fim/completions`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ model: 'codestral-2508', prompt: prefix, suffix: suffix, max_tokens: 100 }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } ``` LLM PROMPT: """ Add code completion using Mistral FIM at https://theserver-open-ai.replit.app/mistral/fim/completions Implement autocomplete for code editor. 
"""

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.4-2.6 MISTRAL FILES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

UPLOAD: POST /mistral/files (multipart/form-data)
LIST: GET /mistral/files
GET: GET /mistral/files/:file_id
DELETE: DELETE /mistral/files/:file_id

INTEGRATION CODE:
```javascript
async function uploadMistralFile(file, purpose = 'fine-tune') {
  const formData = new FormData();
  formData.append('file', file);
  formData.append('purpose', purpose);

  const response = await fetch(`${BACKEND_URL}/mistral/files`, {
    method: 'POST',
    body: formData
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}
```

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.7-2.11 MISTRAL FINE-TUNING
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

CREATE JOB: POST /mistral/fine_tuning/jobs
LIST JOBS: GET /mistral/fine_tuning/jobs
GET JOB: GET /mistral/fine_tuning/jobs/:job_id
START JOB: POST /mistral/fine_tuning/jobs/:job_id/start
CANCEL JOB: POST /mistral/fine_tuning/jobs/:job_id/cancel
ARCHIVE MODEL: POST /mistral/fine_tuning/models/:model_id/archive
UNARCHIVE MODEL: POST /mistral/fine_tuning/models/:model_id/unarchive

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.12-2.15 MISTRAL BATCH JOBS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

CREATE: POST /mistral/batch/jobs
LIST: GET /mistral/batch/jobs
GET: GET /mistral/batch/jobs/:job_id
CANCEL: POST /mistral/batch/jobs/:job_id/cancel

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.16-2.17 MISTRAL MODERATIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

TEXT MODERATION: POST /mistral/moderations
CHAT MODERATION: POST /mistral/chat/moderations

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.18 MISTRAL AUDIO TRANSCRIPTIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /mistral/audio/transcriptions (multipart/form-data)

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.19-2.23 MISTRAL AGENTS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

CREATE: POST /mistral/agents
LIST: GET /mistral/agents
GET: GET /mistral/agents/:agent_id
UPDATE: PUT /mistral/agents/:agent_id
VERSION: POST /mistral/agents/:agent_id/version

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.24-2.30 MISTRAL CONVERSATIONS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

CREATE: POST /mistral/conversations
LIST: GET /mistral/conversations
GET: GET /mistral/conversations/:conversation_id
GET ENTRIES: GET /mistral/conversations/:conversation_id/entries
GET MESSAGES: GET /mistral/conversations/:conversation_id/messages
BRANCH: POST /mistral/conversations/:conversation_id/branch
COMPLETE: POST /mistral/conversations/:conversation_id/complete

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.31-2.34 MISTRAL LIBRARIES
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

CREATE: POST /mistral/libraries
LIST: GET /mistral/libraries
GET: GET /mistral/libraries/:library_id
DELETE: DELETE /mistral/libraries/:library_id

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
2.35 MISTRAL MODELS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

LIST MODELS: GET /mistral/models

INTEGRATION CODE:
```javascript
async function listMistralModels() {
  const response = await fetch(`${BACKEND_URL}/mistral/models`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}
```

═══════════════════════════════════════════════════════════════════════════════
3️⃣ CLAUDE/ANTHROPIC API - 10 ENDPOINTS
═══════════════════════════════════════════════════════════════════════════════ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1 CLAUDE MESSAGES (Chat) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /claude/messages RATE LIMIT: 20 requests/minute REQUEST BODY: { "model": "claude-sonnet-4-20250514", // or "claude-opus-4-20250514", "claude-haiku-4-20250514" "messages": [ { "role": "user", "content": "Hello!" } ], "max_tokens": 1024, "stream": true, // optional "temperature": 1.0 } RESPONSE (Non-streaming): { "id": "msg_...", "type": "message", "role": "assistant", "content": [{ "type": "text", "text": "Hello! How can I help?" }], "model": "claude-sonnet-4-20250514", "usage": { "input_tokens": 10, "output_tokens": 8 } } INTEGRATION CODE: ```javascript async function claudeChat(userMessage) { const response = await fetch(`${BACKEND_URL}/claude/messages`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ model: 'claude-sonnet-4-20250514', messages: [{ role: 'user', content: userMessage }], max_tokens: 1024 }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return data.content[0].text; } ``` LLM PROMPT: """ Convert my chat to use Claude via https://theserver-open-ai.replit.app/claude/messages Use claude-sonnet-4-20250514 model. Support streaming. 
""" ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.2-3.4 CLAUDE MESSAGE BATCHES ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ CREATE BATCH: POST /claude/messages/batches GET BATCH: GET /claude/messages/batches/:batch_id GET RESULTS: GET /claude/messages/batches/:batch_id/results ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.5 CLAUDE COUNT TOKENS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /claude/messages/count_tokens REQUEST BODY: { "model": "claude-sonnet-4-20250514", "messages": [{ "role": "user", "content": "Hello!" }] } RESPONSE: { "input_tokens": 10 } ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.6-3.8 CLAUDE FILES ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ UPLOAD: POST /claude/files (multipart/form-data) GET: GET /claude/files/:file_id DELETE: DELETE /claude/files/:file_id ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.9-3.10 CLAUDE ORGANIZATION ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ GET ORG: GET /claude/organizations/:organization_id GET USAGE: GET /claude/organization/usage ═══════════════════════════════════════════════════════════════════════════════ 4️⃣ PERPLEXITY AI - 3 ENDPOINTS ═══════════════════════════════════════════════════════════════════════════════ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.1 PERPLEXITY CHAT (Sonar) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /perplexity/chat/completions RATE LIMIT: 20 requests/minute REQUEST BODY: { "model": "sonar", // or "sonar-pro", "sonar-reasoning" "messages": [ { "role": "user", "content": "What is the weather today?" 
} ], "stream": false } RESPONSE: { "id": "...", "model": "sonar", "object": "chat.completion", "created": 1234567890, "choices": [{ "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Based on current data..." }, "delta": { "role": "assistant", "content": "" } }], "usage": { "prompt_tokens": 15, "completion_tokens": 20, "total_tokens": 35 }, "citations": ["https://example.com/weather"] // Web-grounded responses } INTEGRATION CODE: ```javascript async function perplexityChat(userMessage) { const response = await fetch(`${BACKEND_URL}/perplexity/chat/completions`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ model: 'sonar', messages: [{ role: 'user', content: userMessage }] }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return { answer: data.choices[0].message.content, citations: data.citations // Web sources }; } ``` LLM PROMPT: """ Add web-grounded search using Perplexity at https://theserver-open-ai.replit.app/perplexity/chat/completions Display both the answer and the web citations/sources. 
""" ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.2 PERPLEXITY SEARCH ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /perplexity/search ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.3 PERPLEXITY CACHE STATS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /perplexity/search/cache-stats ═══════════════════════════════════════════════════════════════════════════════ 5️⃣ GOOGLE GEMINI API - 14 ENDPOINTS ═══════════════════════════════════════════════════════════════════════════════ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.1 GEMINI GENERATE CONTENT ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /gemini/models/:model/generateContent RATE LIMIT: 20 requests/minute REQUEST BODY: { "contents": [{ "parts": [{ "text": "Explain quantum computing" }] }] } INTEGRATION CODE: ```javascript async function geminiGenerate(prompt, model = 'gemini-2.0-flash-exp') { const response = await fetch(`${BACKEND_URL}/gemini/models/${model}/generateContent`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); const data = await response.json(); return data.candidates[0].content.parts[0].text; } ``` LLM PROMPT: """ Add Gemini AI chat at https://theserver-open-ai.replit.app/gemini/models/gemini-2.0-flash-exp/generateContent Use the integration code above. 
""" ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.2 GEMINI STREAM GENERATE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /gemini/models/:model/streamGenerateContent ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.3 GEMINI BATCH GENERATE ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /gemini/models/:model/batchGenerateContent ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.4-5.5 GEMINI EMBEDDINGS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ EMBED: POST /gemini/models/:model/embedContent BATCH EMBED: POST /gemini/models/:model/batchEmbedContent ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.6 GEMINI COUNT TOKENS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /gemini/models/:model/countTokens ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.7-5.8 GEMINI MODELS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ LIST: GET /gemini/models GET: GET /gemini/models/:model ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.9-5.12 GEMINI FILES ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ UPLOAD: POST /gemini/files (multipart/form-data) LIST: GET /gemini/files GET: GET /gemini/files/:file_id DELETE: DELETE /gemini/files/:file_id ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5.13 GEMINI IMAGEN 4 (Image Generation) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /gemini/models/imagen-4/generateImage RATE LIMIT: 10 requests/hour REQUEST BODY: { "prompt": "A futuristic city", "numberOfImages": 1, "aspectRatio": "1:1" // "1:1", "16:9", "9:16", "4:3", "3:4" } INTEGRATION CODE: ```javascript 
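// Helper used below. The response field is documented only as "base64 or URL",
// so this normalizer is an assumption-level sketch: it passes real URLs through
// and wraps anything else as a data: URI so it can be used directly as an
// <img> src. Adjust if the proxy's actual payload shape differs.
function toDisplayableSrc(image) {
  if (/^https?:\/\//.test(image)) return image; // already a URL
  return `data:image/png;base64,${image}`;      // assume raw base64 PNG data
}
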
async function geminiGenerateImage(prompt) {
  const response = await fetch(`${BACKEND_URL}/gemini/models/imagen-4/generateImage`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: prompt, numberOfImages: 1, aspectRatio: '1:1' })
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  const data = await response.json();
  return data.images[0].imageUrl; // base64 data or URL
}
```

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
5.14 GEMINI VEO 3 (Video Generation)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /gemini/models/veo-3/generateVideo

REQUEST BODY:
{
  "prompt": "A cat playing piano",
  "duration": "5s"
}

═══════════════════════════════════════════════════════════════════════════════
6️⃣ RAPIDAPI SERVICES - 12 ENDPOINTS
═══════════════════════════════════════════════════════════════════════════════

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6.1 JOB SEARCH (JSearch)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /rapidapi/jobs/search

REQUEST BODY:
{
  "query": "Python developer",
  "location": "New York, NY",
  "num_pages": 1
}

INTEGRATION CODE:
```javascript
async function searchJobs(query, location) {
  const response = await fetch(`${BACKEND_URL}/rapidapi/jobs/search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: query, location: location, num_pages: 1 })
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}
```

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6.2 JOB DETAILS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /rapidapi/jobs/details

REQUEST BODY:
{ "job_id": "abc123" }

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6.3 SALARY ESTIMATE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: POST /rapidapi/jobs/salary

REQUEST BODY:
{
  "job_title": "Software Engineer",
  "location": "San Francisco, CA",
  "radius": 100
}

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
6.4-6.9 YAHOO FINANCE
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

KEY STATISTICS:     GET /rapidapi/yahoo/key-statistics/:symbol
FINANCIAL ANALYSIS: GET /rapidapi/yahoo/financial-analysis/:symbol
EARNINGS TREND:     GET /rapidapi/yahoo/earnings-trend/:symbol
PRICE:              GET /rapidapi/yahoo/price/:symbol
MULTI QUOTE:        POST /rapidapi/yahoo/multi-quote
NEWS:               GET /rapidapi/yahoo/news/:symbol

INTEGRATION CODE:
```javascript
// Get stock price
async function getStockPrice(symbol) {
  const response = await fetch(`${BACKEND_URL}/rapidapi/yahoo/price/${symbol}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}

// Get multiple quotes
async function getMultiQuote(symbols) {
  const response = await fetch(`${BACKEND_URL}/rapidapi/yahoo/multi-quote`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ symbols: symbols }) // e.g. ["AAPL", "GOOGL", "MSFT"]
  });
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}

// Get stock news
async function getStockNews(symbol) {
  const response = await fetch(`${BACKEND_URL}/rapidapi/yahoo/news/${symbol}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}
```

LLM PROMPT:
"""
Add stock market data using the Yahoo Finance proxy at https://theserver-open-ai.replit.app/rapidapi/yahoo/*
Show real-time stock prices, news, and financial analysis.
Use the integration code above.
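For the multi-quote display, a small formatter keeps the UI code tidy. A sketch (the `symbol` and `regularMarketPrice` field names are assumptions about the upstream Yahoo payload; verify them against the real proxy response):

```javascript
// Turn a list of quote objects into display lines like "AAPL: 189.50".
// ASSUMPTION: each quote carries `symbol` and `regularMarketPrice` fields.
function formatQuotes(quotes) {
  return quotes
    .map(q => `${q.symbol}: ${Number(q.regularMarketPrice).toFixed(2)}`)
    .join('\n');
}
```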
""" ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.10 GOOGLE SEARCH ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /rapidapi/google/search REQUEST BODY: { "query": "machine learning tutorials", "limit": 10, "related_keywords": "true" } INTEGRATION CODE: ```javascript async function googleSearch(query) { const response = await fetch(`${BACKEND_URL}/rapidapi/google/search`, { method: 'POST', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ query: query, limit: 10, related_keywords: 'true' }) }); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.11 WEB SCRAPER (Contact Extraction) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /rapidapi/scraper/contacts REQUEST BODY: { "query": "https://example.com" } ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.12 AMAZON PRODUCT SEARCH ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: POST /rapidapi/amazon/search REQUEST BODY: { "query": "laptop", "page": "1", "country": "US" } ═══════════════════════════════════════════════════════════════════════════════ 7️⃣ EOD HISTORICAL DATA - 8 ENDPOINTS ═══════════════════════════════════════════════════════════════════════════════ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.1 HISTORICAL DATA ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /eod/historical/:symbol?from=YYYY-MM-DD&to=YYYY-MM-DD&period=d INTEGRATION CODE: ```javascript async function getHistoricalData(symbol, from, to) { const params = new URLSearchParams({ from, to, period: 'd' }); const response = await fetch(`${BACKEND_URL}/eod/historical/${symbol}?${params}`); if (!response.ok) throw new Error(`HTTP 
${response.status}`); return await response.json(); } // Example usage const data = await getHistoricalData('AAPL.US', '2024-01-01', '2024-12-31'); ``` LLM PROMPT: """ Add stock chart using EOD Historical Data at https://theserver-open-ai.replit.app/eod/historical/:symbol Fetch historical OHLCV data and render a candlestick chart with Chart.js. """ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.2 REALTIME DATA ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /eod/realtime/:symbol INTEGRATION CODE: ```javascript async function getRealtimePrice(symbol) { const response = await fetch(`${BACKEND_URL}/eod/realtime/${symbol}`); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.3 INTRADAY DATA ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /eod/intraday/:symbol?interval=5m ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.4 FUNDAMENTALS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /eod/fundamentals/:symbol ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.5 SEARCH ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /eod/search/:query INTEGRATION CODE: ```javascript async function searchSymbol(query) { const response = await fetch(`${BACKEND_URL}/eod/search/${query}`); if (!response.ok) throw new Error(`HTTP ${response.status}`); return await response.json(); } ``` ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.6 EXCHANGE SYMBOLS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ ENDPOINT: GET /eod/exchange-symbols/:exchange ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 7.7 NEWS 
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: GET /eod/news?s=AAPL.US&limit=10

INTEGRATION CODE:
```javascript
async function getMarketNews(symbols, limit = 10) {
  const params = new URLSearchParams({ s: symbols, limit });
  const response = await fetch(`${BACKEND_URL}/eod/news?${params}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}
```

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
7.8 DIVIDENDS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

ENDPOINT: GET /eod/dividends/:symbol?from=YYYY-MM-DD&to=YYYY-MM-DD

INTEGRATION CODE:
```javascript
async function getDividends(symbol, from, to) {
  const params = new URLSearchParams({ from, to });
  const response = await fetch(`${BACKEND_URL}/eod/dividends/${symbol}?${params}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return await response.json();
}
```

═══════════════════════════════════════════════════════════════════════════════
🚀 QUICK START EXAMPLES
═══════════════════════════════════════════════════════════════════════════════

EXAMPLE 1: Convert OpenAI Chat Page
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

PROMPT FOR LLM:
"""
I have an HTML page that currently makes direct calls to OpenAI's API.
Convert it to use my proxy server at https://theserver-open-ai.replit.app

Current code:
```javascript
const response = await fetch('https://api.openai.com/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Authorization': 'Bearer ' + apiKey, // Exposed API key!
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({...})
});
```

Replace with:
```javascript
const response = await fetch('https://theserver-open-ai.replit.app/api/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' }, // No API key needed!
  body: JSON.stringify({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: userInput }],
    stream: true
  })
});
```

Also add streaming support and better error handling.
"""

EXAMPLE 2: Add Stock Market Dashboard
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

PROMPT FOR LLM:
"""
Create a stock market dashboard using the EOD Historical Data proxy.
Backend: https://theserver-open-ai.replit.app

Features needed:
1. Real-time price display using GET /eod/realtime/:symbol
2. Historical chart (7 days) using GET /eod/historical/:symbol
3. Latest news using GET /eod/news
4. Symbol search using GET /eod/search/:query

Use Chart.js for the candlestick chart. Make it responsive and modern-looking.
"""

EXAMPLE 3: Multi-Provider AI Chat
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

PROMPT FOR LLM:
"""
Build a chat interface that lets users switch between AI providers:
- OpenAI GPT-4o:    POST /api/chat
- Mistral Large:    POST /mistral/chat
- Claude Sonnet 4:  POST /claude/messages
- Perplexity Sonar: POST /perplexity/chat/completions
- Gemini Flash:     POST /gemini/models/gemini-2.0-flash-exp/generateContent

Backend: https://theserver-open-ai.replit.app
Add a dropdown to select the provider, then adjust the API call accordingly.
Show streaming responses for all providers.
"""

═══════════════════════════════════════════════════════════════════════════════
⚠️ ERROR HANDLING BEST PRACTICES
═══════════════════════════════════════════════════════════════════════════════

```javascript
// BACKEND_URL is the proxy base URL, e.g. 'https://theserver-open-ai.replit.app'
async function callAPI(endpoint, options) {
  try {
    const response = await fetch(`${BACKEND_URL}${endpoint}`, options);

    // Handle HTTP errors
    if (!response.ok) {
      const error = await response.json().catch(() => ({ error: 'Unknown error' }));
      throw new Error(error.error || `HTTP ${response.status}`);
    }

    return await response.json();
  } catch (error) {
    // Network errors
    if (error.message === 'Failed to fetch') {
      alert('Network error. Please check your connection.');
    }
    // Rate limit errors
    else if (error.message.includes('Rate limit')) {
      alert('Too many requests. Please wait a moment.');
    }
    // Other errors
    else {
      alert(`Error: ${error.message}`);
    }
    console.error('API Error:', error);
    throw error;
  }
}
```

═══════════════════════════════════════════════════════════════════════════════
📝 NOTES
═══════════════════════════════════════════════════════════════════════════════

1. No API keys are needed in the frontend - the proxy handles authentication
2. CORS is enabled - call from any domain
3. Rate limits protect the server from abuse
4. All endpoints support HTTPS
5. Streaming is available for chat endpoints
6. File uploads use multipart/form-data
7. Health check available at: GET /health
8. Root endpoint documentation: GET /

═══════════════════════════════════════════════════════════════════════════════
✅ CHECKLIST FOR LLM INTEGRATION
═══════════════════════════════════════════════════════════════════════════════

When converting a page to use the proxy, ensure:

□ Replace hardcoded API URLs with proxy URLs
□ Remove API key handling from the frontend
□ Update Content-Type headers as needed
□ Add loading states during API calls
□ Implement error handling
□ Support streaming responses (for chat)
□ Handle file uploads correctly (FormData)
□ Display results properly
□ Test all endpoints
□ Add user-friendly error messages

═══════════════════════════════════════════════════════════════════════════════
END OF DOCUMENTATION
═══════════════════════════════════════════════════════════════════════════════

Generated: 2025
Server: https://theserver-open-ai.replit.app
Total Endpoints: 101