StreamFix is a drop-in replacement for the OpenAI API. It proxies your requests to OpenRouter (or other providers), repairing broken JSON and validating schemas in real time.
from openai import OpenAI
client = OpenAI(
    base_url="https://streamfix.up.railway.app/v1",
    api_key="sk_YOUR_KEY"
)

# Use normally - works with any model
resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
Authenticate requests by sending your API key as a Bearer token in the Authorization header.
Authorization: Bearer sk_YOUR_API_KEY
Note: You can also Bring Your Own Key (BYOK) for upstream providers using the X-Provider-Authorization header.
Standard OpenAI-compatible chat completion endpoint. Supports JSON mode, streaming, and function calling.
| Param | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Target model (e.g. openai/gpt-4o) |
| messages | array | Yes | Chat history |
| stream | boolean | No | Enable streaming response |
| schema | object | No | JSON Schema for Contract Mode (3 credits). Use extra_body wrapper for OpenAI SDK. |
StreamFix supports Server-Sent Events (SSE) with **Smart Interception**. It repairs JSON syntax and strips `<think>` tags in real time with sub-millisecond overhead.
Real-Time Repair: Unquoted keys, trailing commas, and DeepSeek reasoning tags are fixed or stripped on the fly, so the client receives valid JSON chunks.
💡 Zero Latency: Our custom FSM engine buffers only the necessary tokens (approx. 10 characters) to ensure safe repair without delaying the stream.
# Chunks are yielded immediately as they arrive from upstream
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    stream=True
)
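# Consume the stream normally; repairs happen before chunks reach the client
# (sketch assuming standard OpenAI SDK streaming iteration)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)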
Retrieve the repaired JSON and validation metadata for any request (streaming or non-streaming).
{
"request_id": "req_abc123",
"status": "REPAIRED",
"repairs_applied": ["remove_trailing_comma"],
"repaired_content": "{\"status\": \"ok\"}",
"schema_valid": true,
"response_time_ms": 450
}
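A minimal sketch of fetching this record with the `requests` library, assuming the endpoint is `GET /result/{request_id}` (referenced later in the best practices), that it requires your StreamFix key, and that the ID comes from the `x-streamfix-request-id` response header:

```python
import requests

request_id = "req_abc123"  # taken from the x-streamfix-request-id response header

# Look up repair and validation metadata for a completed request
result = requests.get(
    f"https://streamfix.up.railway.app/result/{request_id}",
    headers={"Authorization": "Bearer sk_YOUR_API_KEY"},
    timeout=30,
).json()
print(result["status"], result["repairs_applied"])
```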
Enforce strict adherence to a JSON Schema. StreamFix will validate the output and return detailed errors if the model fails to comply.
# Note: Use extra_body for OpenAI SDK, or top-level for direct API calls
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    extra_body={
        "schema": {
            "type": "object",
            "required": ["id", "score"],
            "properties": {
                "id": {"type": "integer"},
                "score": {"type": "number"}
            }
        }
    }
)
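For direct API calls without the OpenAI SDK, `schema` is passed as a top-level field (see the parameter table above); a hedged sketch using the `requests` library:

```python
import requests

# Contract Mode over the raw API: "schema" sits at the top level of the request body
resp = requests.post(
    "https://streamfix.up.railway.app/v1/chat/completions",
    headers={"Authorization": "Bearer sk_YOUR_API_KEY"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Return an id and a score as JSON."}],
        "schema": {
            "type": "object",
            "required": ["id", "score"],
            "properties": {"id": {"type": "integer"}, "score": {"type": "number"}},
        },
    },
    timeout=60,
)
print(resp.headers.get("x-streamfix-schema-valid"))  # Contract Mode validation result
```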
We support all models via OpenRouter. Here are our recommended defaults for reliability.
- Best balance of speed and cost ($0.000005/req)
- Excellent for structured data extraction
- High-performance open-source model
StreamFix returns all repair and validation metadata in HTTP headers to maintain 100% OpenAI API compatibility in the response body.
x-streamfix-request-id: req_abc123def456
x-streamfix-credits-used: 1
x-streamfix-credits-remaining: 999
x-streamfix-mode: shared
x-streamfix-repair-status: applied
x-streamfix-repairs-applied: 2
x-streamfix-artifact-stored: false
x-streamfix-contract-mode: active
x-streamfix-schema-valid: true
x-streamfix-schema-errors: 0
x-streamfix-retry-count: 1 # If retry was triggered
x-streamfix-tool-args-repaired: 1 # Non-streaming only
| Header | Values | Description |
|---|---|---|
| x-streamfix-repair-status | applied \| none \| passthrough | Whether repairs were applied to the response |
| x-streamfix-repairs-applied | 0-N | Number of repairs made (trailing commas, quotes, etc.) |
| x-streamfix-schema-valid | true \| false | Schema validation result (Contract Mode only) |
| x-streamfix-artifact-stored | true \| false | Whether a repair artifact was saved (requires opt-in) |
| x-streamfix-tool-args-repaired | 0-N | Number of tool call arguments repaired (non-streaming only) |
| x-streamfix-retry-count | 0-1 | Number of retries attempted (Contract Mode only) |
| x-streamfix-client-request-id | string | Echoed correlation ID from the X-Request-Id header |
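To read these headers from the OpenAI SDK, the `with_raw_response` wrapper exposes the raw HTTP response alongside the parsed body; a minimal sketch:

```python
# Access StreamFix metadata headers without losing the parsed completion
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(raw.headers.get("x-streamfix-repair-status"))
print(raw.headers.get("x-streamfix-credits-remaining"))
completion = raw.parse()  # the usual ChatCompletion object
```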
| Code | Meaning | Solution |
|---|---|---|
| 401 | Unauthorized | Check your API Key in the Authorization header. |
| 402 | Payment Required | Insufficient credits. Top up at /account/purchase. |
| 429 | Rate Limited | You exceeded 60 requests/minute. Slow down. |
| 408 | Request Timeout | Streaming timeout exceeded (300 seconds). |
| 413 | Payload Too Large | Request exceeds 10MB limit. |
| 502 | Bad Gateway | Upstream provider (e.g. OpenAI) error. Retry. |
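Only 429 and 502 are worth retrying automatically; a hedged sketch using the OpenAI SDK's exception types (assuming its standard mapping of 429 to `RateLimitError` and 5xx to `InternalServerError`):

```python
import time

import openai  # exception classes raised by the OpenAI SDK

def create_with_retry(client, max_attempts=3, **kwargs):
    """Retry only transient failures: 429 (rate limit) and 5xx (upstream errors)."""
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(**kwargs)
        except (openai.RateLimitError, openai.InternalServerError):
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff
```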
Create a new account and receive an API key with 1000 free credits.
POST /account/create?email=user@example.com
{
"api_key": "sk_abc123...",
"email": "user@example.com",
"credits": 1000,
"message": "Account created! Save your API key - it won't be shown again."
}
Note: If the email already exists, the API key is rotated and existing credits are preserved.
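A minimal sketch of the same call with the `requests` library (assuming the account endpoints live on the same host as the `/v1` API):

```python
import requests

# Create an account; the returned API key is shown only once, so store it securely
resp = requests.post(
    "https://streamfix.up.railway.app/account/create",
    params={"email": "user@example.com"},
    timeout=30,
)
print(resp.json()["api_key"])
```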
Check your remaining credits. Requires authentication.
GET /account/balance
Authorization: Bearer sk_YOUR_API_KEY
{
"credits_remaining": 997,
"is_active": true
}
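For example (same host assumption as above):

```python
import requests

# Check remaining StreamFix credits
balance = requests.get(
    "https://streamfix.up.railway.app/account/balance",
    headers={"Authorization": "Bearer sk_YOUR_API_KEY"},
    timeout=30,
).json()
print(balance["credits_remaining"])
```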
Create a Stripe checkout session for purchasing credits. For convenience, this endpoint takes your account email instead of an API key.
POST /account/purchase-by-email
Content-Type: application/json
{
"email": "user@example.com"
}
{
"checkout_url": "https://checkout.stripe.com/...",
"credits": 10000,
"price_usd": 10.0
}
Use your own provider API keys to avoid using the shared pool.
from openai import OpenAI

client = OpenAI(
    base_url="https://streamfix.up.railway.app/v1",
    api_key="sk_YOUR_STREAMFIX_KEY",
    default_headers={
        "X-Provider-Authorization": "Bearer YOUR_OPENROUTER_KEY"
    }
)
Benefits: Use your own OpenRouter account, avoid shared pool rate limits, more control over costs. StreamFix credits are still deducted (1 per request, 3 for Contract Mode), but upstream costs use your account.
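If you only want to forward your key on specific calls, the OpenAI SDK also accepts per-request headers; a hedged sketch:

```python
# Per-request BYOK: forward your OpenRouter key for this call only
resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={"X-Provider-Authorization": "Bearer YOUR_OPENROUTER_KEY"},
)
```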
StreamFix automatically repairs broken JSON in tool_calls[].function.arguments for non-streaming requests.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Call test_function"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "test_function",
            "parameters": {
                "type": "object",
                "properties": {"arg": {"type": "string"}}
            }
        }
    }]
)
# Tool call arguments are automatically repaired if broken
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    args = tool_calls[0].function.arguments  # Already repaired JSON
Note: For streaming requests, tool call arguments are passed through as-is to ensure low latency. Use non-streaming for guaranteed tool call repair.
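Because the arguments string is repaired before it reaches you, it can be parsed directly; a short follow-up to the example above:

```python
import json

# The repaired arguments string is valid JSON, so this parse does not raise
parsed = json.loads(args)
print(parsed.get("arg"))
```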
Best practices:

- Use `response_format: {"type": "json_object"}` when possible. StreamFix repairs what breaks, but starting with valid intent helps.
- Watch the `x-streamfix-credits-remaining` response header to avoid 402 errors.
- Send an `X-Request-Id` or `X-Client-Request-Id` header to track requests across your system logs.
- Retrieve full repair and validation details for any request from `/result/{request_id}`.
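For the correlation-ID practice, a hedged sketch of sending `X-Request-Id` and reading the echoed header back via the SDK's raw-response wrapper:

```python
# The x-streamfix-client-request-id header echoes whatever ID you send
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={"X-Request-Id": "order-sync-42"},  # hypothetical ID from your own logs
)
print(raw.headers.get("x-streamfix-client-request-id"))  # "order-sync-42"
```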