API Reference

StreamFix is a drop-in replacement for the OpenAI API. It proxies your requests to OpenRouter (or other providers), repairing broken JSON and validating schemas in real time.

Recommended Setup (Python)
from openai import OpenAI

client = OpenAI(
    base_url="https://streamfix.up.railway.app/v1",
    api_key="sk_YOUR_KEY"
)

# Use normally - works with any model
resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)

Authentication

Authenticate requests via the Authorization header using your Bearer token.

Authorization: Bearer sk_YOUR_API_KEY

Note: You can also Bring Your Own Key (BYOK) for upstream providers using the X-Provider-Authorization header.
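For illustration, here is the same call as a raw HTTP request using Python's requests library (a sketch; the URL, headers, and endpoint are the ones documented on this page):

import requests

# Authorization carries your StreamFix key on every request.
# X-Provider-Authorization is only needed if you bring your own upstream key (BYOK).
resp = requests.post(
    "https://streamfix.up.railway.app/v1/chat/completions",
    headers={
        "Authorization": "Bearer sk_YOUR_API_KEY",
        # "X-Provider-Authorization": "Bearer YOUR_OPENROUTER_KEY",
    },
    json={
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])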

POST /v1/chat/completions

Standard OpenAI-compatible chat completion endpoint. Supports JSON mode, streaming, and function calling.

Parameters

| Param | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Target model (e.g. openai/gpt-4o) |
| messages | array | Yes | Chat history |
| stream | boolean | No | Enable streaming response |
| schema | object | No | JSON Schema for Contract Mode (3 credits). Use the extra_body wrapper with the OpenAI SDK. |
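For example (a sketch reusing the client from the Recommended Setup), a non-streaming call that combines these parameters with OpenAI-style JSON mode; standard OpenAI parameters such as response_format pass straight through to the upstream provider:

# Ask for JSON explicitly; StreamFix repairs the output if the model still breaks syntax.
resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "List three colors as a JSON object."}],
    response_format={"type": "json_object"},
)
print(resp.choices[0].message.content)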

Streaming

StreamFix supports Server-Sent Events (SSE) with **Smart Interception**. It repairs JSON syntax and strips <think> tags in real time with sub-millisecond overhead.

Real-Time Repair: Unquoted keys, trailing commas, and DeepSeek-style <think> tags are fixed or stripped on the fly. The client receives valid JSON chunks.

💡 Near-Zero Latency: Our custom FSM engine buffers only the tokens it needs (roughly 10 characters) to repair safely without delaying the stream.

Example

# Chunks are yielded immediately as they arrive from upstream
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    stream=True
)

# Repaired deltas stream back exactly like a normal OpenAI response
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

GET /result/{request_id}

Retrieve the repaired JSON and validation metadata for any request (streaming or non-streaming).

{
  "request_id": "req_abc123",
  "status": "REPAIRED",
  "repairs_applied": ["remove_trailing_comma"],
  "repaired_content": "{\"status\": \"ok\"}",
  "schema_valid": true,
  "response_time_ms": 450
}
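
A retrieval sketch with requests (assuming the endpoint sits at the service root and accepts your StreamFix Bearer token; the ID comes from the x-streamfix-request-id response header):

import requests

request_id = "req_abc123"  # from the x-streamfix-request-id header
resp = requests.get(
    f"https://streamfix.up.railway.app/result/{request_id}",
    headers={"Authorization": "Bearer sk_YOUR_API_KEY"},
)
result = resp.json()
print(result["status"], result["repairs_applied"])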

Schema Validation (Contract Mode)

Enforce strict adherence to a JSON Schema. StreamFix will validate the output and return detailed errors if the model fails to comply.

# Note: Use extra_body for OpenAI SDK, or top-level for direct API calls
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    extra_body={
        "schema": {
            "type": "object",
            "required": ["id", "score"],
            "properties": {
                "id": {"type": "integer"},
                "score": {"type": "number"}
            }
        }
    }
)

Cost: 3 credits per request (validation only).

Supported Models

We support all models via OpenRouter. Here are our recommended defaults for reliability.

  • openai/gpt-4o-mini: Best balance of speed and cost ($0.000005/req)
  • anthropic/claude-3.5-haiku: Excellent for structured data extraction
  • qwen/qwen-2.5-72b-instruct: High-performance open-source model

Response Headers

StreamFix returns all repair and validation metadata in HTTP headers to maintain 100% OpenAI API compatibility in the response body.

Always Present

x-streamfix-request-id: req_abc123def456
x-streamfix-credits-used: 1
x-streamfix-credits-remaining: 999
x-streamfix-mode: shared
x-streamfix-repair-status: applied
x-streamfix-repairs-applied: 2
x-streamfix-artifact-stored: false

Contract Mode (When Schema Provided)

x-streamfix-contract-mode: active
x-streamfix-schema-valid: true
x-streamfix-schema-errors: 0
x-streamfix-retry-count: 1  # If retry was triggered

Tool Calls (When Arguments Repaired)

x-streamfix-tool-args-repaired: 1  # Non-streaming only

Header Reference

| Header | Values | Description |
|---|---|---|
| x-streamfix-repair-status | applied / none / passthrough | Whether repairs were applied to the response |
| x-streamfix-repairs-applied | 0-N | Number of repairs made (trailing commas, quotes, etc.) |
| x-streamfix-schema-valid | true / false | Schema validation result (Contract Mode only) |
| x-streamfix-artifact-stored | true / false | Whether the repair artifact was saved (requires opt-in) |
| x-streamfix-tool-args-repaired | 0-N | Number of tool call arguments repaired (non-streaming only) |
| x-streamfix-retry-count | 0-1 | Number of retries attempted (Contract Mode only) |
| x-streamfix-client-request-id | string | Echoed correlation ID from the X-Request-Id header |
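
The OpenAI SDK does not expose these headers on the parsed response object, but its with_raw_response helper does. A minimal sketch, reusing the client from the Recommended Setup:

# with_raw_response returns the raw HTTP response; .parse() yields the usual ChatCompletion.
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
completion = raw.parse()
print(raw.headers.get("x-streamfix-request-id"))
print(raw.headers.get("x-streamfix-repair-status"))
print(raw.headers.get("x-streamfix-credits-remaining"))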

Error Codes

| Code | Meaning | Solution |
|---|---|---|
| 401 | Unauthorized | Check your API key in the Authorization header. |
| 402 | Payment Required | Insufficient credits. Top up at /account/purchase. |
| 429 | Rate Limited | You exceeded 60 requests/minute. Slow down. |
| 408 | Request Timeout | Streaming timeout exceeded (300 seconds). |
| 413 | Payload Too Large | Request exceeds the 10MB limit. |
| 502 | Bad Gateway | Upstream provider (e.g. OpenAI) error. Retry. |
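
For 429s and transient upstream failures, a simple exponential backoff wrapper works well. A sketch using the OpenAI SDK's exception classes and the client from the Recommended Setup:

import time
import openai

# Retry on rate limits (429) and upstream 5xx errors with exponential backoff.
def create_with_retry(max_attempts=5, **kwargs):
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(**kwargs)
        except (openai.RateLimitError, openai.InternalServerError):
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, 4s, ...

resp = create_with_retry(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)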

POST /account/create

Create a new account and receive an API key with 1000 free credits.

Request

POST /account/create?email=user@example.com

Response

{
  "api_key": "sk_abc123...",
  "email": "user@example.com",
  "credits": 1000,
  "message": "Account created! Save your API key - it won't be shown again."
}

Note: If the email already exists, the API key is rotated and existing credits are preserved.
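
A sketch with Python's requests (assuming account endpoints live at the service root rather than under /v1):

import requests

# The email goes in the query string; no Authorization header is needed yet.
resp = requests.post(
    "https://streamfix.up.railway.app/account/create",
    params={"email": "user@example.com"},
)
print(resp.json()["api_key"])  # save this; it won't be shown again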

GET /account/balance

Check your remaining credits. Requires authentication.

Request

GET /account/balance
Authorization: Bearer sk_YOUR_API_KEY

Response

{
  "credits_remaining": 997,
  "is_active": true
}
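
For example, with requests (same service-root assumption as above):

import requests

resp = requests.get(
    "https://streamfix.up.railway.app/account/balance",
    headers={"Authorization": "Bearer sk_YOUR_API_KEY"},
)
print(resp.json()["credits_remaining"])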

POST /account/purchase-by-email

Create a Stripe checkout session for purchasing credits. For convenience, this endpoint accepts your account email instead of an API key.

Request

POST /account/purchase-by-email
Content-Type: application/json

{
  "email": "user@example.com"
}

Response

{
  "checkout_url": "https://checkout.stripe.com/...",
  "credits": 10000,
  "price_usd": 10.0
}

$10 = 10,000 credits ($0.001 per credit)
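
A sketch with requests (again assuming the service root):

import requests

# The email identifies the account; the response contains a Stripe Checkout URL.
resp = requests.post(
    "https://streamfix.up.railway.app/account/purchase-by-email",
    json={"email": "user@example.com"},
)
print(resp.json()["checkout_url"])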

BYOK (Bring Your Own Key)

Use your own provider API keys to avoid using the shared pool.

from openai import OpenAI

client = OpenAI(
    base_url="https://streamfix.up.railway.app/v1",
    api_key="sk_YOUR_STREAMFIX_KEY",
    default_headers={
        "X-Provider-Authorization": "Bearer YOUR_OPENROUTER_KEY"
    }
)

Benefits: Use your own OpenRouter account, avoid shared pool rate limits, more control over costs. StreamFix credits are still deducted (1 per request, 3 for Contract Mode), but upstream costs use your account.

Tool Calls Support

StreamFix automatically repairs broken JSON in tool_calls[].function.arguments for non-streaming requests.

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Call test_function"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "test_function",
            "parameters": {
                "type": "object",
                "properties": {"arg": {"type": "string"}}
            }
        }
    }]
)

# Tool call arguments are automatically repaired if broken
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    args = tool_calls[0].function.arguments  # Already repaired JSON

Note: For streaming requests, tool call arguments are passed through as-is to ensure low latency. Use non-streaming for guaranteed tool call repair.

Best Practices

  • Always use JSON Mode: Set response_format: {"type": "json_object"} when possible. StreamFix repairs what breaks, but starting with valid intent helps.
  • Handle 429s: Implement exponential backoff for rate limits (60 requests/minute).
  • Monitor Credits: Check x-streamfix-credits-remaining response header to avoid 402 errors.
  • Use Correlation IDs: Send an X-Request-Id or X-Client-Request-Id header to track requests across your system logs (see the sketch after this list).
  • For Guaranteed Repairs: Use non-streaming mode. Streaming repairs are available post-completion via /result/{request_id}.
  • For Tool Calls: Use non-streaming mode for automatic argument repair. Streaming passes through tool call arguments as-is.
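
A minimal correlation-ID sketch, reusing the client from the Recommended Setup (extra_headers and with_raw_response are standard OpenAI SDK features):

# Attach a correlation ID to the outgoing request and read the echo from the response headers.
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_headers={"X-Request-Id": "my-trace-001"},
)
print(raw.headers.get("x-streamfix-client-request-id"))  # "my-trace-001"
print(raw.headers.get("x-streamfix-credits-remaining"))  # monitor to avoid 402s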