Integration Recipes

Add StreamFix to your stack in 2 minutes. Swap in the StreamFix base_url and API key; everything else stays the same.

All examples use non-streaming for guaranteed repairs. Add stream=True for real-time token-by-token repair.

🐍 Python OpenAI SDK

Works with openai>=1.0

Before (breaks on fenced JSON)

from openai import OpenAI
import json

client = OpenAI(api_key="sk-or-...",
                base_url="https://openrouter.ai/api/v1")

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Return JSON: {name, email}"}]
)

data = json.loads(resp.choices[0].message.content)
# 💥 JSONDecodeError — model returned ```json\n{...}\n```

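For context, this is the kind of cleanup the proxy performs so your client code doesn't have to. An illustrative, offline sketch of the fence_strip repair (not StreamFix's actual implementation):

```python
import json
import re

# The failure mode from the "Before" example: the model wrapped its
# JSON in a markdown code fence.
fenced = '```json\n{"name": "Ada", "email": "ada@example.com"}\n```'

# Strip a leading ```json (or bare ```) fence and the trailing ```.
stripped = re.sub(r"^```(?:json)?\s*|\s*```$", "", fenced.strip())

data = json.loads(stripped)
print(data["name"])  # → Ada
```

With StreamFix in front of the provider, this cleanup happens server-side and `resp.choices[0].message.content` arrives already stripped.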
After (2-line change)

from openai import OpenAI
import json

client = OpenAI(
    api_key="sk_YOUR_STREAMFIX_KEY",  # ← your StreamFix key
    base_url="https://streamfix.up.railway.app/v1",   # ← that's it
)

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Return JSON: {name, email}"}]
)

data = json.loads(resp.choices[0].message.content)  # ✅ Always valid

# To read repair headers, use with_raw_response:
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Return JSON: {name, email}"}]
)
print(raw.headers.get("X-StreamFix-Applied"))  # "fence_strip"

Streaming: add stream=True and StreamFix repairs JSON token-by-token in the SSE stream, so your client receives valid tokens in real time.
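The client-side pattern for streaming can be exercised offline. In this minimal sketch the delta strings are simulated; with stream=True you would iterate the response object and read chunk.choices[0].delta.content instead:

```python
import json

# Simulated delta strings, standing in for chunk.choices[0].delta.content
# after StreamFix has already stripped the fence upstream.
deltas = ['{"na', 'me": "Ada", ', '"email": ', '"ada@example.com"}']

parts = []
for delta in deltas:
    if delta:  # delta content is None on role/stop chunks
        parts.append(delta)

data = json.loads("".join(parts))  # parses cleanly once the stream ends
print(data["email"])  # → ada@example.com
```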

Vercel AI SDK

Works with @ai-sdk/openai

Setup (TypeScript)

import { createOpenAI } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

const openai = createOpenAI({
  baseURL: 'https://streamfix.up.railway.app/v1',  // ← StreamFix
  apiKey: 'sk_YOUR_STREAMFIX_KEY',           // ← that's it
});

const { object } = await generateObject({
  model: openai('openai/gpt-4o-mini'),
  schema: z.object({
    name: z.string(),
    email: z.string().email(),
  }),
  prompt: 'Extract: John Smith, john@example.com',
});

console.log(object);  // ✅ { name: "John Smith", email: "john@example.com" }

Streaming (Next.js)

import { streamText } from 'ai';

const result = streamText({
  model: openai('openai/gpt-4o-mini'),  // same client from above
  prompt: 'List 5 products as JSON array',
});

// StreamFix repairs each SSE chunk — no parse errors mid-stream
return result.toDataStreamResponse();

Why this works: Vercel AI SDK uses the OpenAI client under the hood. Changing baseURL routes all traffic through StreamFix transparently.
🦜 LangChain

Works with langchain-openai>=0.1

Setup (Python)

from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import JsonOutputParser

llm = ChatOpenAI(
    model="openai/gpt-4o-mini",
    api_key="sk_YOUR_STREAMFIX_KEY",
    base_url="https://streamfix.up.railway.app/v1",  # ← that's it
)

parser = JsonOutputParser()
chain = llm | parser

result = chain.invoke("Extract {name, email} from: Jane Doe, jane@co.com")
print(result)  # ✅ {"name": "Jane Doe", "email": "jane@co.com"}

With Pydantic Schema

from pydantic import BaseModel

class Contact(BaseModel):
    name: str
    email: str

# StreamFix strips fences/prose before LangChain's parser sees it
# No more "OutputParserException: Failed to parse" errors
parser = JsonOutputParser(pydantic_object=Contact)
chain = llm | parser

result = chain.invoke("Extract contact: John, john@test.com")
print(result)  # ✅ {"name": "John", "email": "john@test.com"}
# (JsonOutputParser returns a dict; use PydanticOutputParser for a Contact instance)

No more OutputParserException: StreamFix strips markdown fences and fixes syntax before LangChain's parser sees the output. You can remove retry logic.
🤖 CrewAI

Works with crewai>=0.28

from crewai import Agent, Task, Crew, LLM

# Point CrewAI's LLM at StreamFix — tool-call arguments are repaired automatically
llm = LLM(
    model="openai/gpt-4o-mini",
    base_url="https://streamfix.up.railway.app/v1",
    api_key="sk_YOUR_STREAMFIX_KEY"
)

researcher = Agent(
    role="Research Analyst",
    goal="Extract structured data from sources",
    backstory="You are a precise data extraction specialist.",
    llm=llm  # <-- swap in StreamFix here
)

task = Task(
    description="Summarise the latest AI news as JSON",
    expected_output="A JSON object with keys: headline, summary, source",
    agent=researcher
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)

Why this matters for agents: when a CrewAI agent calls a tool, the LLM returns tool-call arguments as JSON. OpenAI's own docs warn these are not guaranteed to be valid. StreamFix repairs malformed arguments before CrewAI's tool parser sees them, so your agent loop never breaks mid-run on a JSON parse error.
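To see the failure mode concretely, here is an offline sketch. The argument strings are hypothetical, not real model output: the first is the kind of trailing-comma JSON a model can emit for tool arguments, the second is what a framework's tool parser actually needs.

```python
import json

# Hypothetical tool-call arguments as a model might emit them:
# the trailing comma makes this invalid JSON.
raw_args = '{"query": "latest AI news", "limit": 5,}'

try:
    json.loads(raw_args)
except json.JSONDecodeError as err:
    print(f"unrepaired arguments break the tool parser: {err}")

# What a remove_trailing_comma repair would forward instead.
repaired_args = '{"query": "latest AI news", "limit": 5}'
args = json.loads(repaired_args)
print(args["limit"])  # → 5
```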
🔁 n8n

Works with the OpenAI node in any n8n version

n8n's OpenAI node accepts a custom base URL in its credentials. Point it at StreamFix — no workflow changes needed.

# In n8n: Credentials → OpenAI API → "Base URL" field

Base URL:  https://streamfix.up.railway.app/v1
API Key:   sk_YOUR_STREAMFIX_KEY

# Your upstream API key goes in your StreamFix dashboard under Project → Credentials
# StreamFix forwards requests to OpenRouter / OpenAI using your stored key
1️⃣ Create StreamFix project

Add your OpenAI or OpenRouter key as a credential in the StreamFix dashboard.

2️⃣ Update n8n credentials

Set Base URL to https://streamfix.up.railway.app/v1 and API Key to your StreamFix key.

3️⃣ Run existing workflows

All JSON responses are cleaned automatically. No node changes, no new logic.

Common n8n pain point this fixes: The "JSON Body" parse error that breaks automation workflows when an LLM adds markdown fences or a <think> block around its response. StreamFix strips these before n8n's JSON parser runs.

Reading Provenance Headers

Every response includes headers telling you exactly what was repaired. Use these for monitoring, alerts, and debugging.

# Python — reading headers from raw httpx response
import httpx

resp = httpx.post(
    "https://streamfix.up.railway.app/v1/chat/completions",
    headers={"Authorization": "Bearer sk_YOUR_KEY"},
    json={"model": "openai/gpt-4o-mini", "messages": [...]}
)

print(resp.headers["X-StreamFix-Applied"])          # "fence_strip,remove_trailing_comma"
print(resp.headers["X-StreamFix-Repairs-Applied"])  # "2"
print(resp.headers["X-StreamFix-Status"])           # "repaired" | "pass" | "failed"
print(resp.headers["X-StreamFix-Request-Id"])       # "req_abc123"

Header                        Example                              Use case
X-StreamFix-Applied           fence_strip,remove_trailing_comma    Alert on specific repair types
X-StreamFix-Status            repaired                             Track repair rate vs pass-through rate
X-StreamFix-Request-Id        req_abc123                           Correlate with logs, fetch artifacts
X-StreamFix-Repairs-Applied   2                                    Count repairs per request
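
A small helper can turn these headers into a monitoring signal. A minimal sketch; the helper name and alert rule are illustrative, not part of StreamFix:

```python
def summarize_repairs(headers: dict) -> dict:
    """Classify one response using the provenance headers."""
    status = headers.get("X-StreamFix-Status", "pass")
    applied = headers.get("X-StreamFix-Applied", "")
    return {
        "status": status,
        "repairs": [r for r in applied.split(",") if r],
        "alert": status == "failed",  # page someone only on hard failures
    }

# Works with any HTTP client: pass resp.headers from httpx, requests,
# or the OpenAI SDK's with_raw_response.
summary = summarize_repairs({
    "X-StreamFix-Status": "repaired",
    "X-StreamFix-Applied": "fence_strip,remove_trailing_comma",
})
print(summary["repairs"])  # → ['fence_strip', 'remove_trailing_comma']
```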

Ready to integrate?

Get a free API key with 1,000 credits. No credit card required.

Get Free Key →