Add StreamFix to your stack in 2 minutes. One base_url change — everything else stays the same.
All examples below use non-streaming requests, where repairs are always applied to the complete response. Add stream=True for real-time token-by-token repair.
Works with openai>=1.0
```python
from openai import OpenAI
import json

client = OpenAI(
    api_key="sk-or-...",
    base_url="https://openrouter.ai/api/v1",
)

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Return JSON: {name, email}"}],
)

data = json.loads(resp.choices[0].message.content)
# 💥 JSONDecodeError — model returned ```json\n{...}\n```
```
```python
from openai import OpenAI
import json

client = OpenAI(
    api_key="sk_YOUR_STREAMFIX_KEY",                 # ← your StreamFix key
    base_url="https://streamfix.up.railway.app/v1",  # ← that's it
)

resp = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Return JSON: {name, email}"}],
)

data = json.loads(resp.choices[0].message.content)  # ✅ Always valid

# To read repair headers, use with_raw_response:
raw = client.chat.completions.with_raw_response.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Return JSON: {name, email}"}],
)
print(raw.headers.get("X-StreamFix-Applied"))  # "fence_strip"
```
stream=True — StreamFix repairs JSON token-by-token in the SSE stream. Your client receives valid tokens in real-time.
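As a sketch of how you might consume such a stream with the openai Python client — the `collect_stream_json` helper is ours, not part of StreamFix or the SDK:

```python
import json

def collect_stream_json(stream):
    """Accumulate streamed delta tokens and parse the final JSON.

    Works with any openai>=1.0 chat-completions stream; because StreamFix
    repairs tokens in-stream, the joined text parses on the first try.
    """
    parts = []
    for chunk in stream:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
    return json.loads("".join(parts))

# Usage, with a client configured with the StreamFix base_url as above:
# stream = client.chat.completions.create(
#     model="openai/gpt-4o-mini",
#     messages=[{"role": "user", "content": "Return JSON: {name, email}"}],
#     stream=True,
# )
# data = collect_stream_json(stream)
```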
Works with @ai-sdk/openai
```typescript
import { createOpenAI } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

const openai = createOpenAI({
  baseURL: 'https://streamfix.up.railway.app/v1', // ← StreamFix
  apiKey: 'sk_YOUR_STREAMFIX_KEY',                // ← that's it
});

const { object } = await generateObject({
  model: openai('openai/gpt-4o-mini'),
  schema: z.object({
    name: z.string(),
    email: z.string().email(),
  }),
  prompt: 'Extract: John Smith, john@example.com',
});

console.log(object); // ✅ { name: "John Smith", email: "john@example.com" }
```
```typescript
import { streamText } from 'ai';

const result = streamText({
  model: openai('openai/gpt-4o-mini'), // same client from above
  prompt: 'List 5 products as JSON array',
});

// StreamFix repairs each SSE chunk — no parse errors mid-stream
return result.toDataStreamResponse();
```
baseURL routes all traffic through StreamFix transparently.
Works with langchain-openai>=0.1
```python
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import JsonOutputParser

llm = ChatOpenAI(
    model="openai/gpt-4o-mini",
    api_key="sk_YOUR_STREAMFIX_KEY",
    base_url="https://streamfix.up.railway.app/v1",  # ← that's it
)

parser = JsonOutputParser()
chain = llm | parser

result = chain.invoke("Extract {name, email} from: Jane Doe, jane@co.com")
print(result)  # ✅ {"name": "Jane Doe", "email": "jane@co.com"}
```
```python
from pydantic import BaseModel

# llm and JsonOutputParser carry over from the previous example

class Contact(BaseModel):
    name: str
    email: str

# StreamFix strips fences/prose before LangChain's parser sees it
# No more "OutputParserException: Failed to parse" errors
parser = JsonOutputParser(pydantic_object=Contact)
chain = llm | parser

result = chain.invoke("Extract contact: John, john@test.com")
print(result)  # ✅ {'name': 'John', 'email': 'john@test.com'}
```
No more OutputParserException: StreamFix strips markdown fences and fixes syntax before LangChain's parser sees the output, so you can remove your retry logic.
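The kind of hand-rolled retry wrapper this makes redundant looks something like the sketch below (a hypothetical illustration, not StreamFix or LangChain code):

```python
import json

# Typical pre-StreamFix defensive wrapper (hypothetical illustration):
def parse_with_retry(call_llm, attempts=3):
    last_err = None
    for _ in range(attempts):
        try:
            return json.loads(call_llm())
        except json.JSONDecodeError as err:
            last_err = err  # malformed output, ask the model again
    raise last_err

# Behind StreamFix the first response already parses, so it collapses to:
def parse_once(call_llm):
    return json.loads(call_llm())
```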
Works with crewai>=0.28
```python
from crewai import Agent, Task, Crew, LLM

# Point CrewAI's LLM at StreamFix — tool-call arguments are repaired automatically
llm = LLM(
    model="openai/gpt-4o-mini",
    base_url="https://streamfix.up.railway.app/v1",
    api_key="sk_YOUR_STREAMFIX_KEY",
)

researcher = Agent(
    role="Research Analyst",
    goal="Extract structured data from sources",
    backstory="You are a precise data extraction specialist.",
    llm=llm,  # ← swap in StreamFix here
)

task = Task(
    description="Summarise the latest AI news as JSON",
    expected_output="A JSON object with keys: headline, summary, source",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
print(result)
```
Works with the OpenAI node in any n8n version
n8n's OpenAI node accepts a custom base URL in its credentials. Point it at StreamFix — no workflow changes needed.
```
# In n8n: Credentials → OpenAI API → "Base URL" field
Base URL: https://streamfix.up.railway.app/v1
API Key:  sk_YOUR_STREAMFIX_KEY

# Your upstream API key goes in your StreamFix dashboard under Project → Credentials
# StreamFix forwards requests to OpenRouter / OpenAI using your stored key
```
Create StreamFix project
Add your OpenAI or OpenRouter key as a credential in the StreamFix dashboard.
Update n8n credentials
Set Base URL to https://streamfix.up.railway.app/v1 and API Key to your StreamFix key.
Run existing workflows
All JSON responses are cleaned automatically. No node changes, no new logic.
Some reasoning models wrap a <think> block around their response. StreamFix strips these before n8n's JSON parser runs.
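As a toy illustration of this kind of cleanup — this is not StreamFix's actual implementation, just a sketch of the transformation it performs:

```python
import re

# Toy sketch of think-block and fence stripping (NOT StreamFix's real code):
def strip_wrappers(text: str) -> str:
    # Drop any <think>...</think> reasoning block
    text = re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL)
    # If the JSON is wrapped in a markdown fence, keep only the body
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, flags=re.DOTALL)
    return (match.group(1) if match else text).strip()
```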
Every response includes headers telling you exactly what was repaired. Use these for monitoring, alerts, and debugging.
```python
# Python — reading headers from raw httpx response
import httpx

resp = httpx.post(
    "https://streamfix.up.railway.app/v1/chat/completions",
    headers={"Authorization": "Bearer sk_YOUR_KEY"},
    json={"model": "openai/gpt-4o-mini", "messages": [...]},
)

print(resp.headers["X-StreamFix-Applied"])          # "fence_strip,remove_trailing_comma"
print(resp.headers["X-StreamFix-Repairs-Applied"])  # "2"
print(resp.headers["X-StreamFix-Status"])           # "repaired" | "pass" | "failed"
print(resp.headers["X-StreamFix-Request-Id"])       # "req_abc123"
```
| Header | Example | Use Case |
|---|---|---|
| X-StreamFix-Applied | fence_strip,remove_trailing_comma | Alert on specific repair types |
| X-StreamFix-Status | repaired | Track repair rate vs pass-through rate |
| X-StreamFix-Request-Id | req_abc123 | Correlate with logs, fetch artifacts |
| X-StreamFix-Repairs-Applied | 2 | Count repairs per request |
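To turn these headers into monitoring metrics, a small helper like the following works with any mapping-style headers object (the `repair_metrics` name is ours, not part of StreamFix):

```python
def repair_metrics(headers):
    """Summarize the StreamFix headers documented above for monitoring.

    `headers` can be any mapping, e.g. an httpx resp.headers object.
    """
    applied = headers.get("X-StreamFix-Applied", "")
    repairs = [r for r in applied.split(",") if r]
    return {
        "status": headers.get("X-StreamFix-Status", "pass"),
        "repairs": repairs,
        "count": len(repairs),
        "request_id": headers.get("X-StreamFix-Request-Id"),
    }

# e.g. alert when a specific repair type shows up:
# if "fence_strip" in repair_metrics(resp.headers)["repairs"]:
#     notify("model is wrapping JSON in markdown fences again")
```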
Get a free API key with 1,000 credits. No credit card required.
Get Free Key →