
LLM Returns Python True/False/None Instead of JSON

Many LLMs — especially when prompted with Python examples or trained on Python-heavy data — return True instead of true, False instead of false, and None instead of null. Here's why it happens and four ways to fix it.

Applies to: Llama, Mistral, Qwen, DeepSeek, and occasionally GPT-4o / Claude when prompts contain Python dict examples

The problem

JSON requires lowercase true, false, and null. Python uses True, False, and None. When an LLM generates output that looks like a Python dict rather than valid JSON, json.loads() immediately fails.

❌ What the LLM returns
{"active": True, "count": 3, "error": None}
❌ The error
import json

content = '{"active": True, "count": 3, "error": None}'
json.loads(content)
# JSONDecodeError: Expecting value: line 1 column 12 (char 11)
Why does this happen? LLMs are next-token predictors trained on massive code corpora. Python is the most represented language in training data. When a prompt includes Python dict examples, or the model's internal representation favors Python syntax, it outputs True/False/None instead of the JSON equivalents.

Which models do this?

Model family      | Python literal frequency | Trigger
Llama 3 / 3.1     | High                     | Any JSON prompt without strict format instructions
Mistral / Mixtral | Medium                   | Prompts containing Python dict examples
Qwen 2 / 2.5      | Medium                   | Boolean-heavy schemas, Python few-shot examples
GPT-4o / Claude   | Low                      | Rare; happens when prompt includes Python dict literals

Particularly common when user prompts contain Python dict examples like {"active": True} — the model mirrors the syntax it sees.
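Since the model mirrors the syntax it sees, one cheap mitigation is prompt hygiene: show the model JSON literals, never Python dicts, in your format instructions and few-shot examples. A sketch of such an instruction (the wording here is illustrative, not a tested prompt):

```python
# Illustrative system prompt: demonstrate JSON literals explicitly so the
# model has lowercase true/false/null in context to mirror.
SYSTEM_PROMPT = (
    "Respond with strict JSON only. Use lowercase true, false, and null, "
    'for example {"active": true, "count": 3, "error": null}. '
    "Never use Python-style True, False, or None."
)
```

This reduces the failure rate but doesn't eliminate it, so you still want a parsing fix as a safety net.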

Fix 1: str.replace() — quick and dirty

The simplest approach: replace Python literals with JSON equivalents before parsing.

✅ Fix — naive replacement
import json

def fix_python_literals(text: str) -> str:
    text = text.replace('True', 'true')
    text = text.replace('False', 'false')
    text = text.replace('None', 'null')
    return text

content = '{"active": True, "count": 3, "error": None}'
data = json.loads(fix_python_literals(content))  # ✅ works
⚠️ Gotcha: This replaces inside string values too.
❌ The gotcha in action
content = '{"msg": "True story", "active": True}'
fix_python_literals(content)
# '{"msg": "true story", "active": true}'
#          ^^^^
# "True story" became "true story" — data corruption!
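A slightly better naive variant uses regex word boundaries, which at least stops partial matches inside identifiers — but it is still not string-safe, so quoted values get corrupted just the same:

```python
import re

def fix_python_literals_re(text: str) -> str:
    # \b word boundaries prevent partial matches like TrueBlue -> trueBlue,
    # but literals inside quoted string values are still rewritten.
    text = re.sub(r'\bTrue\b', 'true', text)
    text = re.sub(r'\bFalse\b', 'false', text)
    text = re.sub(r'\bNone\b', 'null', text)
    return text

fix_python_literals_re('{"tag": "TrueBlue", "active": True}')
# '{"tag": "TrueBlue", "active": true}'    <- identifier preserved
fix_python_literals_re('{"msg": "True story", "active": True}')
# '{"msg": "true story", "active": true}'  <- string value still corrupted
```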

Fix 2: String-safe replacement

Walk the string character-by-character, track whether you're inside a quoted string, and only replace outside quotes. It also checks word boundaries, so TrueBlue doesn't become trueBlue.

✅ Fix — string-safe replacement
def fix_python_literals_safe(text: str) -> str:
    result = []
    in_string = False
    escape = False
    i = 0
    replacements = [('True', 'true'), ('False', 'false'), ('None', 'null')]

    while i < len(text):
        if escape:
            result.append(text[i]); escape = False; i += 1; continue
        if text[i] == '\\' and in_string:
            escape = True; result.append(text[i]); i += 1; continue
        if text[i] == '"':
            in_string = not in_string; result.append(text[i]); i += 1; continue

        if not in_string:
            matched = False
            for old, new in replacements:
                if text[i:i+len(old)] == old:
                    before = text[i-1] if i > 0 else ' '
                    after = text[i+len(old)] if i+len(old) < len(text) else ' '
                    if not before.isalnum() and not after.isalnum():
                        result.append(new); i += len(old); matched = True; break
            if matched: continue

        result.append(text[i]); i += 1

    return ''.join(result)

# "True story" stays "True story", values get fixed
content = '{"msg": "True story", "active": True, "error": None}'
fixed = fix_python_literals_safe(content)
# '{"msg": "True story", "active": true, "error": null}'
data = json.loads(fixed)  # ✅
Downside: This is ~30 lines of code you now have to maintain and test, and edge cases remain — nested escaped quotes, single-quoted strings from some models. It works, but it's not trivial.
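If ~30 lines feels heavy, the same string-safe idea can be sketched as a single regex: match complete quoted strings first so they win the alternation, and only rewrite literals that fall outside them. (Like the character-walker above, this assumes double-quoted strings; single-quoted output would need extra handling.)

```python
import re

# A JSON string literal (with escapes), OR a bare Python literal.
# Strings match first, so literals inside them are never rewritten.
_LITERAL_RE = re.compile(r'"(?:\\.|[^"\\])*"|\b(True|False|None)\b')
_REPL = {'True': 'true', 'False': 'false', 'None': 'null'}

def fix_python_literals_regex(text: str) -> str:
    # group(1) is set only for the bare-literal branch of the alternation
    return _LITERAL_RE.sub(
        lambda m: _REPL[m.group(1)] if m.group(1) else m.group(0), text
    )

fix_python_literals_regex('{"msg": "True story", "active": True, "error": None}')
# '{"msg": "True story", "active": true, "error": null}'
```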

Fix 3: ast.literal_eval()

Python's ast.literal_eval() can parse Python literal syntax directly. Since the LLM is outputting what's essentially a Python dict, this sometimes just works.

✅ Fix — ast.literal_eval
import ast

content = '{"active": True, "count": 3, "error": None}'
data = ast.literal_eval(content)  # ✅ Works! Returns a Python dict.
# data == {"active": True, "count": 3, "error": None}
⚠️ Edge case: ast.literal_eval() parses Python literals, not JSON. If the LLM output contains valid JSON tokens that aren't valid Python, it fails.
❌ Where ast.literal_eval fails
import ast

# Mixed: LLM uses "null" (JSON) but "True" (Python) in the same output
content = '{"active": True, "error": null}'
ast.literal_eval(content)
# ValueError: malformed node or string
# "null" is not a Python literal — Python uses "None"

# Also fails on JSON-style booleans mixed with Python booleans
content = '{"a": true, "b": False}'
ast.literal_eval(content)
# ValueError: malformed node or string
# ("true" is a bare name, not a Python literal)
Verdict: ast.literal_eval() is safe (it only evaluates literals, not arbitrary code), but it only works when the output is entirely Python syntax. If the LLM mixes Python and JSON tokens — which happens often — it fails. Use as a fallback, not a primary strategy.
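That "fallback, not primary" advice can be sketched as a two-step parser (parse_llm_json is a hypothetical helper name, not a library function):

```python
import ast
import json

def parse_llm_json(text: str):
    """Try strict JSON first; fall back to Python-literal parsing."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        # Only succeeds when the whole payload is valid Python literal
        # syntax; mixed JSON/Python tokens still raise ValueError.
        return ast.literal_eval(text)

parse_llm_json('{"active": true}')                 # strict JSON path
parse_llm_json('{"active": True, "error": None}')  # Python-literal path
```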

Fix 4: Proxy fix (StreamFix)

StreamFix sits between your code and the LLM provider. Python literals arrive already converted to valid JSON — string-safe, so values inside quotes are never touched.

✅ Fix — one line change
from openai import OpenAI
import json

client = OpenAI(
    api_key="sk_YOUR_STREAMFIX_KEY",
    base_url="https://streamfix.dev/v1",
)

resp = client.chat.completions.create(
    model="meta-llama/llama-3.1-70b-instruct",
    messages=[{"role": "user", "content": "Return JSON: {active, error}"}],
)

# {"active": True, "error": None} → {"active": true, "error": null}
# String-safe: "True story" stays "True story"
data = json.loads(resp.choices[0].message.content)  # ✅ always valid JSON

Python literals, trailing commas, markdown fences — all fixed automatically

StreamFix repairs True/False/None, trailing commas, markdown fences, and 10+ other JSON failure modes in real-time — including during streaming. One base_url change, zero parsing code.

from openai import OpenAI

client = OpenAI(
    api_key="sk_YOUR_STREAMFIX_KEY",
    base_url="https://streamfix.dev/v1",
)

# All JSON issues repaired automatically:
# True/False/None → true/false/null (string-safe)
# ```json ... ``` → unwrapped
# {key: "val",} → trailing comma removed
resp = client.chat.completions.create(
    model="your-model",
    messages=[{"role": "user", "content": "Return JSON"}],
)
data = json.loads(resp.choices[0].message.content)  # ✅ always works
