
I built a textile pattern generation API because PatternedAI has no API

2026-04-30 · PixelAPI engineering

There's a real category gap in the AI-pattern space.

PatternedAI has 600K users. Spoonflower's design tools are everywhere. Both are excellent GUIs for textile designers. Neither has a public REST API. So if you're a print-on-demand shop, a Shopify store auto-generating colorways, or an indie game studio that needs seamless fabric textures — you're stuck either copy-pasting through a web UI or paying enterprise rates for a custom integration.

I shipped PixelAPI's /v1/pattern endpoint yesterday — 8 styles, 512px or 1024px output, recolor + upscale ops, all seamlessly tileable. At $0.008/pattern, it's 2-5× cheaper than PatternedAI's GUI sessions.

This isn't a "Show HN, please clap." This is the story of what almost shipped at 2/10 quality, why I caught it before customers did, and the open-source-only tooling that got us to 8.4/10 average.

What's in the box

# 1. Generate
curl -X POST https://api.pixelapi.dev/v1/pattern/generate \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "style": "ikat",
    "resolution": "512",
    "prompt": "indigo and cream traditional ikat textile"
  }'
# → {"generation_id":"...","credits_used":8,"poll_url":"/v1/pattern/{id}"}

# 2. Poll
curl https://api.pixelapi.dev/v1/pattern/{id} \
  -H "Authorization: Bearer YOUR_KEY"
# → {"status":"completed","output_url":"https://api.pixelapi.dev/outputs/.../1495e592...png"}

# 3. (Optional) recolor a copy
curl -X POST https://api.pixelapi.dev/v1/pattern/recolor \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"source_url":"...","hue_shift":180}'
# → 2 credits, hue rotation in HSV

# 4. (Optional) upscale to 2048px print-ready
curl -X POST https://api.pixelapi.dev/v1/pattern/upscale \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"source_url":"..."}'
# → 3 credits, Lanczos
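
The same flow in Python, for anyone integrating server-side. A minimal sketch: it assumes poll_url is relative to the API root, a "failed" terminal status, and a 2-second poll interval, none of which the responses above spell out.

import time
import requests

API = "https://api.pixelapi.dev"
HEADERS = {"Authorization": "Bearer YOUR_KEY"}

def generate(style, prompt, resolution="512", timeout=120):
    """Submit a pattern job, then poll until it completes or fails."""
    job = requests.post(
        f"{API}/v1/pattern/generate",
        headers=HEADERS,
        json={"style": style, "resolution": resolution, "prompt": prompt},
    ).json()
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(API + job["poll_url"], headers=HEADERS).json()
        if status["status"] == "completed":
            return status["output_url"]
        if status["status"] == "failed":  # assumed terminal failure status
            raise RuntimeError("generation failed (credits auto-refunded)")
        time.sleep(2)
    raise TimeoutError("job did not finish in time")

print(generate("ikat", "indigo and cream traditional ikat textile"))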

The 8 styles + the model behind each

Style | Model | Why
Floral | PatternDiffusion | SD2 fine-tuned on 6.8M tileable patterns; ditsy-print sweet spot
Geometric | PatternDiffusion | Tessellation + grid prompts
Ikat | PatternDiffusion | Traditional Indian woven patterns
Paisley | PatternDiffusion | Boteh motif training data
Tribal | PatternDiffusion | Bold symmetrical Aztec-style
Animal-print | PatternDiffusion | Leopard/zebra texture repeat
Abstract | SDXL-seamless | Free-form abstract benefits from SDXL's broader training
Stripes | PIL algorithm | See below — this one almost destroyed me
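
Routing is just a lookup. A hypothetical dispatch sketch (identifiers are illustrative, not the actual worker code):

# Hypothetical dispatch mirroring the table above; names are illustrative.
PATTERNDIFFUSION_STYLES = {
    "floral", "geometric", "ikat", "paisley", "tribal", "animal-print",
}

def pick_backend(style: str) -> str:
    if style == "stripes":
        return "pil"               # algorithmic path, zero GPU
    if style == "abstract":
        return "sdxl-seamless"     # broader training helps free-form abstract
    if style in PATTERNDIFFUSION_STYLES:
        return "patterndiffusion"  # SD2 fine-tune on tileable patterns
    raise ValueError(f"unknown style: {style}")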

Why "stripes" needed an algorithm and not an AI

Both PatternDiffusion and SDXL-seamless failed at clean parallel stripes during my QC audit. PatternDiffusion produced rainbow plaid noise. SDXL-seamless produced "shirt motifs" because it saw "shirt" in the prompt. Neither model was trained on enough plain-stripe samples to handle a request as simple as "navy blue and white classic shirt vertical stripes."

Spending 4 hours on prompt-engineering iterations for something Pillow does in 10 lines made no sense. So:

# /home/om/pixelapi-worker-code/models/pattern_model.py
from PIL import Image, ImageDraw

# Abridged color lookup; the production table maps many more color names.
color_table = {
    "navy": (10, 30, 90), "white": (255, 255, 255), "red": (200, 30, 40),
    "cream": (250, 244, 227), "indigo": (40, 40, 120), "black": (20, 20, 20),
}

def synthesize_stripes(width=512, height=512, prompt=""):
    """Algorithmic stripe / plaid / gingham. Zero VRAM, deterministic."""
    p = prompt.lower()
    is_horizontal = "horizontal" in p
    is_plaid = any(k in p for k in ("plaid", "tartan", "gingham", "check"))
    stripe_w = 6 if "thin" in p else 36 if "thick" in p else 18

    colors = [color_table[k] for k in color_table if k in p]
    if len(colors) < 2:  # need both a background and a stripe color
        colors = [(10, 30, 90), (255, 255, 255)]  # navy/white default

    img = Image.new("RGB", (width, height), colors[0])
    draw = ImageDraw.Draw(img)

    def stripes(vertical):
        # Paint stripes of colors[1] over the colors[0] background
        span = width if vertical else height
        for o in range(0, span, stripe_w * 2):
            box = [o, 0, o + stripe_w, height] if vertical else [0, o, width, o + stripe_w]
            draw.rectangle(box, fill=colors[1])

    stripes(vertical=not is_horizontal)
    if is_plaid:
        stripes(vertical=is_horizontal)  # perpendicular overlay turns stripes into a check
    return img

Result: 10/10 quality, 10ms generation, zero GPU usage. A user asks for "navy blue and white classic shirt vertical stripes" and gets exactly that, every single time.

The lesson — and one I want to underline because it's the part nobody talks about: AI is overkill for half of what people use AI for. Pattern recognition is not the same as pattern generation. SDXL is a multi-billion-parameter model burning electricity to render something a handful of rectangle fills produces in microseconds.

The QC ladder that caught the silent failures

Generating bad output is one thing. Charging customers for it is the bigger sin.

The endpoint sits behind a structural-QC gate that runs on every output:

  1. Pass-through detection: if the output is pixel-equivalent to the input, reject. (Caught a real case where a remove-text job returned the input unchanged and was billed as "completed.")
  2. Scene-destruction detection: if more than 35% of pixels changed for an edit operation, reject (catches FireRed-style hallucinations where the model replaces the subject entirely).
  3. VLM verification: a Qwen2.5-VL-7B QA pass that compares input + output + prompt and emits a good/bad/unsure verdict.

When any gate fails, the job goes through an iteration ladder (up to 5 attempts with different prompt strategies / fallback models). After exhaustion: automatic refund of credits, no email asking the customer to fight for it.
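
Gates 1 and 2 are cheap array math. A minimal sketch; the per-channel tolerance of 8 stands in for the real "pixel-equivalent" threshold, and gate 3's VLM call only runs after these pass:

import numpy as np
from PIL import Image

def structural_qc(input_path, output_path, is_edit_op):
    """Gates 1 and 2: reject pass-throughs and scene destruction."""
    a = np.asarray(Image.open(input_path).convert("RGB"), dtype=np.int16)
    b = np.asarray(Image.open(output_path).convert("RGB"), dtype=np.int16)
    if a.shape != b.shape:
        return "pass"  # resolution changed; these two gates don't apply
    per_pixel = np.abs(a - b).max(axis=-1)
    changed = float((per_pixel > 8).mean())  # fraction of meaningfully changed pixels
    if changed == 0.0:
        return "reject: pass-through"       # gate 1: output identical to input
    if is_edit_op and changed > 0.35:
        return "reject: scene destruction"  # gate 2: edit rewrote the whole image
    return "pass"  # gate 3 (Qwen2.5-VL verdict) runs next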

Pricing reality check

Operation | Credits | USD | PatternedAI / competitor
512px generate | 8 | $0.008 | $0.015–0.045 (GUI only)
1024px print-ready | 15 | $0.015 | $0.030–0.090 (GUI only)
Recolor (HSV hue shift) | 2 | $0.002 | N/A — no API competitor
Upscale to 2048px | 3 | $0.003 | N/A — no API competitor

No per-seat licensing, no monthly minimum, no 30-day deprecation cycles. Pay-per-use, period.
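
Recolor and upscale stay that cheap because they're classical image ops, not model inference. Roughly, as a sketch rather than the exact worker code:

from PIL import Image

def recolor(img, hue_shift_deg):
    """Hue rotation in HSV. PIL stores hue as 0-255, so rescale degrees."""
    h, s, v = img.convert("HSV").split()
    shift = int(hue_shift_deg * 255 / 360)
    h = h.point(lambda x: (x + shift) % 256)
    return Image.merge("HSV", (h, s, v)).convert("RGB")

def upscale(img, target=2048):
    """Lanczos resample up to print resolution."""
    return img.resize((target, target), Image.LANCZOS)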

What's NOT 10/10 yet (honest list)

Try it

100 free credits on signup at pixelapi.dev. That's 12 generations, or 50 recolors, or 33 upscales — enough to validate the API for your use case.
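
A quick way to spend one of those credits validating the seamless-tiling claim: lay the output out in a grid and look for seams.

from PIL import Image

def tile_preview(path, n=3):
    """Tile the pattern n x n; any seam shows up at the tile borders."""
    tile = Image.open(path)
    w, h = tile.size
    sheet = Image.new("RGB", (w * n, h * n))
    for i in range(n):
        for j in range(n):
            sheet.paste(tile, (i * w, j * h))
    return sheet

tile_preview("pattern.png").save("tiled_3x3.png")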

If you find a generation under 8/10, hit reply on the auto-email. The QC ladder will already have refunded you, and the failure case becomes the next prompt-engineering iteration here.