Detect NSFW, violent, or unsafe content in images via REST API at $0.0005 per image, half the price of AWS Rekognition.
Moderate user-uploaded images on social platforms, marketplaces, dating apps, and forums. Catch unsafe content before it reaches your users. Typical integration: a single POST request, a moderation result back in seconds. No GPU infrastructure to maintain, no cold-start delays.
```shell
curl -X POST https://api.pixelapi.dev/v1/image/moderate \
  -H "X-API-Key: $PIXELAPI_KEY" \
  -F "[email protected]"
```
| Volume per month | Plan | Cost |
|---|---|---|
| Under 100 calls | Free tier | $0 |
| ~10,000 calls | Starter | $10/mo |
| ~60,000 calls | Pro | $50/mo |
| ~300,000 calls | Scale | $200/mo |
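The tier quotas above can be turned into a simple plan picker. This is a sketch that reads the table's "~" volumes as included-call upper bounds, which is an assumption; confirm actual quotas and overage pricing with the plan details.

```python
# Sketch: pick the cheapest listed plan that covers an expected monthly volume.
# Quotas are taken from the pricing table; treating "~" volumes as hard
# upper bounds is an assumption.
PLANS = [  # (plan name, included calls per month, monthly price in USD)
    ("Free tier", 100, 0),
    ("Starter", 10_000, 10),
    ("Pro", 60_000, 50),
    ("Scale", 300_000, 200),
]

def pick_plan(calls_per_month: int) -> tuple[str, int]:
    """Return (name, monthly price) of the first plan whose quota covers the volume."""
    for name, quota, price in PLANS:
        if calls_per_month <= quota:
            return name, price
    return "Scale", 200  # beyond listed volumes: start from Scale and ask sales

print(pick_plan(25_000))  # 25k calls exceeds Starter's 10k, fits Pro's 60k
```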
Yes. Multi-class detection covers nudity, violence, weapons, drugs, and hate symbols, with a confidence score per class. Sub-1s latency supports real-time moderation. We process tens of thousands of moderation calls per day for paying customers.
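Per-class confidence scores are typically mapped to an action with thresholds. Below is a minimal sketch; the response shape (a flat dict of class names to scores) and the threshold values are assumptions, not the API's documented schema.

```python
# Sketch: turn per-class confidence scores into a moderation action.
# The payload shape and thresholds below are assumptions -- check the
# API reference for the real response schema.

BLOCK_THRESHOLD = 0.85   # hypothetical: auto-reject above this score
REVIEW_THRESHOLD = 0.50  # hypothetical: queue for human review above this

def decide(scores: dict[str, float]) -> str:
    """Map per-class confidences to 'block', 'review', or 'allow'."""
    top = max(scores.values(), default=0.0)
    if top >= BLOCK_THRESHOLD:
        return "block"
    if top >= REVIEW_THRESHOLD:
        return "review"
    return "allow"

# Example payload (hypothetical shape):
result = {"nudity": 0.02, "violence": 0.91, "weapons": 0.10,
          "drugs": 0.01, "hate_symbols": 0.03}
print(decide(result))  # violence at 0.91 clears the block threshold
```

A two-threshold scheme like this is a common pattern: auto-block only high-confidence hits and route the gray zone to human review.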
Free: 10 requests/min, Starter: 60/min, Pro: 300/min, Scale: unlimited. Higher limits available on request.
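Clients that approach these limits should back off and retry rather than fail. The sketch below assumes the API returns HTTP 429 when a limit is hit (a common convention, not confirmed by this page) and retries with exponential backoff.

```python
# Sketch: exponential backoff for staying under per-minute rate limits.
# Assumes the API signals throttling with HTTP 429; that status code
# is an assumption, not documented here.
import time

def backoff_delays(base: float = 1.0, cap: float = 30.0, retries: int = 5):
    """Yield exponentially growing delays: base, 2*base, 4*base, ... capped at `cap`."""
    for attempt in range(retries):
        yield min(base * (2 ** attempt), cap)

def call_with_retry(send, retries: int = 5, base: float = 1.0):
    """Call `send()` (returns (status, body)); sleep and retry on 429."""
    for delay in backoff_delays(base=base, retries=retries):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(delay)
    return send()  # final attempt; caller handles a persistent 429
```

`send` is a hypothetical callable wrapping the actual POST, so the retry logic stays testable without network access.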
| Provider | Price per image |
|---|---|
| AWS Rekognition Content Moderation | $0.001 |
| Sightengine | $0.001-0.003 |
| Hive Moderation | $0.0006-0.002 |
| PixelAPI | $0.0005 |

At $0.0005 per image, PixelAPI has the lowest per-image price of the options listed.