Open API

LLM Observability for Developers

One URL change. Full visibility into every LLM call. Costs, latency, tokens, errors - all queryable via REST API.

100 free credits | No credit card | Self-serve

main.py
# Before - no visibility
from openai import OpenAI
client = OpenAI()

# After - full observability
from openai import OpenAI
client = OpenAI(
    base_url="https://api.demeterics.com/openai/v1",
    api_key="dmt_your_key"
)

# Tag your workflow: prefix the user message with a /// APP <name> line
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "/// APP support-bot\n..."
    }]
)

What You Get

Cost per Request

Track spend by workflow, model, and user. Export to CSV (see the example just below this section).

Latency Percentiles

Average, P90, P99. Find slow prompts before users complain.

Full Request Logs

Every prompt, response, and error. Searchable and filterable.

REST API

Query your data programmatically. Build custom dashboards.
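
The "Cost per Request" and "REST API" features above combine naturally: pull interactions from the documented GET /api/v1/interactions endpoint, sum spend per app tag, and write a CSV. A minimal sketch, assuming Bearer auth with your dmt_ key and that a single request returns enough rows (check the API reference for the exact auth scheme and pagination):

spend_by_app.py
# Sketch: per-app spend report built on GET /api/v1/interactions.
# Assumptions: Bearer auth with the dmt_ key; no pagination handling.
import csv
from collections import defaultdict

import requests

API_BASE = "https://api.demeterics.com"
API_KEY = "dmt_your_key"

resp = requests.get(
    f"{API_BASE}/api/v1/interactions",
    headers={"Authorization": f"Bearer {API_KEY}"},  # auth scheme is an assumption
    timeout=30,
)
resp.raise_for_status()

# Each interaction carries app, model, tokens_in, tokens_out, cost_usd, latency_ms.
spend_by_app = defaultdict(float)
for interaction in resp.json()["interactions"]:
    spend_by_app[interaction["app"]] += interaction["cost_usd"]

with open("spend_by_app.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["app", "total_cost_usd"])
    for app, total in sorted(spend_by_app.items(), key=lambda kv: -kv[1]):
        writer.writerow([app, round(total, 4)])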

Works with Any LLM Provider

Same SDK. Same code. Just change the base URL.

OpenAI / Python
client = OpenAI(base_url="https://api.demeterics.com/v1")
Anthropic / Python
client = Anthropic(base_url="https://api.demeterics.com/anthropic")
OpenAI / Node.js
new OpenAI({ baseURL: "https://api.demeterics.com/v1" })
cURL
curl https://api.demeterics.com/v1/chat/completions ...

OpenAI | Anthropic | Groq | Gemini | OpenRouter | 50+ more
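
For a fuller picture of the Anthropic path, here is a hedged sketch of the same one-line change using the official anthropic Python SDK. Assumptions: the dmt_ key is passed as api_key (mirroring the OpenAI example above), the /// APP tag works the same way on this route, and the model name is only a placeholder.

anthropic_example.py
from anthropic import Anthropic

client = Anthropic(
    base_url="https://api.demeterics.com/anthropic",
    api_key="dmt_your_key",  # assumption: the dmt_ key goes here, as in the OpenAI snippet
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; use the Claude model you already call
    max_tokens=256,
    messages=[{
        "role": "user",
        "content": "/// APP support-bot\nHow do I reset my password?",  # same tag convention assumed
    }],
)
print(response.content[0].text)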

Query Your Data via API

Build custom dashboards, set up alerts, or integrate with your existing tools. Full REST API with comprehensive documentation.

View API Reference
GET /api/v1/interactions
// Response
{
  "interactions": [
    {
      "id": "int_abc123",
      "model": "gpt-4",
      "app": "support-bot",
      "tokens_in": 1247,
      "tokens_out": 523,
      "cost_usd": 0.0847,
      "latency_ms": 1423
    }
  ],
  "total": 15847
}
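
The same endpoint also powers lightweight alerting. A sketch, assuming Bearer auth with your dmt_ key and that recent traffic fits in one response (see the API reference for filters and pagination): it computes P90/P99 latency from the latency_ms field shown above and warns when P99 exceeds a budget you choose.

latency_alert.py
import statistics

import requests

API_BASE = "https://api.demeterics.com"
API_KEY = "dmt_your_key"
P99_BUDGET_MS = 3000  # example threshold; tune for your app

resp = requests.get(
    f"{API_BASE}/api/v1/interactions",
    headers={"Authorization": f"Bearer {API_KEY}"},  # auth scheme is an assumption
    timeout=30,
)
resp.raise_for_status()

latencies = [i["latency_ms"] for i in resp.json()["interactions"]]
if len(latencies) >= 2:
    cuts = statistics.quantiles(latencies, n=100)  # 99 percentile cut points
    p90, p99 = cuts[89], cuts[98]
    print(f"P90={p90:.0f}ms  P99={p99:.0f}ms over {len(latencies)} calls")
    if p99 > P99_BUDGET_MS:
        print("ALERT: P99 latency over budget")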

Start in 2 Minutes

Get your API key. Change one URL. See your first data.

100 free credits. No credit card. Self-serve signup.