Full LLM Observability in 2 Minutes

Just Change Your Base URL

Keep your existing OpenAI/Anthropic credits. Get full observability with zero code changes. Debug latency issues, track costs, and never miss a failed request again.

  • Bring Your Own Keys – Use your provider accounts, pay them directly
  • 50+ fields per call – Costs by category, latency breakdown, full conversation history
  • Zero friction – Change one URL, instant visibility

100 free credits included. BYOK: 10% platform fee, you pay providers directly.

Before → After (2 lines changed)
# Before (standard OpenAI)
client = OpenAI(
    api_key="sk-..."
)

# After (with Demeterics observability)
client = OpenAI(
    base_url="https://api.demeterics.com/openai/v1",
    api_key="dmt_..."  # includes your OpenAI key
)

Works with Python, Node.js, Go, Rust, and any HTTP client

Zero Lock-In

Bring Your Own Keys (BYOK)

Keep your existing OpenAI, Anthropic, or Groq accounts. Route traffic through Demeterics and pay providers directly, with a transparent 10% platform fee for observability.

  • Keep Your Credits – Existing provider credits? Keep using them.
  • Transparent Pricing – Provider rates + 10% platform fee. No hidden costs.
  • Switch Anytime – Remove Demeterics? Just change back the URL.
  • Keep Your Rates – Volume discounts and negotiated pricing stay intact.
Two Ways to BYOK
Option 1: Store Your Keys
Register once in Settings → API Keys. Encrypted with Google Cloud KMS.
Authorization: Bearer dmt_your_key
Option 2: Dual-Key (Per-Request)
Pass your vendor key with each request. It's never stored.
Authorization: Bearer dmt_your_key;gsk_vendor_key
Either way:
  • You pay providers directly
  • Full observability in your dashboard
  • 10% platform fee for the service
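As a rough sketch of Option 2 (helper name hypothetical), the combined bearer token is just the two keys joined by a semicolon, which you can build before handing the header to any HTTP client:

```python
def dual_key_header(platform_key: str, vendor_key: str) -> dict:
    """Build the dual-key Authorization header (illustrative helper).

    The vendor key rides along inside the bearer token for this one
    request and is never stored server-side; the 'dmt;vendor' format
    follows the example shown above.
    """
    return {"Authorization": f"Bearer {platform_key};{vendor_key}"}

# Example: Demeterics key + Groq key in a single request header
headers = dual_key_header("dmt_your_key", "gsk_vendor_key")
```

The resulting `headers` dict can be passed as-is to `requests`, `httpx`, or any other client alongside the usual JSON payload.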
Game Changer

Tag Your Prompts. Zero Overhead.

Add metadata, A/B test variants, and audit notes directly in your prompts using /// NAME VALUE syntax. We strip these before the LLM sees them — so you pay nothing extra.

  • Works Everywhere – No SDK, no headers; just edit your prompt
  • Tags Cost Nothing – Stripped before reaching the LLM
  • A/B Testing Built-In – Track variants without code changes
  • Perfect for No-Code – Zapier/Make can't set headers, but CAN edit prompts
See Full Documentation
Your prompt (tags stripped before the LLM):
/// APP customer-support
/// FLOW ticket-response
/// VARIANT gpt4-concise
/// VERSION 2.3.1

You are a helpful customer support agent.

/// This variant uses shorter responses
/// Approved by legal team 2024-12-15

Respond to this ticket: {{{ ticket_text }}}
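The stripping happens server-side before the request is forwarded to the provider. As an illustration only (not the actual implementation), the behavior is roughly: drop every line starting with `///`, then tidy up the blank lines left behind.

```python
import re

def strip_tags(prompt: str) -> str:
    """Sketch of tag stripping: remove '/// NAME VALUE' lines so the
    model never sees them (and you never pay tokens for them)."""
    kept = [ln for ln in prompt.splitlines()
            if not ln.lstrip().startswith("///")]
    # Collapse runs of blank lines left behind by removed tag lines
    cleaned = re.sub(r"\n{3,}", "\n\n", "\n".join(kept))
    return cleaned.strip()

# The prompt above, after stripping, is just the support instructions
# plus the ticket template; every tag and audit note is gone.
```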

Everything You Need. Nothing You Don't.

From debugging to ROI tracking — features that actually matter.

  • Latency p95 – Debug slow calls
  • Error Tracking – Rate limits, 500s
  • Cost Attribution – By team/workflow
  • /// Tagging – Free metadata tags
  • Dual-Key Auth – Secure automation
  • BYOK – Your keys + 10% fee
  • Council API – Feedback from 18 personas
  • Conversion Tracking – Link AI spend to ROI
  • A/B Testing – Prompt variants
  • Data Export – CSV via API
  • Budget Guards – Per-key limits
  • ZDR Mode – Control what's stored

See It In Action

From black box to glass box: watch how Demeterics turns AI spending into crystal-clear visibility.

Full demo: n8n workflows, cost tracking, multi-agent quality control

Start Free Now · Read the Docs
Zero Friction

Integration = Change One URL

No SDK to install. No code rewrite. Just change your base URL and you're done.

# pip install openai
from openai import OpenAI

# Before
# client = OpenAI(api_key="sk-...")

# After (with Demeterics observability)
client = OpenAI(
    base_url="https://api.demeterics.com/openai/v1",
    api_key="dmt_your_key"  # includes your OpenAI key
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": """/// APP my-app
/// FLOW chat
Hello!"""
    }]
)
// npm install openai
import OpenAI from 'openai';

// Before
// const client = new OpenAI({ apiKey: "sk-..." });

// After (with Demeterics observability)
const client = new OpenAI({
    baseURL: "https://api.demeterics.com/openai/v1",
    apiKey: "dmt_your_key"
});

const response = await client.chat.completions.create({
    model: "gpt-4",
    messages: [{
        role: "user",
        content: `/// APP my-app
/// FLOW chat
Hello!`
    }]
});
# Just change the URL - that's it!
curl https://api.demeterics.com/openai/v1/chat/completions \
  -H "Authorization: Bearer dmt_your_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{
      "role": "user",
      "content": "/// APP my-app\n/// FLOW chat\nHello!"
    }]
  }'
# pip install langchain-openai
from langchain_openai import ChatOpenAI

# Before
# llm = ChatOpenAI(model="gpt-4", api_key="sk-...")

# After (with Demeterics observability)
llm = ChatOpenAI(
    model="gpt-4",
    base_url="https://api.demeterics.com/openai/v1",
    api_key="dmt_your_key"
)

# Tags go in your prompt - stripped before LLM, zero cost
response = llm.invoke("""/// APP langchain-app
/// FLOW rag-pipeline
Hello!""")

Works With All Major LLM Providers

One proxy. One dashboard. All your AI.

🤖
OpenAI
🧠
Anthropic
⚡
Groq
💎
Google
🔀
OpenRouter

Integrations

Works With Your Existing Stack

LangChain
Vercel AI
Zapier
Make
n8n
Any HTTP

Start Free. Scale When Ready.

100 free credits to start. BYOK: pay providers directly + 10% platform fee.

  • 100 Free Credits – No card required
  • BYOK Mode – Your keys + 10% fee
  • Unlimited Users – No per-seat fees

Ready to See What's Happening in Your LLM Stack?

Change one URL. Get instant visibility. Debug faster. Spend smarter.

100 free credits. BYOK supported. No credit card required.