SDKs
Demeterics is OpenAI SDK compatible—use the official OpenAI SDKs for Python, Node.js, and Go by simply changing the base URL. No custom SDK required!
GitHub Repository: Full SDK source code, examples, and curl scripts are available at github.com/bluefermion/demeterics
Quick Start
Python (OpenAI SDK)
Install the official OpenAI SDK:
pip install openai
Use Demeterics by changing the base URL:
from openai import OpenAI
# Initialize with Demeterics endpoint
client = OpenAI(
base_url="https://api.demeterics.com/groq/v1", # or /openai/v1, /anthropic/v1, /gemini/v1
api_key="dmt_your_demeterics_api_key"
)
# Make requests as usual
response = client.chat.completions.create(
model="llama-3.3-70b-versatile",
messages=[
{"role": "user", "content": "Explain quantum computing"}
]
)
print(response.choices[0].message.content)
Streaming support:
stream = client.chat.completions.create(
model="llama-3.3-70b-versatile",
messages=[{"role": "user", "content": "Tell me a story"}],
stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
Node.js (OpenAI SDK)
Install the official OpenAI SDK:
npm install openai
Use Demeterics by changing the base URL:
import OpenAI from 'openai';
// Initialize with Demeterics endpoint
const client = new OpenAI({
  baseURL: 'https://api.demeterics.com/groq/v1', // or /openai/v1, /anthropic/v1, /gemini/v1
  apiKey: 'dmt_your_demeterics_api_key'
});
// Make requests as usual
const response = await client.chat.completions.create({
  model: 'llama-3.3-70b-versatile',
  messages: [
    { role: 'user', content: 'Explain quantum computing' }
  ]
});
console.log(response.choices[0].message.content);
Streaming support:
const stream = await client.chat.completions.create({
  model: 'llama-3.3-70b-versatile',
  messages: [{ role: 'user', content: 'Tell me a story' }],
  stream: true
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
Go (OpenAI SDK)
Install the official OpenAI Go SDK:
go get github.com/openai/openai-go
Use Demeterics by changing the base URL:
package main

import (
	"context"
	"fmt"

	"github.com/openai/openai-go"
	"github.com/openai/openai-go/option"
)

func main() {
	client := openai.NewClient(
		option.WithBaseURL("https://api.demeterics.com/groq/v1"), // or /openai/v1, /anthropic/v1, /gemini/v1
		option.WithAPIKey("dmt_your_demeterics_api_key"),
	)

	response, err := client.Chat.Completions.New(context.Background(), openai.ChatCompletionNewParams{
		Model: openai.F(openai.ChatModel("llama-3.3-70b-versatile")),
		Messages: openai.F([]openai.ChatCompletionMessageParamUnion{
			openai.UserMessage("Explain quantum computing"),
		}),
	})
	if err != nil {
		panic(err)
	}

	fmt.Println(response.Choices[0].Message.Content)
}
Multi-Provider Support
Switch between providers by changing the base URL:
| Provider | Base URL | Example Models |
|---|---|---|
| Groq | https://api.demeterics.com/groq/v1 | llama-3.3-70b-versatile, mixtral-8x7b-32768 |
| OpenAI | https://api.demeterics.com/openai/v1 | gpt-4o, gpt-4o-mini, gpt-4 |
| Anthropic | https://api.demeterics.com/anthropic/v1 | anthropic/claude-sonnet-4.5, claude-opus-4-20250514 |
| Gemini | https://api.demeterics.com/gemini/v1 | gemini-2.0-flash-exp, gemini-pro |
| OpenRouter | https://api.demeterics.com/openrouter/v1 | 200+ models from multiple providers |
Example: Switching Providers
Python:
# Groq
groq_client = OpenAI(base_url="https://api.demeterics.com/groq/v1", api_key="dmt_...")
# OpenAI
openai_client = OpenAI(base_url="https://api.demeterics.com/openai/v1", api_key="dmt_...")
# Anthropic (uses Messages API)
anthropic_client = OpenAI(base_url="https://api.demeterics.com/anthropic/v1", api_key="dmt_...")
Node.js:
const groqClient = new OpenAI({ baseURL: 'https://api.demeterics.com/groq/v1', apiKey: 'dmt_...' });
const openaiClient = new OpenAI({ baseURL: 'https://api.demeterics.com/openai/v1', apiKey: 'dmt_...' });
const anthropicClient = new OpenAI({ baseURL: 'https://api.demeterics.com/anthropic/v1', apiKey: 'dmt_...' });
Anthropic (Claude) SDK
For Anthropic's native Messages API, use the Anthropic SDK:
Python:
pip install anthropic
from anthropic import Anthropic
client = Anthropic(
base_url="https://api.demeterics.com/anthropic",
api_key="dmt_your_demeterics_api_key"
)
response = client.messages.create(
model="anthropic/claude-sonnet-4.5",
max_tokens=1024,
messages=[
{"role": "user", "content": "Explain quantum computing"}
]
)
print(response.content[0].text)
Node.js:
npm install @anthropic-ai/sdk
import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic({
  baseURL: 'https://api.demeterics.com/anthropic',
  apiKey: 'dmt_your_demeterics_api_key'
});
const response = await client.messages.create({
  model: 'anthropic/claude-sonnet-4.5',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Explain quantum computing' }
  ]
});
console.log(response.content[0].text);
Common Patterns
Error Handling
Python:
from openai import OpenAI, OpenAIError
client = OpenAI(
base_url="https://api.demeterics.com/groq/v1",
api_key="dmt_your_api_key"
)
try:
    response = client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": "Hello"}]
    )
    print(response.choices[0].message.content)
except OpenAIError as e:
    print(f"Error: {e}")
Async Support
Python (async):
from openai import AsyncOpenAI
client = AsyncOpenAI(
base_url="https://api.demeterics.com/groq/v1",
api_key="dmt_your_api_key"
)
async def main():
    response = await client.chat.completions.create(
        model="llama-3.3-70b-versatile",
        messages=[{"role": "user", "content": "Hello"}]
    )
    print(response.choices[0].message.content)

import asyncio
asyncio.run(main())
Environment Variables
Python:
import os
from openai import OpenAI
client = OpenAI(
base_url="https://api.demeterics.com/groq/v1",
api_key=os.environ["DEMETERICS_API_KEY"] # From environment variable
)
Node.js:
import OpenAI from 'openai';
const client = new OpenAI({
  baseURL: 'https://api.demeterics.com/groq/v1',
  apiKey: process.env.DEMETERICS_API_KEY // From environment variable
});
Framework Integrations
LangChain (Python)
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
base_url="https://api.demeterics.com/groq/v1",
api_key="dmt_your_api_key",
model="llama-3.3-70b-versatile"
)
response = llm.invoke("Explain quantum computing")
print(response.content)
LangChain.js
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
  configuration: {
    baseURL: "https://api.demeterics.com/groq/v1",
    apiKey: "dmt_your_api_key"
  },
  modelName: "llama-3.3-70b-versatile"
});
const response = await llm.invoke("Explain quantum computing");
console.log(response.content);
Automatic Tracking
All requests routed through Demeterics are automatically:
- ✅ Logged to BigQuery with full request/response payloads
- ✅ Tracked for token usage and costs
- ✅ Billed from your Stripe credit balance (or BYOK)
- ✅ Available in the dashboard at demeterics.com/interactions
No additional code required—just use the SDK as normal!
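If you want to cross-check the token counts Demeterics records, they are also available on each SDK response (standard OpenAI SDK response fields; a minimal sketch continuing from the Python Quick Start example above):

usage = response.usage  # populated on every chat completion response
print(f"prompt: {usage.prompt_tokens}, completion: {usage.completion_tokens}, total: {usage.total_tokens}")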
Conversion Tracking
Track business outcomes from your AI interactions using cohort tagging.
Step 1: Tag Prompts with Cohort ID
Add /// COHORT comments to group related interactions:
cohort_id = f"campaign-{user_id}-{int(time.time())}"
response = client.chat.completions.create(
model="llama-3.3-70b-versatile",
messages=[{
"role": "user",
"content": f"""/// APP MarketingBot
/// FLOW email_generation
/// COHORT {cohort_id}
Write an engaging email subject for our summer sale."""
}]
)
Step 2: Submit Business Outcomes
After collecting business metrics, submit them via API:
curl -X POST https://api.demeterics.com/api/v1/cohort/outcome \
-H "Authorization: Bearer dmt_your_key" \
-H "Content-Type: application/json" \
-d '{
"cohort_id": "campaign-user123-1701234567",
"outcome": 1523.50,
"outcome_v2": 45,
"label": "24h revenue and clicks"
}'
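The same call from Python using the requests library (a sketch mirroring the curl payload above; field names and values come from that example):

import requests

resp = requests.post(
    "https://api.demeterics.com/api/v1/cohort/outcome",
    headers={"Authorization": "Bearer dmt_your_key"},
    json={
        "cohort_id": "campaign-user123-1701234567",
        "outcome": 1523.50,      # primary metric, e.g. 24h revenue
        "outcome_v2": 45,        # optional secondary metric, e.g. clicks
        "label": "24h revenue and clicks",
    },
    timeout=30,
)
resp.raise_for_status()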
See Conversion Tracking Guide for full details.
Best Practices
- Use Environment Variables: Never hardcode API keys — see Authentication Guide
- Handle Errors: Implement try/catch for all API calls
- Implement Retries: Use exponential backoff for rate limits (see the sketch after this list)
- Monitor Usage: Check your dashboard for unexpected spikes — see Credits & Pricing
- Rotate Keys: Create new API keys every 90 days
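For the retry recommendation, here is a minimal sketch in Python, assuming the Groq endpoint and model from the Quick Start (the OpenAI SDK also accepts a max_retries option on the client if you prefer built-in retries):

import random
import time

from openai import OpenAI, RateLimitError

client = OpenAI(
    base_url="https://api.demeterics.com/groq/v1",
    api_key="dmt_your_api_key",
)

def chat_with_backoff(messages, max_attempts=5):
    """Retry rate-limited requests with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return client.chat.completions.create(
                model="llama-3.3-70b-versatile",
                messages=messages,
            )
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt + random.random())  # 1s, 2s, 4s, ... plus jitter

response = chat_with_backoff([{"role": "user", "content": "Hello"}])
print(response.choices[0].message.content)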
Related Documentation
| Guide | Description |
|---|---|
| Getting Started | API key setup and first request |
| API Reference | Complete endpoint documentation |
| Prompt Engineering | Metadata tags, templates, A/B testing |
| Conversion Tracking | Cohort-based outcome measurement |
| Authentication | API keys and security |
| Credits & Pricing | Usage-based billing |
Need Help?
- GitHub SDK Repo: github.com/bluefermion/demeterics
- Support: support@demeterics.com
Ready to integrate? Head to the Quick Start Guide to get your API key and make your first request.