SDK Reference

Aqta is OpenAI-compatible: you use the standard OpenAI SDK and change only the base URL (`base_url` in Python, `baseURL` in TypeScript). There is no new package to install and no new API to learn.


Python

Install

```bash
pip install openai
```

Basic usage

```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aqta.ai/v1",
    api_key="sk-aqta-your-key-here",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain AI governance in one paragraph."},
    ],
    temperature=0.7,
    max_tokens=300,
)

print(response.choices[0].message.content)
```

Aqta metadata

Every response includes an aqta object with trace ID, cost, provider, and policy outcome:

```python
print(response.aqta)
# {
#   "trace_id": "tr_abc123",
#   "cost_eur": 0.00041,
#   "provider": "openai",
#   "status": "passed"
# }
```
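The per-response `cost_eur` field makes cost tracking straightforward. As a minimal sketch (the helper and the hard-coded records below are illustrative only, shaped to match the documented `aqta` object):

```python
def total_cost_eur(aqta_records):
    """Sum the cost_eur field across a list of aqta metadata dicts."""
    return sum(record["cost_eur"] for record in aqta_records)

# Hard-coded examples matching the documented aqta shape.
records = [
    {"trace_id": "tr_abc123", "cost_eur": 0.00041, "provider": "openai", "status": "passed"},
    {"trace_id": "tr_def456", "cost_eur": 0.00012, "provider": "openai", "status": "passed"},
]

print(f"{total_cost_eur(records):.5f} EUR")  # 0.00053 EUR
```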

Streaming

```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about compliance."}],
    stream=True,
)

for chunk in stream:
    # Some chunks (e.g. a trailing usage chunk) may carry no choices or an empty delta.
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```

Async

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://api.aqta.ai/v1",
    api_key="sk-aqta-your-key-here",
)

async def main():
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
```

TypeScript / Node.js

Install

```bash
npm install openai
```

Basic usage

```typescript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.aqta.ai/v1',
  apiKey: 'sk-aqta-your-key-here',
});

const response = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Explain AI governance in one paragraph.' },
  ],
  temperature: 0.7,
  max_tokens: 300,
});

console.log(response.choices[0].message.content);
```

Streaming

```typescript
const stream = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'Write a haiku about compliance.' }],
  stream: true,
});

for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? '';
  process.stdout.write(delta);
}
```

Edge / serverless (Next.js)

```typescript
// app/api/chat/route.ts
import OpenAI from 'openai';
import { NextRequest } from 'next/server';

const client = new OpenAI({
  baseURL: 'https://api.aqta.ai/v1',
  apiKey: process.env.AQTA_API_KEY!,
});

export async function POST(req: NextRequest) {
  const { messages } = await req.json();

  const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages,
  });

  return Response.json(response);
}
```

REST / HTTP

Call the API directly with any HTTP client. No SDK required.

```bash
curl https://api.aqta.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-aqta-your-key-here" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

All endpoints follow the OpenAI REST format. See Endpoints for the full reference.


OpenAPI spec

The full OpenAPI 3.1 spec is available at:

https://api.aqta.ai/openapi.json

Import it into Postman, Insomnia, or any OpenAPI-compatible tool to explore and test all endpoints.


Environment variables

Keep your key out of source code:

```bash
# .env
AQTA_API_KEY=sk-aqta-your-key-here
```
```python
# Python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aqta.ai/v1",
    api_key=os.environ["AQTA_API_KEY"],
)
```
```typescript
// TypeScript
const client = new OpenAI({
  baseURL: 'https://api.aqta.ai/v1',
  apiKey: process.env.AQTA_API_KEY!,
});
```
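Tools like python-dotenv can load the `.env` file for you; if you would rather avoid the extra dependency, a minimal loader is easy to sketch (illustrative only; it handles plain `KEY=VALUE` lines and `#` comments, with no quoting rules):

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: KEY=VALUE lines, '#' comments, no quote handling."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: real environment variables take precedence over the file.
            os.environ.setdefault(key.strip(), value.strip())

load_env_file()
api_key = os.environ.get("AQTA_API_KEY")
```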

Supported models

| Provider | Models |
|----------|--------|
| OpenAI | gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo |
| Anthropic | claude-3-5-sonnet-20241022, claude-3-opus-20240229, claude-3-haiku-20240307 |
| Google | gemini-1.5-pro, gemini-2.0-flash |
| Mistral | mistral-large-latest, mistral-small-latest |
| Perplexity | sonar, sonar-pro |
| Groq | llama-3.3-70b-versatile, mixtral-8x7b-32768 |

Model availability depends on your tier. See Rate Limits for details.
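If you want to pick a model per provider programmatically, the table can be mirrored in code. The dict and helper below are an illustrative convenience, not part of any SDK, and availability still depends on your tier:

```python
# Mirrors the supported-models table above.
SUPPORTED_MODELS = {
    "openai": ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-3.5-turbo"],
    "anthropic": ["claude-3-5-sonnet-20241022", "claude-3-opus-20240229", "claude-3-haiku-20240307"],
    "google": ["gemini-1.5-pro", "gemini-2.0-flash"],
    "mistral": ["mistral-large-latest", "mistral-small-latest"],
    "perplexity": ["sonar", "sonar-pro"],
    "groq": ["llama-3.3-70b-versatile", "mixtral-8x7b-32768"],
}

def default_model(provider):
    """Return the first listed model for a provider (hypothetical helper)."""
    return SUPPORTED_MODELS[provider][0]

print(default_model("anthropic"))  # claude-3-5-sonnet-20241022
```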


Questions? hello@aqta.ai

Last updated: March 2026