# Chat Completions API quickstart
`POST https://api.kindo.ai/v1/chat/completions` speaks the OpenAI
Chat Completions API. Stock OpenAI clients work unmodified.
## Prerequisites
- A Kindo API key (see Authentication).
- A model ID from `GET /v1/models` — for example, `claude-sonnet-4-5-20250929`.
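If you fetch the model list programmatically, the response follows the standard OpenAI list shape. A minimal sketch of pulling model IDs out of such a payload — the payload below is illustrative, not an exhaustive catalog:

```python
import json

# Illustrative /v1/models response body in the standard OpenAI list shape
# (assumption: Kindo returns this shape; check your actual response).
payload = '{"object": "list", "data": [{"id": "claude-sonnet-4-5-20250929", "object": "model"}]}'

model_ids = [m["id"] for m in json.loads(payload)["data"]]
print(model_ids)
```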
## openai-python

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.kindo.ai/v1",
    api_key=os.environ["KINDO_API_KEY"],
)

response = client.chat.completions.create(
    model="claude-sonnet-4-5-20250929",
    messages=[
        {"role": "user", "content": "Explain Kubernetes pod security policies."}
    ],
)

print(response.choices[0].message.content)
```

## openai-node
```javascript
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.kindo.ai/v1',
  apiKey: process.env.KINDO_API_KEY
});

const response = await client.chat.completions.create({
  model: 'claude-sonnet-4-5-20250929',
  messages: [
    { role: 'user', content: 'Explain Kubernetes pod security policies.' }
  ]
});

console.log(response.choices[0].message.content);
```

## curl
```bash
curl -X POST https://api.kindo.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KINDO_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "messages": [
      { "role": "user", "content": "Explain Kubernetes pod security policies." }
    ]
  }'
```

## Response shape
```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1710000000,
  "model": "claude-sonnet-4-5-20250929",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Pod Security Policies were Kubernetes rules..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 24,
    "completion_tokens": 134,
    "total_tokens": 158
  }
}
```

## What you get by default
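Whichever client you use, the fields of interest are the same. A minimal sketch of extracting the assistant text and token usage from a raw JSON body shaped like the example above:

```python
import json

# A response body in the shape documented above (values are illustrative).
body = '''
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "claude-sonnet-4-5-20250929",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Pod Security Policies were Kubernetes rules..."},
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 24, "completion_tokens": 134, "total_tokens": 158}
}
'''

data = json.loads(body)
answer = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]
print(answer)
print(total_tokens)
```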
- A single-shot, stateless completion. No persistence, no conversation linkage.
- Your `messages` flow through verbatim. Kindo does not prepend a curated system prompt.
- Tools are only what you list in `tools`. Kindo does not silently inject hosted tools.
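Because no system prompt is injected for you, supply your own as the first message when you want one. A minimal helper sketch — `build_request` and its defaults are illustrative, not part of the API:

```python
def build_request(user_text, system_prompt=None, model="claude-sonnet-4-5-20250929"):
    # Kindo forwards messages verbatim, so any system prompt must be supplied here.
    messages = []
    if system_prompt is not None:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_text})
    return {"model": model, "messages": messages}

body = build_request(
    "Explain Kubernetes pod security policies.",
    system_prompt="Answer as a Kubernetes SRE.",
)
print(body["messages"][0]["role"])
```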
Kindo's opt-in extensions (curated system prompt, hosted tools,
stateful conversations) are available on `/v1/responses`. See the
Chat Actions guide for that surface.
## Migrating from llm.kindo.ai
`llm.kindo.ai` is deprecated. Change the subdomain:
```python
# Old
client = OpenAI(base_url="https://llm.kindo.ai", api_key="...")

# New
client = OpenAI(base_url="https://api.kindo.ai/v1", api_key="...")
```

The request and response bodies are identical. If you previously
sent the bare `api-key:` header, use `Authorization: Bearer`
for `api.kindo.ai`.
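If your code builds HTTP headers by hand, the header change is mechanical. A hedged sketch of rewriting a legacy header dict — `migrate_headers` is illustrative, not a Kindo utility:

```python
def migrate_headers(old):
    # Drop the legacy bare api-key header and replace it with
    # Authorization: Bearer, leaving other headers untouched.
    new = {k: v for k, v in old.items() if k.lower() != "api-key"}
    for k, v in old.items():
        if k.lower() == "api-key":
            new["Authorization"] = f"Bearer {v}"
    return new

migrated = migrate_headers({"api-key": "sk-123", "Content-Type": "application/json"})
print(migrated)
```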
## Next steps
- Request shape — every standard field Kindo honors.
- Streaming — `stream: true` and SSE.
- Tool use — function-calling round trip.
- Errors — error envelope shapes.