# Responses API quickstart
`POST https://api.kindo.ai/v1/responses` speaks the OpenAI Responses API.
Stock OpenAI clients work unmodified. Pick the section below that
matches the client you already use.
## Prerequisites

- A Kindo API key (see Authentication).
- A model ID from `GET /v1/models`, for example `claude-sonnet-4-5-20250929`.
## codex

`codex` is the OpenAI CLI built on the Responses API. Point it at Kindo with two environment variables, then launch it:

```bash
export OPENAI_BASE_URL=https://api.kindo.ai/v1
export OPENAI_API_KEY=$KINDO_API_KEY
codex
```

That's the entire integration. `codex` builds standard Responses requests and Kindo handles them with stock OpenAI semantics.
## openai-python

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.kindo.ai/v1",
    api_key=os.environ["KINDO_API_KEY"],
)

response = client.responses.create(
    model="claude-sonnet-4-5-20250929",
    input="Explain Kubernetes pod security policies in plain English.",
)

print(response.output_text)
```

## openai-node
```ts
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.kindo.ai/v1',
  apiKey: process.env.KINDO_API_KEY,
});

const response = await client.responses.create({
  model: 'claude-sonnet-4-5-20250929',
  input: 'Explain Kubernetes pod security policies in plain English.',
});

console.log(response.output_text);
```

## curl
```bash
curl -X POST https://api.kindo.ai/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KINDO_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "input": "Explain Kubernetes pod security policies in plain English."
  }'
```

## Response shape
The response uses the standard OpenAI Responses output:
```json
{
  "id": "resp_abc123",
  "object": "response",
  "status": "completed",
  "model": "claude-sonnet-4-5-20250929",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        {
          "type": "output_text",
          "text": "Pod Security Policies were Kubernetes rules that controlled what a pod was allowed to do..."
        }
      ],
      "status": "completed"
    }
  ],
  "output_text": "Pod Security Policies were Kubernetes rules that controlled what a pod was allowed to do..."
}
```

Raw HTTP responses also include `output_text` at the top level; the SDK exposes it via a convenience field of the same name.
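To make the relationship between `output` and the top-level `output_text` concrete, here is a minimal sketch that rebuilds the flattened text from a raw response body. The helper `flatten_output_text` is hypothetical (not part of any SDK), and the sample payload is a trimmed version of the response shown above; this mirrors what the SDK's convenience field does for you.

```python
def flatten_output_text(response: dict) -> str:
    """Concatenate every output_text part across assistant messages."""
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue  # skip non-message items (tool calls, reasoning, etc.)
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                parts.append(part["text"])
    return "".join(parts)

# Trimmed sample payload mirroring the response shape above.
raw = {
    "id": "resp_abc123",
    "object": "response",
    "output": [
        {
            "type": "message",
            "role": "assistant",
            "content": [{"type": "output_text", "text": "Pods run containers."}],
        }
    ],
}

print(flatten_output_text(raw))  # -> Pods run containers.
```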
## What you get by default

- A single-shot, stateless completion. Nothing is persisted on Kindo's side, and `previous_response_id`/`conversation` carry no state across calls.
- Your `instructions` (or `input`-array `developer`/`system` messages) flow through verbatim. Kindo does not prepend its curated system prompt.
- Tools are only what you list in `tools`. Kindo does not silently inject `kindo_shell`, `kindo_web_search`, or anything else.
If you want any of those — stateful conversations, a curated Kindo system prompt, or the Kindo-hosted tool catalog — opt in via the Chat Actions extensions.
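As an illustration of the default pass-through behavior, here is a sketch of a request body that supplies its own system prompt and tool list. The field values and the `lookup_cve` function schema are illustrative, not a Kindo-hosted tool; under the defaults described above, exactly this payload (and only this tool) reaches the model.

```python
import json

# Sketch of a stateless request carrying its own system prompt and tools.
# Kindo forwards instructions and tools verbatim; nothing is injected.
payload = {
    "model": "claude-sonnet-4-5-20250929",
    "instructions": "You are a terse Kubernetes security reviewer.",
    "input": "Audit this pod spec for privilege escalation risks.",
    "tools": [
        {
            "type": "function",
            "name": "lookup_cve",  # illustrative schema, not a Kindo tool
            "description": "Fetch details for a CVE identifier.",
            "parameters": {
                "type": "object",
                "properties": {"cve_id": {"type": "string"}},
                "required": ["cve_id"],
            },
        }
    ],
}

# This body would be POSTed to /v1/responses with your Bearer token.
print(json.dumps(payload, indent=2))
```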
## Next steps

- Request shape: every standard Responses field Kindo honors and how it's forwarded.
- Streaming: set `stream: true` and consume SSE.
- Tool use: the function-calling round trip.
- Errors: error envelope shapes.
- Chat Actions extensions: opt in to Kindo prompts, tools, and stateful conversations.