Model catalog

The same model registry backs all three APIs. Whichever model ID you discover via /v1/models works as the model parameter on /v1/chat/completions, /v1/responses, and /v1/messages — Kindo routes to the correct upstream provider for you.

Discover available models

curl https://api.kindo.ai/v1/models \
  -H "Authorization: Bearer $KINDO_API_KEY"

The response uses the standard OpenAI models-list format:

{
  "object": "list",
  "data": [
    {
      "id": "claude-sonnet-4-5-20250929",
      "object": "model",
      "created": 0,
      "owned_by": "kindo"
    },
    {
      "id": "gpt-4o-2024-11-20",
      "object": "model",
      "created": 0,
      "owned_by": "kindo"
    }
  ]
}
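A minimal sketch of consuming that response shape in Python (the `list_model_ids` helper is hypothetical, not part of any Kindo SDK); it simply pulls the usable IDs out of a standard OpenAI-style models-list payload:

```python
def list_model_ids(models_response: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style models-list payload."""
    return [entry["id"] for entry in models_response.get("data", [])]

# The sample response shown above:
payload = {
    "object": "list",
    "data": [
        {"id": "claude-sonnet-4-5-20250929", "object": "model", "created": 0, "owned_by": "kindo"},
        {"id": "gpt-4o-2024-11-20", "object": "model", "created": 0, "owned_by": "kindo"},
    ],
}

print(list_model_ids(payload))
# ['claude-sonnet-4-5-20250929', 'gpt-4o-2024-11-20']
```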

You only see models your organization has access to — model visibility is governed by your user-group configuration.

You can also inspect the same list under Settings > Kindo API > Available Models in the Kindo Terminal.

Naming conventions

Model IDs follow the upstream provider’s published name with the date suffix the provider uses. A few representative examples:

Family            Example IDs
Anthropic Claude  claude-sonnet-4-5-20250929, claude-opus-4-20251125
OpenAI GPT        gpt-4o-2024-11-20, gpt-4o-mini-2024-07-18
OpenAI o-series   o1-2024-12-17, o3-mini-2025-01-31

Kindo does not rename models. If you’ve used these IDs against the upstream provider directly, the same string works against api.kindo.ai.

The exact set evolves — GET /v1/models is the source of truth. Hard-coding a model ID into your client is fine; assuming a model will exist forever is not. Inference endpoints return 400 for an unknown model.
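Since an unknown model fails with a 400 only at inference time, a startup check against the live list catches a stale ID early. A sketch, assuming a hypothetical `assert_model_available` helper fed with IDs from GET /v1/models:

```python
def assert_model_available(model_id: str, available_ids: set[str]) -> None:
    """Fail fast at startup instead of getting a 400 at first inference."""
    if model_id not in available_ids:
        raise ValueError(
            f"Model {model_id!r} is not in GET /v1/models; "
            "it may have been retired or renamed upstream."
        )

# available_ids would normally come from the /v1/models response:
available = {"gpt-4o-2024-11-20", "claude-sonnet-4-5-20250929"}
assert_model_available("gpt-4o-2024-11-20", available)  # passes silently
```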

Cross-API model use

You can use the same model ID on whichever protocol your client speaks:

  • A claude-* model works on /v1/chat/completions, /v1/responses, and /v1/messages — Kindo translates the wire format for you.
  • A gpt-* model works on all three — same translation.

In practice, pick the protocol that matches your SDK; pick the model based on capability, latency, and cost.
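The point is that only the request envelope differs between protocols; the model string does not. A sketch building the two wire formats side by side (helper names are illustrative, not from any SDK):

```python
def chat_completions_body(model: str, user_text: str) -> dict:
    # OpenAI Chat Completions wire format
    return {"model": model, "messages": [{"role": "user", "content": user_text}]}

def messages_body(model: str, user_text: str, max_tokens: int = 1024) -> dict:
    # Anthropic Messages wire format (max_tokens is required there)
    return {
        "model": model,
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": user_text}],
    }

model = "claude-sonnet-4-5-20250929"
a = chat_completions_body(model, "Hello")
b = messages_body(model, "Hello")
assert a["model"] == b["model"]  # same ID, different protocol envelope
```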

Errors

Status  Code                 Meaning
403     model_access_denied  The model exists but your organization is not entitled to call it.
400     model_not_found      The model ID is not registered on inference endpoints. Check GET /v1/models. Emitted by /v1/chat/completions and /v1/responses.

Note: /v1/messages returns 400 invalid_request_error without a code field — see Messages errors.

If your client returns a model error, hit /v1/models first. Most “unknown model” reports trace back to a stale model ID.
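A sketch of client-side triage based on the table above (the function and its return labels are hypothetical; the status codes and error codes are the ones documented here):

```python
def classify_model_error(status: int, error: dict) -> str:
    """Map a Kindo model error to a suggested next step."""
    code = error.get("code")
    if status == 403 and code == "model_access_denied":
        return "ask-admin"           # entitlement problem; retrying won't help
    if status == 400 and code == "model_not_found":
        return "refresh-model-list"  # stale ID: re-check GET /v1/models
    if status == 400 and error.get("type") == "invalid_request_error":
        return "refresh-model-list"  # /v1/messages shape: no code field
    return "unhandled"

print(classify_model_error(400, {"code": "model_not_found"}))
# refresh-model-list
```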

See also