Responses API tool use
The Responses API uses OpenAI’s typed-output tool-call shape. The flow is:

- You send a request with `tools` defined.
- The model decides to call a tool. The response status is `incomplete` and `output` contains a `function_call` item.
- You execute the tool client-side and send the result back as a `function_call_output` input item, referencing the same `call_id`.
- The model continues with the result and emits a final `message`.
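The loop above can be sketched in Python against the raw HTTP shape. This is a minimal sketch: `run_tool_loop`, `post`, and `handlers` are hypothetical names (not part of any SDK), and error handling is omitted.

```python
import json

def run_tool_loop(post, request, handlers):
    """Drive the tool-call round trip. `post` is any callable that sends
    a body to POST /v1/responses and returns the parsed JSON response;
    `handlers` maps tool names to local Python functions."""
    inputs = [{"type": "message", "role": "user", "content": request["input"]}]
    while True:
        resp = post({**request, "input": list(inputs)})
        if resp["status"] != "incomplete":
            return resp  # final answer
        for item in resp["output"]:
            if item["type"] != "function_call":
                continue
            args = json.loads(item["arguments"])    # arguments arrive as a JSON string
            result = handlers[item["name"]](**args)  # client-side execution
            inputs.append(item)                      # echo the call for context
            inputs.append({
                "type": "function_call_output",
                "call_id": item["call_id"],          # must match the call
                "output": json.dumps(result),
            })
```

Injecting `post` keeps the loop testable without a live endpoint; in production it would wrap an HTTP client pointed at `https://api.kindo.ai/v1/responses`.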
Define a function tool
Function tools are defined in the Responses-API tool format
(top-level `name`, `description`, `parameters`):

```json
{
  "model": "claude-sonnet-4-5-20250929",
  "input": "What is the weather in San Francisco?",
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Get the current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": {
          "location": { "type": "string" }
        },
        "required": ["location"]
      }
    }
  ],
  "tool_choice": "auto"
}
```

This is the shape `codex` and openai-python’s `client.responses` produce.
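In Python, the same request can be held as plain dicts. A sketch of the shape above; note that `name`, `description`, and `parameters` sit at the top level of the tool object (unlike the Chat Completions format, which nests them under a `function` key).

```python
# Responses-API tool shape: fields at the top level, no "function" wrapper.
get_weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}

request_body = {
    "model": "claude-sonnet-4-5-20250929",
    "input": "What is the weather in San Francisco?",
    "tools": [get_weather_tool],
    "tool_choice": "auto",
}
```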
Round trip
Set `store: true` to persist the response server-side. The response includes `conversation.id` when `store: true`, which you can use for stateful continuation, or ignore it for a one-shot call.
Step 1 — initial request
```shell
curl -X POST https://api.kindo.ai/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KINDO_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "store": true,
    "input": "What is the weather in San Francisco?",
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          },
          "required": ["location"]
        }
      }
    ]
  }'
```

Step 2 — tool-call response
```json
{
  "id": "resp_abc123",
  "status": "incomplete",
  "incomplete_details": { "reason": "tool_use" },
  "conversation": { "id": "conv_abc123" },
  "output": [
    {
      "type": "function_call",
      "call_id": "call_abc",
      "name": "get_weather",
      "arguments": "{\"location\":\"San Francisco, CA\"}"
    }
  ]
}
```

`status: "incomplete"` with `incomplete_details.reason: "tool_use"` is the canonical signal that the model wants you to run a tool.
Kindo divergence from the OpenAI spec: OpenAI’s published Responses API uses `status: "completed"` with a `function_call` output item when the model calls a tool. Kindo returns `status: "incomplete"` with `incomplete_details.reason: "tool_use"` instead. Stock OpenAI SDKs will see this divergence when consuming raw HTTP responses.
Step 3 — execute the tool client-side
Run `get_weather("San Francisco, CA")` in your application. This is
your code, not Kindo’s — function tools are by definition
client-executed.
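A minimal dispatch sketch. The local `get_weather` stub and `execute` helper are hypothetical; a real implementation would call an actual weather service.

```python
import json

def get_weather(location):
    # Stub: a real app would query a weather service here.
    return {"temperature": 68, "unit": "fahrenheit"}

HANDLERS = {"get_weather": get_weather}

def execute(call):
    """Run one function_call item locally and build the matching
    function_call_output input item. The call_id must be echoed back."""
    args = json.loads(call["arguments"])    # arguments arrive as a JSON string
    result = HANDLERS[call["name"]](**args)
    return {
        "type": "function_call_output",
        "call_id": call["call_id"],
        "output": json.dumps(result),       # stringified result back to the model
    }
```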
Step 4 — submit the result
```shell
curl -X POST https://api.kindo.ai/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KINDO_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "conversation": "conv_abc123",
    "input": [
      {
        "type": "message",
        "role": "user",
        "content": "What is the weather in San Francisco?"
      },
      {
        "type": "function_call",
        "call_id": "call_abc",
        "name": "get_weather",
        "arguments": "{\"location\":\"San Francisco, CA\"}"
      },
      {
        "type": "function_call_output",
        "call_id": "call_abc",
        "output": "{\"temperature\":68,\"unit\":\"fahrenheit\"}"
      }
    ],
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" }
          },
          "required": ["location"]
        }
      }
    ]
  }'
```

The `call_id` must match the one from Step 2. Re-include the `tools` definition so the model sees the schema again, and re-include the prior input messages so the model has the context of the tool call.
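Assembling this body can be sketched with a helper that enforces the `call_id` match and re-includes the tools. `follow_up_request` is a hypothetical name, not an SDK function.

```python
def follow_up_request(model, conversation_id, user_message,
                      call, tool_output, tools):
    """Build the Step 4 body: prior user message, the model's
    function_call echoed verbatim, then the result with the same
    call_id, plus the original tools definition."""
    if tool_output["call_id"] != call["call_id"]:
        raise ValueError("function_call_output.call_id must match the function_call")
    return {
        "model": model,
        "conversation": conversation_id,
        "input": [
            {"type": "message", "role": "user", "content": user_message},
            call,
            tool_output,
        ],
        "tools": tools,
    }
```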
`previous_response_id` is not yet supported; use the `conversation`
field to continue an existing exchange. See Chat Actions
extensions for the stateful continuation recipe.
Step 5 — final answer
```json
{
  "id": "resp_def456",
  "status": "completed",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        { "type": "output_text", "text": "It's 68 °F in San Francisco." }
      ]
    }
  ],
  "output_text": "It's 68 °F in San Francisco."
}
```

tool_choice
| Value | Behavior |
|---|---|
| `"auto"` (default) | The model decides whether to call a tool. |
| `"none"` | The model must produce a message; never a tool call. |
| `"required"` | The model must call at least one tool. |
| `{type: "function", name: "..."}` | The model must call exactly that function. |
| `{type: "allowed_tools", mode: "auto", tools: [...]}` | Restricts the model to a subset of declared tools. Works for `function`, `mcp`, and `kindo_*` tools. |
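A client-side sanity check over these shapes might look like the following sketch; `is_valid_tool_choice` is a hypothetical helper, not an API guarantee.

```python
def is_valid_tool_choice(tc):
    """Accept the tool_choice shapes from the table above."""
    if tc in ("auto", "none", "required"):
        return True
    if isinstance(tc, dict) and tc.get("type") == "function":
        return isinstance(tc.get("name"), str)   # must name one function
    if isinstance(tc, dict) and tc.get("type") == "allowed_tools":
        return isinstance(tc.get("tools"), list)  # must list the subset
    return False
```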
Other tool types
`POST /v1/responses` also accepts:

- `{type: "mcp", server_label: "<your-mcp-server>"}` — MCP tools registered against your organization.
- `{type: "kindo_<name>"}` — individual Kindo-hosted tools (opt-in).
- `{type: "kindo_tools"}` — sugar that expands to the full Kindo hosted-tool catalog.
Stock OpenAI built-in types (`web_search`, `file_search`, `computer`,
`shell`, etc.) are forwarded verbatim and dispatched to the upstream
model when supported.
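Putting the types together, a single `tools` array can mix them. A sketch under the assumption that these type strings are accepted side by side as listed above; the `server_label` value is a placeholder for your registered MCP server.

```python
tools = [
    # Client-executed function tool.
    {"type": "function", "name": "get_weather",
     "description": "Get the current weather for a city.",
     "parameters": {"type": "object",
                    "properties": {"location": {"type": "string"}},
                    "required": ["location"]}},
    # MCP tools registered against your organization.
    {"type": "mcp", "server_label": "my-mcp-server"},
    # Sugar for the full Kindo hosted-tool catalog.
    {"type": "kindo_tools"},
    # Stock OpenAI built-in, forwarded to the upstream model when supported.
    {"type": "web_search"},
]
```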
See Chat Actions extensions for the full hosted-tool catalog and stateful flow patterns.
See also
- Request shape — full field reference.
- Streaming — `function_call_arguments.delta` events.
- Chat Actions extensions — Kindo-hosted tools.
- Errors — error envelopes for tool-call failures and provider passthrough.