Responses API tool use

The Responses API uses OpenAI’s typed-output tool-call shape. The flow is:

  1. You send a request with tools defined.
  2. The model decides to call a tool. The response status is incomplete and output contains a function_call item.
  3. You execute the tool client-side and send the result back as a function_call_output input item, referencing the same call_id.
  4. The model continues with the result and emits a final message.
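
The decision point in step 2 can be expressed as a small predicate. A minimal Python sketch over the parsed response JSON (the helper name is illustrative, not part of any SDK):

```python
def pending_tool_calls(response: dict) -> list[dict]:
    """Return the function_call items the model wants executed, or [].

    Assumes the response shape described in this guide: status "incomplete"
    with incomplete_details.reason "tool_use".
    """
    if response.get("status") != "incomplete":
        return []
    if response.get("incomplete_details", {}).get("reason") != "tool_use":
        return []
    return [item for item in response.get("output", [])
            if item.get("type") == "function_call"]
```

An empty list means there is nothing to run: the response is either complete or incomplete for some other reason.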

Define a function tool

Function tools are defined in the Responses-API tool format (top-level name, description, parameters):

{
  "model": "claude-sonnet-4-5-20250929",
  "input": "What is the weather in San Francisco?",
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Get the current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": {
          "location": { "type": "string" }
        },
        "required": ["location"]
      }
    }
  ],
  "tool_choice": "auto"
}

This is the same shape that the codex CLI and openai-python's client.responses methods produce.
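
Note that name, description, and parameters sit at the top level of the tool object, unlike the Chat Completions format, which nests them under a function key. A small illustrative helper (not part of any SDK) makes the shape explicit:

```python
def function_tool(name: str, description: str, parameters: dict) -> dict:
    """Build a Responses-API function tool definition: name, description,
    and parameters are top-level fields of the tool object."""
    return {
        "type": "function",
        "name": name,
        "description": description,
        "parameters": parameters,
    }
```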

Round trip

Set store: true to persist the response server-side. When store: true, the response includes conversation.id, which you can use for stateful continuation or ignore for a one-shot call.

Step 1 — initial request

curl -X POST https://api.kindo.ai/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KINDO_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "store": true,
    "input": "What is the weather in San Francisco?",
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": { "location": { "type": "string" } },
          "required": ["location"]
        }
      }
    ]
  }'

Step 2 — tool-call response

{
  "id": "resp_abc123",
  "status": "incomplete",
  "incomplete_details": { "reason": "tool_use" },
  "conversation": { "id": "conv_abc123" },
  "output": [
    {
      "type": "function_call",
      "call_id": "call_abc",
      "name": "get_weather",
      "arguments": "{\"location\":\"San Francisco, CA\"}"
    }
  ]
}

status: "incomplete" with incomplete_details.reason: "tool_use" is the canonical signal that the model wants you to run a tool.

Kindo divergence from OpenAI spec: OpenAI’s published Responses API uses status: "completed" with a function_call output item when the model calls a tool. Kindo returns status: "incomplete" with incomplete_details.reason: "tool_use" instead. Clients built on stock OpenAI SDKs will encounter this divergence when consuming raw HTTP responses.

Step 3 — execute the tool client-side

Run get_weather("San Francisco, CA") in your application. This is your code, not Kindo’s — function tools are by definition client-executed.
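
A minimal dispatch sketch in Python, assuming a local get_weather implementation (the handler below is a stand-in for your real code):

```python
import json

def get_weather(location: str) -> dict:
    # Stand-in for your real implementation (weather API call, cache, etc.).
    return {"temperature": 68, "unit": "fahrenheit"}

HANDLERS = {"get_weather": get_weather}

def run_tool(call: dict) -> str:
    """Execute one function_call item locally; return a JSON string result."""
    args = json.loads(call["arguments"])  # arguments arrive as a JSON-encoded string
    return json.dumps(HANDLERS[call["name"]](**args))
```

Note that arguments is a JSON-encoded string, not an object, so it must be parsed before the handler is invoked.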

Step 4 — submit the result

curl -X POST https://api.kindo.ai/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KINDO_API_KEY" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "conversation": "conv_abc123",
    "input": [
      { "type": "message", "role": "user", "content": "What is the weather in San Francisco?" },
      {
        "type": "function_call",
        "call_id": "call_abc",
        "name": "get_weather",
        "arguments": "{\"location\":\"San Francisco, CA\"}"
      },
      {
        "type": "function_call_output",
        "call_id": "call_abc",
        "output": "{\"temperature\":68,\"unit\":\"fahrenheit\"}"
      }
    ],
    "tools": [
      {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
          "type": "object",
          "properties": { "location": { "type": "string" } },
          "required": ["location"]
        }
      }
    ]
  }'

The call_id must match the one from Step 2. Re-include the tools definition so the model sees the schema again. Re-include the prior input messages so the model has the context of the tool call. previous_response_id is not yet supported; use the conversation field to continue an existing exchange. See Chat Actions extensions for the stateful continuation recipe.
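
The input array for this request can be assembled mechanically. A Python sketch (the helper name is illustrative):

```python
import json

def follow_up_input(user_text: str, call: dict, tool_result: dict) -> list[dict]:
    """Rebuild the follow-up input array: the original user message, the
    model's function_call item, and the matching function_call_output."""
    return [
        {"type": "message", "role": "user", "content": user_text},
        call,  # echo the function_call item exactly as received
        {
            "type": "function_call_output",
            "call_id": call["call_id"],  # must match the call_id from Step 2
            "output": json.dumps(tool_result),
        },
    ]
```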

Step 5 — final answer

{
  "id": "resp_def456",
  "status": "completed",
  "output": [
    {
      "type": "message",
      "role": "assistant",
      "content": [
        { "type": "output_text", "text": "It's 68 °F in San Francisco." }
      ]
    }
  ],
  "output_text": "It's 68 °F in San Francisco."
}

tool_choice

  • "auto" (default): The model decides whether to call a tool.
  • "none": The model must produce a message, never a tool call.
  • "required": The model must call at least one tool.
  • {type: "function", name: "..."}: The model must call exactly that function.
  • {type: "allowed_tools", mode: "auto", tools: [...]}: Restricts the model to a subset of declared tools. Works for function, mcp, and kindo_* tools.
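
For example, forcing the model to call the get_weather tool from earlier looks like this (request body sketch):

```json
{
  "model": "claude-sonnet-4-5-20250929",
  "input": "What is the weather in San Francisco?",
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Get the current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": { "location": { "type": "string" } },
        "required": ["location"]
      }
    }
  ],
  "tool_choice": { "type": "function", "name": "get_weather" }
}
```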

Other tool types

POST /v1/responses also accepts:

  • {type: "mcp", server_label: "<your-mcp-server>"} — MCP tools registered against your organization.
  • {type: "kindo_<name>"} — individual Kindo-hosted tools (opt-in).
  • {type: "kindo_tools"} — sugar that expands to the full Kindo hosted-tool catalog.

Stock OpenAI built-in types (web_search, file_search, computer, shell, etc.) are forwarded verbatim and dispatched to the upstream model when supported.
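
Different tool types can be mixed in a single tools array. A sketch combining a function tool, an MCP server, and the full Kindo catalog (replace the server_label placeholder with your registered server):

```json
{
  "model": "claude-sonnet-4-5-20250929",
  "input": "What is the weather in San Francisco?",
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Get the current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": { "location": { "type": "string" } },
        "required": ["location"]
      }
    },
    { "type": "mcp", "server_label": "<your-mcp-server>" },
    { "type": "kindo_tools" }
  ]
}
```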

See Chat Actions extensions for the full hosted-tool catalog and stateful flow patterns.

See also