Multi-Agent Coordination Patterns

Complex security operations often require multiple specialized agents working together. Kindo currently supports agent coordination through external systems and triggers rather than direct agent-to-agent invocation. This guide covers proven patterns for building multi-agent workflows.

Why Multiple Agents?

Not every workflow needs multiple agents. Use multiple agents when:

  • Different permission levels — Investigation vs. remediation require different access
  • Human approval gates — Separate analysis from action with human review
  • Team boundaries — Different teams own different parts of the workflow
  • Schedule differences — Continuous monitoring vs. weekly reporting
  • Separation of concerns — Simpler, more testable individual agents

A single multi-step workflow agent is simpler and preferred when all steps can run with the same permissions, at the same time, without human review.

Pattern 1: Trigger Chaining via Ticketing System

How It Works

Agent A writes a result to a ticketing system (Jira, ServiceNow) → Agent B triggers on that ticket event.

Architecture

[Agent A: Investigate] → [External System] → [Agent B: Remediate]

Agent A (Investigate):
  Trigger: SIEM alert
  LLM: Analyze finding
  API Action: Create ticket with structured findings in a JSON custom field

External System (Jira):
  Ticket created → assignment to "AutoRemediate" triggers Agent B

Agent B (Remediate):
  Trigger: Ticket assignment to "AutoRemediate"
  LLM: Parse findings
  API Action: Execute fix
  API Action: Update ticket

Step-by-Step Setup

Agent A (Investigate):

  1. Create a trigger agent monitoring SIEM integration
  2. Add LLM step: analyze the alert, classify severity, identify affected systems
  3. Add API Action step: create Jira ticket with findings as structured JSON
  4. Set ticket assignee to a sentinel value (e.g., “auto-remediate-queue”)
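The structured JSON that Agent A writes into the custom field might look like the following sketch. The field names and severity scale are illustrative conventions, not a Kindo or Jira requirement — the point is that Agent B can recover the whole finding with a single parse.

```python
import json

# Hypothetical structured-findings payload Agent A writes into a Jira
# custom field. All field names here are illustrative assumptions.
findings = {
    "schema_version": "1.0",
    "alert_id": "SIEM-2024-0042",
    "severity": "critical",
    "affected_systems": ["web-01", "web-02"],
    "summary": "Credential stuffing detected against admin portal",
    "recommended_action": "isolate_host",
}

# Serialize once so Agent B can read the field with a single json.loads().
custom_field_value = json.dumps(findings)
print(custom_field_value)
```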

Agent B (Remediate):

  1. Create a trigger agent monitoring Jira
  2. Set trigger condition: Assignee = “auto-remediate-queue” AND Priority = “Critical”
  3. Add LLM step: parse the JSON findings from Agent A, determine remediation steps
  4. Add API Action steps: execute remediation (isolate host, rotate credential, etc.)
  5. Add API Action step: update the Jira ticket with remediation report
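Agent B's parsing step (step 3 above) can be sketched as a small dispatch from the recommended action to a remediation routine. The function and field names are hypothetical; the real remediation would be API Action steps against your infrastructure.

```python
import json

# Hypothetical remediation routines; in practice these would be API Actions.
def isolate_host(systems):
    return f"isolated: {', '.join(systems)}"

def rotate_credential(systems):
    return f"rotated credentials on: {', '.join(systems)}"

REMEDIATIONS = {
    "isolate_host": isolate_host,
    "rotate_credential": rotate_credential,
}

def remediate(custom_field_value: str) -> str:
    """Parse Agent A's JSON findings and dispatch to a remediation."""
    findings = json.loads(custom_field_value)
    action = REMEDIATIONS.get(findings["recommended_action"])
    if action is None:
        raise ValueError(f"unknown action: {findings['recommended_action']}")
    return action(findings["affected_systems"])

result = remediate(json.dumps({
    "recommended_action": "isolate_host",
    "affected_systems": ["web-01"],
}))
print(result)  # isolated: web-01
```

Rejecting unknown actions explicitly (rather than defaulting to a no-op) keeps a malformed or tampered ticket from silently skipping remediation.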

Optional: Human Approval Gate:

  1. Agent A creates ticket assigned to a human reviewer
  2. Human reviews findings, changes assignee to “auto-remediate-queue”
  3. Agent B triggers only after human approval

Pros

  • Full audit trail in the ticketing system
  • Human approval naturally fits between agents
  • Each agent is independently testable
  • Works with any ticketing integration

Cons

  • Latency: depends on ticketing system webhook delivery
  • Coupling: agents are coupled through ticket field conventions

Pattern 2: Direct Webhook Chaining

How It Works

Agent A makes an API Action call to Agent B’s direct webhook URL.

Step-by-Step Setup

  1. Create Agent B as a Triggered agent with a direct webhook trigger
    • Note the webhook URL and secret token
  2. Create Agent A as any agent type
    • Final API Action step: POST to Agent B’s webhook URL
    • Include the structured output from Agent A’s analysis in the POST body
  3. Agent B fires immediately when it receives the webhook
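The final API Action in step 2 amounts to a signed POST. The sketch below shows one plausible shape, assuming a bearer-token header; the actual URL, token header name, and payload format come from Agent B's trigger configuration.

```python
import json
import urllib.request

# Placeholder values -- substitute the webhook URL and secret token from
# Agent B's trigger configuration, and keep the token out of source control.
WEBHOOK_URL = "https://example.invalid/webhooks/agent-b"
SECRET_TOKEN = "replace-with-secret"

def build_request(payload: dict) -> urllib.request.Request:
    """Build the POST that hands Agent A's structured output to Agent B."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        WEBHOOK_URL,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {SECRET_TOKEN}",
        },
    )

req = build_request({"severity": "critical", "summary": "example finding"})
# urllib.request.urlopen(req)  # uncomment to actually send
```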

Pros

  • Fastest coordination pattern (no external system delay)
  • No external system dependency
  • Simple to set up

Cons

  • No built-in approval gate (fully automated)
  • No persistent record of the handoff (unless agents write logs)
  • Webhook tokens must be managed securely

When to Use

  • Fully automated pipelines where speed matters
  • Internal agent coordination that doesn’t need human oversight
  • Chaining lightweight processing stages

Pattern 3: Shared External State

How It Works

Multiple agents read from and write to the same external system (Notion database, Google Sheet, Confluence page) on independent schedules.

Example: Security Posture Dashboard

  • Agent 1 (Vuln Scanner): Scheduled daily, writes vulnerability counts to Notion
  • Agent 2 (Compliance Checker): Scheduled weekly, writes compliance status to same Notion DB
  • Agent 3 (Posture Report): Scheduled monthly, reads all data, generates executive summary

Setup

  1. Create a shared Notion database (or Google Sheet) with defined columns
  2. Each agent uses API Actions to read/write its designated columns
  3. Use timestamps and agent IDs to track provenance
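A provenance-stamped row (step 3) can be as simple as the sketch below. The column names are conventions this guide suggests, not fields that Kindo, Notion, or Google Sheets require.

```python
from datetime import datetime, timezone

def make_row(agent_id: str, metric: str, value) -> dict:
    """Build a row with provenance fields for the shared database."""
    return {
        "agent_id": agent_id,                                   # who wrote it
        "recorded_at": datetime.now(timezone.utc).isoformat(),  # when
        "metric": metric,
        "value": value,
    }

row = make_row("vuln-scanner", "critical_vulns", 7)
print(row)
```

With every row carrying `agent_id` and `recorded_at`, the monthly reporting agent can filter and order data without any knowledge of the agents that produced it.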

Pros

  • Agents are fully decoupled (don’t know about each other)
  • Easy to add new agents to the ecosystem
  • Persistent, human-readable state

Cons

  • No real-time coordination (schedule-dependent)
  • Potential race conditions if agents write concurrently
  • Requires consistent data format conventions

Pattern 4: Manual Handoff with Structured Output

How It Works

Agent A produces structured output → human copies it → human provides it as input to Agent B.

When This Is the Right Choice

  • Regulatory requirement for human review before action
  • Low-volume workflows (incident-level, not alert-level)
  • Initial deployment before automating the handoff

How to Set It Up

  1. Agent A: ensure final LLM step outputs well-structured JSON or markdown
  2. Agent B: define a workflow input that accepts the output format from Agent A
  3. Human: run Agent A → review output → run Agent B with output as input

Transitioning to Automation

  • Start with manual handoff to validate the workflow
  • Once confident in Agent A’s output quality, switch to Pattern 1 or 2

Choosing a Pattern

  • Human approval between agents → Trigger Chaining (with approval gate) or Manual Handoff
  • Fastest automated handoff → Direct Webhook
  • Agents run on different schedules → Shared External State
  • Full audit trail → Trigger Chaining (ticketing system)
  • Simplest setup → Manual Handoff or Direct Webhook
  • Maximum decoupling → Shared External State

Design Principles for Multi-Agent Systems

  1. Start with one agent, split when needed. Don’t over-architect. Begin with a single multi-step workflow and only split into multiple agents when you have clear separation requirements (permissions, schedules, teams).

  2. Use structured output formats. JSON with a defined schema is the lingua franca between agents. Include version fields so you can evolve the format over time.

  3. Make each agent independently testable. You should be able to run Agent B with mock input without needing Agent A. This makes debugging and iteration much faster.

  4. Log everything. Each agent should record what it received, what it decided, and what it did — in the ticketing system or external store, not just in Kindo Terminal.

  5. Plan for failure. What happens if Agent B fails? Design agents to be idempotent when possible (running twice produces the same result as running once).
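Principle 2's versioned formats can be sketched as version-aware parsing on the receiving side: the consumer checks `schema_version` before trusting the payload shape. The versions and field names below are illustrative assumptions.

```python
import json

def parse_findings(raw: str) -> dict:
    """Normalize findings from any known schema version to one internal shape."""
    data = json.loads(raw)
    version = data.get("schema_version", "1.0")
    if version == "1.0":
        return {"severity": data["severity"], "systems": data["affected_systems"]}
    if version == "2.0":
        # Hypothetical v2 renamed the field; normalize it here.
        return {"severity": data["severity"], "systems": data["hosts"]}
    raise ValueError(f"unsupported schema_version: {version}")

v1 = json.dumps({"schema_version": "1.0", "severity": "high",
                 "affected_systems": ["db-01"]})
print(parse_findings(v1))
```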
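Principle 5's idempotency can be sketched as a guard that checks whether an action was already applied before executing it. Here the state store is an in-memory set for illustration; in practice it would be the ticket itself or the shared external store.

```python
# In-memory stand-in for durable state (e.g., a ticket field).
applied_actions: set[str] = set()

def remediate_once(ticket_id: str, action: str) -> str:
    """Apply a remediation at most once per (ticket, action) pair."""
    key = f"{ticket_id}:{action}"
    if key in applied_actions:
        return "skipped (already applied)"
    applied_actions.add(key)
    return "applied"

first = remediate_once("JIRA-101", "isolate_host")
second = remediate_once("JIRA-101", "isolate_host")
print(first, second)  # applied skipped (already applied)
```

Because a retry is a no-op, Agent B can safely re-fire after a webhook redelivery or a partial failure.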

Limitations

Kindo currently does not support direct agent-to-agent invocation. Agents cannot call other agents as tools.

Kindo currently does not provide a built-in orchestration layer for agent pipelines. The patterns above use external systems for coordination.

Data passing between agents requires explicit configuration (API Actions, webhooks, or manual copy).

These are constraints of the current platform rather than permanent design decisions.