Memory and Persistence Patterns

Kindo agents currently operate within a single conversation or run context. This guide covers patterns for extending that scope to persistent, cross-run, and cross-agent memory using platform features and external systems.

Overview

Kindo currently scopes agent memory to individual runs. These patterns extend that scope using integrations and external systems:

| Pattern | Scope | Read | Write | Durability | Complexity |
| --- | --- | --- | --- | --- | --- |
| Knowledge Store | Org-wide | ✅ | ❌ (manual upload) | High | Low |
| Sandbox | Single run | ✅ | ✅ | None (ephemeral) | Low |
| External System | Cross-workflow | ✅ via API/MCP | ✅ via API/MCP | High | Medium |
| Git Repository | Team-wide | ✅ (manual sync) | ✅ (commits) | Very high | Medium |

Pattern 1: Knowledge Store — Static Reference Data

When to Use

  • Agent needs access to policies, runbooks, compliance frameworks, threat intel feeds
  • Data changes infrequently (updated by humans, not agents)

How It Works

  1. Upload files to Knowledge Store (PDF, markdown, CSV, etc.)
  2. Attach Knowledge Store to agent(s) — multiple agents can share the same files
  3. Agent references files via Library Search or direct attachment

Limitations

  • Read-only: Agents cannot update Knowledge Store files
  • Manual upload: No API for programmatic updates (currently)
  • File size limits: Uploads are subject to platform file size limits

Example

Upload your organization’s incident response playbook as a PDF. Attach it to all SOC agents — they reference it when classifying incidents.

Pattern 2: Sandbox — Ephemeral Per-Run Memory

When to Use

  • Intermediate results within a multi-step workflow
  • Large data that needs filtering before LLM processing
  • Temporary files (scripts, extracted data, reports)

How It Works

  1. Large outputs from API Action steps and tools are written to the sandbox automatically
  2. LLM steps can read/write sandbox files using shell commands
  3. Data persists for the duration of the workflow run, then is discarded
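As a sketch of step 2, an LLM step could run a short script that reduces a large tool output in the sandbox to only what the model needs. The file names and the `cvss` field are assumptions for illustration, not a Kindo API:

```python
import json

def filter_findings(raw_path: str, out_path: str, min_cvss: float = 7.0) -> int:
    """Reduce a large scan result in the sandbox to its high-severity
    subset, so the LLM step only has to read a small file."""
    with open(raw_path) as f:
        findings = json.load(f)
    high = [item for item in findings if item.get("cvss", 0) >= min_cvss]
    with open(out_path, "w") as f:
        json.dump(high, f, indent=2)
    return len(high)
```

A subsequent LLM step would then read only the filtered file rather than the full raw output.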

Cross-Reference

→ See Working with Large Context for detailed sandbox techniques

Limitations

  • Ephemeral: Gone after the run ends
  • Not shareable: Sandbox contents cannot be shared between agents or across runs

Pattern 3: External Systems — Persistent Cross-Workflow Memory

When to Use

  • Agent needs to remember findings from previous runs
  • Multiple agents need to share state (e.g., investigate → remediate pipeline)
  • Audit trail required for agent decisions

How It Works

Write: Use API Action steps to create/update records in external systems:

  • Create Jira tickets with structured findings
  • Post to Slack channels as a persistent log
  • Write to Notion/Confluence pages
  • Push to Google Sheets for tabular data

Read: Use API Action steps at the start of a workflow to load previous state:

  • Query Jira for tickets created by previous agent runs
  • Read from a designated Notion “agent memory” page
  • Fetch latest entries from a Google Sheet

Architecture

  1. Agent run N writes structured JSON to a Notion page
  2. Agent run N+1 reads that page as its first step
  3. Run N+1 processes new data
  4. Run N+1 writes updated state back
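A minimal sketch of that read-process-write loop, using a local JSON file as a stand-in for the external system (in a real workflow, the read and write would be API Action steps against Notion, Jira, or similar; the field names are illustrative):

```python
import json
import os
from datetime import datetime, timezone

class WorkflowMemory:
    """Stand-in for an external memory page such as a Notion page."""

    def __init__(self, path: str):
        self.path = path

    def read_state(self) -> dict:
        if not os.path.exists(self.path):
            return {"run": 0, "findings": []}  # first run: empty state
        with open(self.path) as f:
            return json.load(f)

    def write_state(self, state: dict) -> None:
        state["updated_at"] = datetime.now(timezone.utc).isoformat()
        with open(self.path, "w") as f:
            json.dump(state, f, indent=2)

# Run N+1: read previous state, process new data, write back.
memory = WorkflowMemory("agent_memory.json")
state = memory.read_state()
state["run"] += 1
state["findings"].append({"cve": "CVE-2024-0001", "run": state["run"]})  # placeholder finding
memory.write_state(state)
```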

Example: Vulnerability Tracking Over Time

  1. Weekly scan agent writes top 10 findings to a Notion database
  2. Each row: CVE, severity, first seen date, current status, remediation owner
  3. Next week’s run reads the database → compares with new scan → updates statuses → adds new findings → flags regressions
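Step 3's compare-and-update logic might look like the following sketch. The `cve` and `status` field names are assumptions about the Notion database schema:

```python
def reconcile(previous: list[dict], current: list[dict]) -> dict:
    """Compare last week's findings with a new scan.
    Returns a status per CVE: 'open', 'new', 'resolved', or 'regression'."""
    prev = {f["cve"]: f for f in previous}
    curr = {f["cve"]: f for f in current}
    result = {}
    for cve in curr:
        if cve not in prev:
            result[cve] = "new"
        elif prev[cve].get("status") == "resolved":
            result[cve] = "regression"  # fixed before, seen again
        else:
            result[cve] = "open"
    for cve in prev:
        if cve not in curr and prev[cve].get("status") != "resolved":
            result[cve] = "resolved"  # no longer detected
    return result
```

The returned statuses would then be written back to the database, closing the loop for the next weekly run.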

Design Considerations

  • Choose familiar systems: Use tools your team already uses (don’t add new tools just for agent memory)
  • Use structured formats: JSON in Jira custom fields, database rows in Notion — so agents can parse reliably
  • Include provenance: Timestamps and run IDs so you can trace which agent run wrote what
  • Consider rate limits: Especially for high-frequency scheduled agents hitting external APIs
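The structured-format and provenance points can be combined in one small wrapper. This is a sketch of a team convention, not a Kindo feature; `run_id` and the field names are assumptions:

```python
import json
from datetime import datetime, timezone

def make_memory_record(run_id: str, payload: dict) -> str:
    """Wrap agent output with provenance metadata before writing it
    to the external system, so later runs can trace who wrote what, when."""
    record = {
        "run_id": run_id,      # identifier of the workflow run that wrote this
        "written_at": datetime.now(timezone.utc).isoformat(),
        "schema_version": 1,   # bump when the payload shape changes
        "data": payload,
    }
    return json.dumps(record)
```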

Pattern 4: Git Repository — Versioned Agent Configuration

When to Use

  • Managing production agents across a team
  • Need change history, rollback capability, peer review for prompt changes
  • Regulatory/compliance requirement for audit trail of agent behavior changes

How It Works

Maintain a Git repository with agent configurations:

```
agents/
  alert-triage/
    README.md           # Purpose, owner, version
    system-prompt.md    # The agent's system prompt
    steps.yaml          # Step definitions and configuration
    CHANGELOG.md        # Version history
  vuln-reporter/
    ...
knowledge-store/
  playbooks/
  compliance-frameworks/
```

When making changes: update in Git → review via PR → apply to Kindo manually.
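As an illustration, a steps.yaml in this layout might record the configuration that gets re-entered in the Kindo UI. This is a repository convention rather than a Kindo-defined schema; every field shown is an assumption:

```yaml
# agents/alert-triage/steps.yaml -- team convention, not a Kindo schema
agent: alert-triage
owner: soc-team@example.com   # hypothetical owner
version: 1.4.0
steps:
  - name: fetch-alerts
    type: api_action          # mirrors an API Action step in the UI
    description: Pull untriaged alerts from the SIEM
  - name: classify
    type: llm                 # mirrors an LLM step in the UI
    system_prompt_file: system-prompt.md
```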

→ See Prompt and Agent Configuration Management for full versioning and governance guidance.

Limitations

  • Manual sync: Currently no Git → Kindo auto-deploy pipeline
  • Recreation required: Agent configuration must be recreated in the Kindo UI after Git changes

Customer Evidence

A customer managing 6 production agents maintains a GitHub repository with versioned prompts, changelogs, and a meta-agent that keeps the repo README up to date, showing that this pattern holds up in day-to-day production use.

Choosing a Pattern

| Question | Pattern |
| --- | --- |
| Agent needs static reference data? | Knowledge Store |
| Need intermediate results within one run? | Sandbox |
| Agent needs to remember across runs? | External System |
| Need team-wide agent config management? | Git Repository |
| Multiple agents share state? | External System |
| Compliance requires audit trail? | Git Repository + External System |

Limitations and Roadmap Context

Kindo currently does not provide a built-in persistent memory store that agents can write to and read from across runs. The patterns above use existing integrations to achieve this.

Kindo currently does not support programmatic Knowledge Store updates via API. File uploads are manual through the UI.

These are architectural constraints of the current platform, not fundamental design decisions.