Upgrade

Move a Self-Managed Kindo (SMK) deployment to a new release — safely, repeatably, and with a clear rollback path. This page covers the standard kindo upgrade flow, the one-shot reconstruction for clusters installed from the legacy kindo-helm-e2e-dist Makefile, and the MCP-only partial rebuild for releases that do not require a full stack bump.

Release cadence and versioning

Kindo ships a monthly release train. Releases are tagged with calendar versions — for example 2026.04.0.

Standard upgrade: kindo upgrade

The kindo upgrade command is the single entry point for all version moves once kindo install has been run at least once. It replays four steps against the cluster, using the install-contract.yaml, environment-bindings.yaml, and kindo-secrets-config that already live in the cluster.

Preview the upgrade

Always run --plan first. It is read-only — no cluster changes, no secret writes.

Terminal window
kindo upgrade --version 2026.04.0 --plan

The plan surfaces:

  • The domain, current version, and target version.
  • Prerequisite check (kubectl, helm, helmfile, yq, cluster reachability).
  • Chart availability — every periphery and application chart is probed against the registry at the target version. If any chart cannot be pulled, the upgrade will fail — fix credentials or the version string before applying.
  • The four upgrade steps in the order they will run.

Apply the upgrade

Once --plan is clean, run --apply:

Terminal window
kindo upgrade --version 2026.04.0 --apply

Optional flags:

Terminal window
# Preview helmfile diffs without deploying
kindo upgrade --version 2026.04.0 --apply --dry-run
# Stream raw helmfile output for debugging
kindo upgrade --version 2026.04.0 --apply --verbose

The CLI loads the centralized secrets from kindo-secrets-config in the kindo-system namespace, logs in to registry.kindo.ai, and runs the four upgrade steps sequentially.

  1. preflight-charts — Verify every periphery and application chart is available in the registry at the target version. Fails fast if a chart is missing, before any cluster change.

  2. peripheries — helmfile sync group=peripheries upgrades Unleash, Qdrant, Presidio, Speaches, and Hatchet. Hatchet’s secret is pre-created with Helm ownership labels so its pre-install migration job can reference it.

  3. migrations — helmfile sync group=migrations runs the Prisma database schema migrations against every application database. Migrations are forward-only — a failure here is the point to pause and investigate, not to reroll.

  4. applications — helmfile sync group=applications upgrades every Kindo application service (api, next, litellm, credits, external-sync/poller, audit-log-exporter, cerbos, ssoready, nango, task-worker-ts, mcp-unified, sandbox). The install-state ConfigMap is then updated with the new version.

On completion the CLI prints Upgrade to <version> complete! and the state ConfigMap reflects the new version, so kindo status and any subsequent upgrade will see the correct baseline.
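
The fail-fast sequencing of those four steps can be sketched in a few lines of Python. This is an illustrative model only, not the CLI's internals; run_step is a hypothetical stand-in for the real step executor.

```python
# Illustrative sketch of the upgrade driver: run the four steps in order,
# stop at the first failure, and report which steps completed.
UPGRADE_STEPS = ["preflight-charts", "peripheries", "migrations", "applications"]

def run_upgrade(target_version, run_step):
    """run_step(step, version) -> bool is a stand-in for the real executor."""
    completed = []
    for step in UPGRADE_STEPS:
        if not run_step(step, target_version):
            raise RuntimeError(f"upgrade step {step!r} failed; completed so far: {completed}")
        completed.append(step)
    return completed
```

Because preflight-charts runs first, a missing chart aborts the upgrade before any cluster change, and a migrations failure leaves the applications step untouched.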

Customer Helm values are preserved

Customer-supplied Helm values — ingress host rules, replica counts, resource requests, node selectors, and anything else overlaid on top of the default chart values — are preserved across upgrades by _preserve_helm_values.

Before each upgrade step runs, the CLI calls helm get values <release> for every application and extracts the values the customer supplied at install or last upgrade. Those values are filtered (secret-related keys are excluded — they are handled by the secret-merge logic below) and deep-merged into the generated values files that helmfile consumes. The chart upgrade then re-applies the customer’s overlay on top of the new defaults.

The practical effect: an ingress that was customized for a split-domain deployment, an api replica count bumped to 4, or a custom resources.limits.memory for task-worker-ts all survive a kindo upgrade unchanged — no manual re-application needed.
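
In spirit, the preservation logic is a recursive dictionary merge. The sketch below is a minimal illustration of that behavior, not the actual _preserve_helm_values code; the value keys are examples taken from this page.

```python
def deep_merge(defaults, overrides):
    """Overlay customer-supplied Helm values on top of new chart defaults."""
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into nested maps
        else:
            merged[key] = value  # customer value wins outright
    return merged

# New chart defaults at the target version vs. the customer's overlay:
new_defaults = {"replicaCount": 1,
                "resources": {"limits": {"memory": "1Gi"},
                              "requests": {"cpu": "250m"}}}
customer = {"replicaCount": 4,
            "resources": {"limits": {"memory": "2Gi"}}}
```

Merging customer over new_defaults keeps replicaCount: 4 and the 2Gi memory limit while still picking up the new default CPU request.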

Per-application secrets are preserved on upgrade

When kindo upgrade rebuilds an application’s secret (api-env, litellm-env, next-env, and the rest), existing keys in the running secret are kept as-is. The target version’s defaults only fill in keys that are genuinely new.

What this protects:

  • Customer-set overrides — a custom SMTP_HOST, a hand-tuned NEXTAUTH_URL for a split-domain install, a manually rotated KEY_ENCRYPTION_KEY.
  • Domain-derived URLs edited to point to a non-default host.
  • Anything the CLI cannot re-derive deterministically — upgrade should never surprise an operator with a regenerated value.

If the target version introduces a new required secret key that the CLI cannot generate (an OAuth client secret for a new integration, for example), kindo upgrade prompts for it before continuing. The prompt happens once — the value is saved to kindo-secrets-config and reused on every subsequent upgrade.
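
The keep-existing, fill-new rule reduces to a one-way merge. A minimal sketch, with hypothetical key names standing in for real secret keys:

```python
def merge_secret(existing, target_defaults):
    """Existing keys always win; target-version defaults only add new keys."""
    merged = dict(target_defaults)
    merged.update(existing)  # running secret overrides regenerated defaults
    # Keys in the target defaults but absent from the running secret are the
    # "genuinely new" ones; empty values here would trigger the one-time prompt.
    new_keys = sorted(set(target_defaults) - set(existing))
    return merged, new_keys

running = {"SMTP_HOST": "smtp.corp.example", "KEY_ENCRYPTION_KEY": "rotated-key"}
defaults = {"SMTP_HOST": "smtp.example", "NEW_OAUTH_CLIENT_SECRET": ""}
```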

Upgrading a legacy Makefile install (reconstruction)

For clusters that were originally installed via the kindo-helm-e2e-dist Makefile (the pre-CLI install path), there is no kindo-secrets-config, no kindo-install-state ConfigMap, and no install-contract.yaml on any operator’s laptop. The CLI handles this transparently.

What reconstruction does:

  1. Detect missing state — SecretsStore.exists() returns false because the kindo-secrets-config Secret in kindo-system is absent.

  2. Prompt for confirmation — the CLI asks Reconstruct from live deployment?. Answer y to continue, n to exit.

  3. Extract secrets from per-app Secrets — reconstruct_secrets_from_cluster walks every known application namespace, reads the <app>-env Secret, and rehydrates the full centralized secrets dict. This includes registry credentials (from image pull secrets), database connection strings (per-service, from DATABASE_URL-style env vars), Redis/RabbitMQ/S3 bindings (from api-env), Hatchet tokens, generated API keys, and encryption keys.

  4. Detect the app version — the CLI reads helm list --all-namespaces -o json and parses the chart version (e.g., api-2026.03.4) to infer the currently-running version.

  5. Save to cluster — the reconstructed secrets land in kindo-secrets-config, and every install step is marked complete in the kindo-install-state ConfigMap.

  6. Generate config files — install-contract.yaml and environment-bindings.yaml are written to the current directory with per-service DB connection strings, ingress annotations, and storage configuration extracted from the cluster. Admin DB credentials are placeholders — operators fill these in only if they later need to run db-bootstrap for a new service.

  7. Continue as a standard upgrade — the preflight-charts, peripheries, migrations, and applications steps run as usual.

From the operator’s perspective, the entire legacy-to-CLI migration plus the version bump happens in a single kindo upgrade --version <new> --apply invocation. After the first upgrade completes, every subsequent upgrade is a standard upgrade — no re-reconstruction.
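
Step 4's version inference amounts to parsing the chart field out of the helm list output. A sketch under the naming convention shown above (api-2026.03.4); the actual CLI may be stricter.

```python
import json
import re

CALVER = re.compile(r"-(\d{4}\.\d{2}\.\d+)$")  # matches e.g. "api-2026.03.4"

def infer_app_version(helm_list_json):
    """Return the calendar version found in `helm list --all-namespaces -o json` output."""
    for release in json.loads(helm_list_json):
        match = CALVER.search(release.get("chart", ""))
        if match:
            return match.group(1)
    return None  # no release with a recognizable chart version

sample = '[{"name": "api", "namespace": "api", "chart": "api-2026.03.4"}]'
```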

If you want to inspect what reconstruction would extract without running an upgrade, use the standalone command:

Terminal window
kindo config reconstruct --dry-run

This prints the extracted install-contract.yaml and environment-bindings.yaml content to stdout and exits without touching the cluster.

MCP-only partial rebuild

Not every release bumps the entire Kindo stack. Some releases change only the MCP layer — new integrations, bug fixes in mcp-unified, a legacy MCP pod getting a new image tag — and in those cases there is no reason to run the full kindo upgrade against every application.

For MCP-scoped releases, do not run kindo upgrade. Instead, use kindo config edit to bump the version pin for the MCP(s) you want to update, then apply the integrations layer.

  1. Edit the contract with kindo config edit and update the MCP version(s) in the integrations: section:

    Terminal window
    kindo config edit
  2. Apply the integrations layer — this rolls only the MCP releases, leaving every other application at its current version:

    Terminal window
    kindo integrations apply

    This registers every enabled integration with Nango and deploys legacy MCP pods (one Helm release per legacy MCP, e.g., mcp-github, mcp-slack). If the enabled integrations all route through mcp-unified, no additional pods are deployed.
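
For orientation, a pinned entry in the integrations: section might look roughly like the fragment below. The field names are hypothetical, not the authoritative contract schema; the real shape is whatever kindo config edit shows for your install-contract.yaml.

```yaml
# Hypothetical sketch only -- field names are illustrative, not the real schema.
integrations:
  github:
    enabled: true
    version: 2026.04.0        # version pin bumped for an MCP-scoped release
    clientId: <oauth-client-id>
    clientSecret: <oauth-client-secret>
```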

Inspect what is currently deployed:

Terminal window
kindo integrations status

This shows whether mcp-unified is running and lists every legacy MCP Helm release along with its status and version.

To enable new integrations that ship in a release, enable them in the contract and apply:

Terminal window
kindo integrations enable github slack
kindo integrations apply

OAuth-capable integrations prompt for clientId and clientSecret interactively; the values are written to install-contract.yaml and deployed via Nango on the next apply.

Rollback

Rollbacks are a last resort — because database migrations are forward-only, there is no clean path back from a completed migrations step without restoring the database.

Scenario 1: a step failed before migrations ran

If kindo upgrade --apply fails in preflight-charts or peripheries, the database is untouched. Resolve the underlying issue (most commonly registry credentials or a missing chart at the target version) and re-run kindo upgrade --version <new> --apply — the step re-runs idempotently.

If the periphery charts are in a partially upgraded state and need rolling back, use helm rollback per release:

Terminal window
helm rollback hatchet -n hatchet
helm rollback qdrant -n qdrant
# ...etc. per periphery

Scenario 2: migrations or applications failed — full rollback needed

  1. Restore the database snapshot you took pre-upgrade (see Pre-upgrade checklist). Rolling back application charts without restoring the database will leave the app pointing at a schema it does not understand.

  2. Rollback Helm releases per application and per periphery:

    Terminal window
    for app in api next litellm credits external-sync external-poller \
        audit-log-exporter cerbos ssoready nango task-worker-ts \
        mcp-unified prisma-migrations; do
      helm rollback "$app" -n "$app" 2>/dev/null || true
    done
  3. Restore pre-upgrade secrets from the <app>-env-pre-upgrade backup created by the secret-merge step. For each app whose secret needs restoring:

    Terminal window
    kubectl get secret api-env-pre-upgrade -n api -o json \
      | jq 'del(.metadata.resourceVersion, .metadata.uid, .metadata.creationTimestamp)
            | .metadata.name = "api-env"' \
      | kubectl apply -f -
  4. Verify — run kindo status to confirm the install-state ConfigMap version, then smoke-test the UI, API, and agent execution per the checklist in Configure & Validate.

Pre-upgrade checklist

Run through every item before kindo upgrade --apply:

  • Database snapshots are current and restorable — verify your DB snapshot policy; run a test restore if possible.
  • Helm registry credentials in install-contract.yaml are valid — kindo install --plan or kindo upgrade --version <target> --plan; the prereq check surfaces login failures.
  • kindo upgrade --version <target> --plan is clean — no missing charts, all prereqs green, cluster reachable.
  • Maintenance window scheduled — broadcast the expected duration based on prior upgrades; peripheries and applications each have 900s timeouts.
  • Current CLI version matches the target track — kindo --version; upgrade the CLI binary if the target release bumps the CLI.
  • Outbound egress to registry.kindo.ai is open from the cluster — test with helm registry login registry.kindo.ai from a node or bastion.
  • No concurrent kindo install --resume or kindo config apply is running — both touch cluster state.

Post-upgrade checklist

After Upgrade to <version> complete!:

  • kindo status reports the new version.
  • All pods are Ready in every namespace — kubectl get pods -A | grep -v Running should return nothing after startup (aside from Completed job pods).
  • Prisma migrations completed without error — kubectl logs -n prisma-migrations -l app.kubernetes.io/name=prisma-migrations.
  • Hatchet workflow engine is healthy — kindo status plus Hatchet pod logs; the hatchet-client-config Secret should be present.
  • Smoke test the UI, chat, and agent execution per Configure & Validate.
  • No new secret keys are missing from <app>-env — kindo upgrade --plan notifies you of any new required secret keys the target version introduces.
  • Pre-upgrade secret backups are in place — kubectl get secret -A | grep env-pre-upgrade lists one per app touched by the upgrade.
  • Observability dashboards are green — error rates, latency, queue depth; baseline against pre-upgrade values.

Keep the <app>-env-pre-upgrade backups until you have soaked the new version long enough that rollback is no longer a live option. Delete them once you are ready to commit.

Next steps