Deployment Overview

Kindo is available in two deployment models. Both provide the full feature set — agents, integrations, governance, and AI chat — with differences in hosting, data residency, and operational responsibility.

Deployment Models

SaaS (Cloud-Hosted)

Kindo manages the infrastructure. You sign up, sign in, and start working.

| Aspect | Details |
| --- | --- |
| Hosting | Kindo-managed cloud infrastructure |
| Updates | Automatic — always on the latest version |
| Data residency | Kindo-managed (US-based) |
| Models | 26+ models available immediately |
| Setup time | Minutes |
| Operational burden | None — Kindo handles infrastructure, scaling, and availability |

Best for: Teams that want to get started quickly without managing infrastructure.

Get started: Sign in and explore

Self-Hosted (On-Premises)

You deploy and manage Kindo in your own infrastructure using Kubernetes and Helm charts.

| Aspect | Details |
| --- | --- |
| Hosting | Your infrastructure (AWS, GCP, Azure, on-prem Kubernetes) |
| Updates | You control the upgrade cadence |
| Data residency | Your environment — data never leaves your network |
| Models | Bring your own models, including air-gapped DeepHat |
| Setup time | Hours to days depending on infrastructure readiness |
| Operational burden | You manage Kubernetes, databases, and service health |

Best for: Organizations with strict data residency, compliance, or air-gap requirements.

Get started: Self-Hosted Overview
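As an illustrative sketch only, a Helm-based install typically looks like the following. The repository URL, chart name, and values file here are hypothetical assumptions, not Kindo's published artifacts — see the Self-Hosted Overview for the actual installation steps.

```shell
# Hypothetical example: repo URL, chart name, and values file are
# illustrative placeholders, not Kindo's real artifacts.
helm repo add kindo https://charts.example.com/kindo
helm repo update

# Install into a dedicated namespace, pinning the chart version so you
# control the upgrade cadence (as noted in the table above) instead of
# automatically tracking the latest release.
helm install kindo kindo/kindo \
  --namespace kindo \
  --create-namespace \
  --version 1.2.3 \
  -f values.yaml
```

Pinning `--version` and keeping environment-specific settings in a `values.yaml` under version control is the usual pattern for teams that need reproducible, auditable upgrades.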

Feature Comparison

Both deployment models include the same core features:

| Feature | SaaS | Self-Hosted |
| --- | --- | --- |
| AI Chat | Yes | Yes |
| Agents (Chatbot, Workflow, Trigger) | Yes | Yes |
| Integrations (MCP) | Yes | Yes |
| DLP and Governance | Yes | Yes |
| RBAC | Yes | Yes |
| Audit Logging | Yes | Yes |
| SSO | Yes | Yes |
| API Access | Yes | Yes |
| DeepHat | Hosted | Self-hosted on your GPUs |
| Custom Model Hosting | No | Yes |
| Air-Gap Support | No | Yes |

Choosing a Model

Consider self-hosted deployment if you need:

  • Data sovereignty — All data stays in your environment
  • Regulatory compliance — Deploy into FedRAMP, PCI-DSS, or GDPR-compliant environments
  • Air-gapped operation — Run without internet connectivity
  • Custom models — Host proprietary or fine-tuned models on your own GPUs

Self-Hosted Deployment Path