1. What is Kindo?
Kindo is an AI-native automation platform for enterprise technical operations, including SecOps, DevOps, and ITOps teams. It uses intelligent AI agents and a domain-tuned large language model (LLM) to offload repetitive toil and execute operational workflows end-to-end at high speed. Kindo helps eliminate the brittle scripts and siloed tools that have plagued ops teams by providing a unified agentic system that can run your runbooks, enforce security policies, and respond to incidents in real time. Kindo is model-agnostic and compatible with 26+ LLMs. Our agents can secure infrastructure and respond to threats at GPU speed, all while keeping your data private and compliant. Everything you do in the UI is available through a public API, so you can create agents, trigger runs, manage approvals, fetch logs, and wire Kindo into your own apps and pipelines.
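As an illustration, driving Kindo from a script might look like the sketch below. The endpoint path, field names, and header are assumptions for illustration only, not documented API surface:

```python
# Hypothetical sketch of triggering an agent run via Kindo's public API.
# The route, payload fields, and KINDO_API_KEY name are all illustrative.
import json

def build_trigger_request(agent_id: str, goal: str) -> dict:
    """Assemble the run-trigger request an app or pipeline might send."""
    return {
        "method": "POST",
        "path": f"/v1/agents/{agent_id}/runs",   # hypothetical route
        "headers": {"Authorization": "Bearer $KINDO_API_KEY"},
        "body": json.dumps({"goal": goal, "require_approval": True}),
    }

req = build_trigger_request("patch-bot", "Patch CVE-2024-0001 on the web fleet")
```

The same pattern would apply to fetching logs or approving a held action: a plain HTTPS request per operation, mirroring what the UI does.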
2. How is Kindo different from chatbot-based automation platforms?
Kindo is a full agentic automation platform. Most chatbot-based tools simply give you an AI assistant that can answer questions or summarize information, but they stop short of taking real action. Kindo, by contrast, was built AI-first for enterprise operations: its agents don’t just chat, they autonomously make decisions and take actions in your systems. Unlike basic bots that might alert a human to an issue, Kindo can actually orchestrate multi-step workflows across your tech stack to resolve the issue, with appropriate approvals. It’s like moving from a passive advisor to an active team member. Instead of scripting data integrations and normalization by hand, you simply connect your integrations and prompt in natural language.
3. What is an AI-native agent and how does it work in Kindo?
An AI-native agent in Kindo is essentially an intelligent software assistant that can understand natural language, reason through a task, and perform actions on your behalf. These agents encapsulate the routines or playbooks that ops teams normally follow. What makes Kindo’s agents unique is that they can read and interpret human-language instructions or documents, then translate that intent into programmatic actions (API calls, scripts, etc.). When you deploy an agent, you might give it a goal or let it trigger off an event. The agent will then plan the steps needed, call the right tools via integrations, and adapt as needed to achieve the goal. For example, a Kindo agent could detect a new vulnerability report, identify the affected servers, verify that they are actually impacted, and then automatically apply a patch or open a ticket, all in one continuous, AI-driven flow.
4. What is Deep Hat and why is it tuned for technical operations?
Deep Hat is Kindo’s proprietary LLM tailored specifically for DevSecOps, SecOps, and infrastructure tasks. Unlike a general-purpose model, Deep Hat has been trained on thousands of real-world IT incidents, cybersecurity scenarios, code logs, and CLI syntax, so it speaks the language of cloud infrastructure and security teams. This tuning means it excels at tasks like reading firewall configs, demonstrating an exploit, or explaining root causes in a deployment failure. Deep Hat delivers faster inference and deeper infrastructure awareness than standard models, giving more accurate and relevant answers for technical work. It’s also designed for adversary simulation and red teaming, which helps in security workflows.
5. Can Kindo run without internet access?
Yes, Kindo supports on-premise deployments with no internet connection required. You can deploy Kindo entirely in your own environment (your data center or private cloud) so that all data, models, and operations stay behind your firewall. In this self-managed mode, Kindo doesn’t rely on any external cloud services; even the AI models (like Deep Hat or other supported models) can run locally on GPUs or your infrastructure. Many customers choose this option for security or compliance reasons. In practice, once you install Kindo on your internal network, it can operate completely offline: the agents will use your internal data sources, and the platform won’t call out to the internet. (Of course, Kindo also offers a cloud-hosted version for those who prefer SaaS, but on-prem is fully available.)
6. What is the Model Context Protocol (MCP) and how does Kindo use it?
The Model Context Protocol (MCP) is an open standard (introduced by Anthropic in late 2024) that allows AI models/agents to connect with external tools and data in a uniform way.
Think of MCP as a kind of universal adapter for AI.
Instead of writing a custom integration for every single API or database, an MCP-compatible agent can talk to any MCP-enabled service using a standard JSON message format. It’s often described as a “USB-C port” for AI, because it replaces dozens of custom adapters with one common interface.
Kindo embraces MCP under the hood to integrate with virtually any system. The platform provides a fleet of MCP integrations (both native and auto-generated) to popular tools, source control, cloud services, ticketing systems, SIEMs, databases, even generic shell environments.
This means a Kindo agent can, for example, fetch data from Jira or trigger an AWS Lambda via the MCP protocol without needing a bespoke plugin for each. MCP is the technology that lets Kindo’s AI agents plug into all your existing tools through a consistent interface. Kindo acts as the hub that speaks MCP on one side and translates to the specific tool API on the other.
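Concretely, MCP messages are JSON-RPC 2.0. A tool invocation, such as asking a Jira-backed MCP server for an issue, looks roughly like this (the tool name and arguments are illustrative, not a specific server's schema):

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build the JSON-RPC 2.0 message MCP uses to invoke a tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",  # standard MCP method for tool invocation
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical tool exposed by a Jira MCP server:
msg = mcp_tool_call(1, "jira_get_issue", {"issue_key": "OPS-123"})
```

Because every service speaks this same envelope, adding a new tool means exposing it over MCP rather than writing another bespoke client.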
7. How does Kindo secure MCP connections?
Kindo takes security extremely seriously when connecting AI agents to external tools via MCP. By design, Kindo sits as a governance layer between the AI and your tools. No agent in Kindo ever gets free, direct access to your systems. Instead, every command or request goes through Kindo’s platform where it undergoes security checks and policy enforcement.
Some important security measures include:
• Data Loss Prevention (DLP)
All data passing through Kindo, both prompts and responses, can be scanned for sensitive information. Kindo will automatically redact or block anything disallowed (e.g. secrets or customer PII) from leaving the system. This means Kindo’s DLP prevents sensitive data from getting pulled into any LLM from a connected integration.
• No Raw Credentials Exposure
Kindo ensures the LLM never sees sensitive credentials. If an agent needs to call an external API, it doesn’t get the API key directly. Instead, Kindo’s integration layer injects a fresh credential, decrypted from the vault, at execution time without ever revealing it to the model. So even if the agent’s context were somehow leaked, your passwords and API keys remain safe.
• Role-Based Access Control (RBAC)
Administrators can define exactly which actions an AI agent is allowed to perform on which systems. Kindo will only permit tool invocations that comply with these role permissions and policies. This prevents an AI from executing commands it shouldn’t (for example, an agent limited to read-only monitoring won’t be allowed to run deletion commands). When Kindo end users connect an integration to an agent, the agent generally inherits the user’s authorization, ensuring the agent cannot escalate privileges.
• Whitelisted Integrations
By default, Kindo doesn’t let agents arbitrarily connect to any random MCP server out on the internet. All integrations (MCP integrations) are explicitly set up and vetted by your team. This prevents an agent from being tricked into loading a malicious third-party tool. Only approved, trusted MCP connections are allowed, closing the door to untrusted external servers.
• Detailed Audit Logging
Every action an agent takes through MCP, including each tool invocation and its result, is recorded in a detailed audit log. Security teams can review these logs to see exactly what the AI did, which tools it accessed, and what the results were. This ensures accountability and makes it easier to investigate or roll back anything if needed.
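The credential-injection pattern from the list above can be sketched in a few lines. The placeholder syntax and vault structure here are illustrative assumptions, not Kindo's actual mechanism:

```python
# Sketch of credential injection: the model emits a request containing only
# a placeholder; the platform substitutes the real secret at execution time.
VAULT = {"jira_token": "s3cr3t-value"}   # stands in for an encrypted vault

def resolve_placeholders(request: dict) -> dict:
    """Swap {{secret:NAME}} placeholders for vault values just before the call."""
    resolved = {}
    for key, value in request.items():
        if isinstance(value, str) and value.startswith("{{secret:"):
            name = value[len("{{secret:"):-2]
            resolved[key] = VAULT[name]      # the model never saw this value
        else:
            resolved[key] = value
    return resolved

# What the model produced (no real secret present):
model_request = {"url": "https://jira.example.com/rest/api/2/issue",
                 "auth": "{{secret:jira_token}}"}
executed = resolve_placeholders(model_request)
```

The key property is that the resolved request exists only inside the integration layer; the model's context retains just the placeholder.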
8. Can Kindo connect to third-party MCP servers?
Yes, but with caution and controls. Kindo’s architecture can integrate with external third-party MCP servers, provided they meet your governance requirements. This isn’t something an end user AI can just do on the fly; an admin would explicitly configure the connection. The reason for caution is security: not all MCP servers out in the wild are trustworthy, and allowing an agent to plug into an unknown service could be dangerous.
Kindo mitigates this by requiring that any external MCP integration be whitelisted and approved in the platform. If you have a third-party MCP server, you can connect it to Kindo, but you’ll do so through Kindo’s interface; Kindo will then treat it like any other tool, applying the same RBAC, DLP, and audit controls to interactions with that server.
Most Kindo deployments use the platform’s built-in integrations or host their own MCP servers for internal tools. However, if there’s a compelling external MCP service your organization wants to use, Kindo can accommodate it; you just need to ensure it’s vetted.
Kindo will never let the AI agent randomly decide to connect to some unknown URL or MCP server without oversight. Everything has to go through the governance layer. So, while third-party MCP integrations are possible, they come under the same governance umbrella: you maintain control, and security checks remain in place. This way, you can extend Kindo’s reach to external services safely. (This contrasts with a raw MCP approach, where an AI might discover and use any tool it finds; Kindo won’t permit that unless you say it’s okay.)
9. What types of tasks can Kindo agents perform out of the box?
Kindo agents are flexible and can handle many different tasks across SecOps, DevOps, and ITOps. Here are a few examples of tasks you can start doing right now with Kindo:
• Vulnerability Management
Agents can continuously scan for new vulnerabilities affecting your systems (this includes red-teaming tasks) and even apply patches or mitigations automatically. For instance, a Kindo agent could detect a critical CVE in your Linux servers and submit a pull request with the fix, trigger a patch rollout, or rebuild containers across your fleet.
• Identity and Access Enforcement
Kindo can audit IAM roles, cloud permissions, and user accounts to enforce least privilege. An agent might review all AWS IAM roles and flag (or auto-correct) those with overly broad access, or disable stale accounts, ensuring your access policies stay tight.
• Incident Detection and Response
Agents can perform incident triage and tracing. For example, upon an alert (high CPU, suspicious login, etc.), an agent could gather related logs from Splunk, correlate them to find the root cause, contain the issue (e.g. isolating a server or killing a process), and notify the on-call team with a summary. This accelerates response and reduces manual digging during outages or security incidents.
• Infrastructure Automation
Kindo agents can execute routine ops tasks like provisioning or updating infrastructure. Need to rotate certificates, clean up S3 buckets, or apply a config change across 100 VMs? An agent can do that in one go. They effectively act like an Ops bot running your playbooks (Terraform, Ansible, etc.) but with intelligence – ensuring steps succeed and adjusting if necessary.
• Ticketing and Workflow Orchestration
Out of the box, Kindo integrates with ITSM tools (like Jira, ServiceNow). Agents can automatically escalate tickets or create new ones based on conditions. For example, if an incident meets certain criteria, the agent can open a high-priority ticket and even populate it with all relevant information (logs, diagnostics) it gathered. Conversely, agents can read tickets (or chat channels) to kick off workflows – e.g. see a Slack message “user X is locked out” and trigger an unlock/reset process.
• Compliance and Audit Tasks
Kindo can regularly check configurations against compliance benchmarks (CIS standards, internal policies). An agent might run daily audits of cloud resource settings (ensuring encryption is enabled, ports closed, etc.) and either auto-remediate or report violations. It can also gather evidence needed for audits (like pulling access logs or control settings) without you manually doing it.
10. Do I need to rewrite my runbooks or playbooks to use Kindo?
No, you do not need to throw away or rewrite your existing runbooks/playbooks.
Kindo is built to augment and leverage what you already have, not replace it outright. Many Kindo agents essentially codify the same steps that exist in your current manual procedures or automation scripts. The difference is that Kindo allows you to run those procedures via natural language and AI decision-making.
Here’s how it works:
You can take an existing runbook, like a wiki page that describes how to handle a server outage, and use it as the basis for an agent’s logic. Kindo’s AI can read human-written instructions and follow them. You also have the option to directly call external scripts or tools within an agent – for example, if you have a bash script that does cleanup, a Kindo agent can execute that via a connector rather than you having to recode it. Kindo integrates with your tooling, so an agent can trigger those existing pipelines and scripts.
Kindo provides a no-code interface to define workflows, so you might refactor some manual steps into that format, but this doesn’t mean starting from scratch. It’s more like orchestrating your current playbook with AI: you set up the steps or use provided templates, and let the agent handle the heavy lifting. You can also feed your knowledge base or past incident reports into Kindo; the AI will use that context rather than requiring you to encode it line by line.
11. How does human-in-the-loop control work in Kindo?
Kindo is designed with human oversight in mind.
You can configure human-in-the-loop checkpoints so that an AI agent must get approval before taking certain actions. In practice, this works via Kindo’s interface (or chat UI): when an agent reaches a step that requires sign-off, it will pause and ask for approval, presenting the proposed action to a human operator. The operator can then review and approve or deny with a single click.
Beyond approvals, Kindo also supports escalation paths. That means if an agent is unsure how to proceed or gets a denial, it could escalate the issue – for instance, by handing off to a human operator with all the context gathered so far, or by notifying a higher-level oversight team. Everything the agent does and every approval request is logged, so you have a full audit trail of who approved what and when.
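A human-in-the-loop checkpoint reduces to a simple gate: hold risky actions until an operator decides, and escalate on denial. This is a minimal sketch of the pattern, with illustrative names rather than Kindo's actual API:

```python
# Sketch of an approval gate: actions tagged as requiring sign-off are held
# until a human approves; a denial routes to an escalation path instead.
def run_step(action: str, risky: bool, approver):
    """Execute an action, pausing for human approval when it is risky."""
    if risky:
        decision = approver(action)        # present the proposed action
        if decision != "approve":
            return ("escalated", action)   # hand off with full context
    return ("executed", action)

# A stand-in operator who approves restarts but nothing destructive:
operator = lambda action: "approve" if "restart" in action else "deny"
status, action = run_step("restart api-gateway", risky=True, approver=operator)
```

In Kindo the approver is a person clicking approve/deny in the UI or chat, and every decision lands in the audit trail.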
12. What models can I run inside Kindo?
Kindo is model-agnostic and supports a wide range of AI models to run your tasks.
Out of the box, it comes with Deep Hat, Kindo’s own DevSecOps-optimized LLM, but you can also bring your choice of third-party models. This includes popular models from OpenAI like GPT-4 and GPT-3.5, Anthropic’s Claude, Google’s Gemini, open-source models like Meta’s Llama and Mistral, and more.
Importantly, you’re not locked into one vendor’s AI.
For example, you might use GPT-4 for one type of task but switch to an open-source model for another, all within the same Kindo platform. The agents can be configured to use whichever model best suits the job. All these models operate under Kindo’s secure execution framework, meaning even if you’re using a cloud API model, Kindo still wraps it with DLP and audit logging, etc., for safety.
13. Can I run Kindo on premises?
Absolutely. Kindo offers flexible deployment options, including fully on-premises or in your private cloud.
Many enterprise customers deploy Kindo self-managed on their own infrastructure – whether that’s on your own bare-metal servers, within your VMware/OpenStack environment, or in a VPC in AWS/Azure that you control. The platform is built to be cloud-agnostic and containerized for easy installation (e.g., via Kubernetes/Terraform).
When running on-prem, you maintain full control over all data, models, and operations. All the AI model inference happens on hardware you manage or in a network you trust.
This is important for meeting certain compliance or data residency requirements. Kindo’s on-prem version still includes all the features: the agent framework, Deep Hat if you want to run it locally, the integrations, and so on. Customers can optionally enable phone-home telemetry for stats gathering, or instead download a report from within the app each month and send it to us.
If you don’t want the overhead of managing it, Kindo does offer a SaaS hosted version. You could start there for a pilot and later move on-premises.
In terms of installation effort, Kindo provides deployment scripts and documentation to spin it up on-prem. It runs on a Kubernetes cluster, deployed as a set of containers managed via Helm charts. Terraform plans are included for major cloud providers for ease of deployment. Once deployed, you access it via a web interface just like the cloud version.
So yes, you can run Kindo on premises – it was designed with this in mind for security-conscious customers. Whether on your own metal or isolated cloud, you get the same functionality but in an environment fully controlled by you.
14. What security measures are built into Kindo?
Kindo has been designed from the ground up with a strong emphasis on enterprise-grade security. It incorporates multiple layers of advanced security features:
• Fine-Grained Access Controls (RBAC)
Kindo has role-based access control for both users and agents. You can define which team or role is allowed to run which agents or access certain integrations. This ensures that, for example, only authorized individuals can trigger an agent that makes production changes. All access is centrally managed and can tie into Single Sign-On.
• Data Loss Prevention (DLP)
The platform provides built-in DLP mechanisms that monitor all data going into and coming out of the AI. If an agent’s prompt contains sensitive information (like credentials, personal data, etc.), Kindo can automatically block or redact it. This prevents accidental leakage of secrets or regulated data through the AI’s outputs. Essentially, it keeps the AI from spilling what it shouldn’t.
• Audit Logging & Traceability
Every action and decision in Kindo is logged. There’s a detailed audit trail of which agent did what, which tool was called, who approved an action, and what the outcome was. These logs are invaluable for security audits, compliance (e.g. proving controls for SOC 2/ISO 27001), and incident investigation. If something goes wrong, you can replay exactly what the AI did step by step.
• Compliance Alignment
Kindo aligns with major security frameworks and offers features to help with compliance (RBAC, SSO, logging, etc.). While not a measure per se, it means the product was built to make passing audits easier – evidence generation and enforcement points are baked in.
• Encryption and Data Security
All data stored in Kindo (conversation logs, configurations, secrets) is encrypted at rest, and all communication is encrypted in transit (using TLS). If you run Kindo on-prem, it works with your encryption and key management policies. Kindo also isolates execution (see “secure sandbox” below) so that any code run by the AI is in a contained environment, adding an extra layer of protection. A dedicated secrets manager ensures sensitive credentials are stored securely and only provided to agents on a need-to-know basis.
• Policy Enforcement & Guardrails
You can define custom policies – for example, limiting an agent’s actions to read-only mode. Kindo’s policy engine will enforce these rules. Out-of-the-box, guardrails like preventing certain dangerous commands or requiring approvals for specific tasks are available. This stops the AI from doing anything outside the bounds you set.
15. Does Kindo support compliance like SOC 2, ISO 27001, or FedRAMP?
Yes. Kindo supports enterprise compliance programs.
Kindo is SOC 2 Type 2 compliant and supports frameworks like ISO/IEC 27001, GDPR, and NIST CSF. Many core features map directly to required controls. Access control is enforced with RBAC and SSO. Events are captured with comprehensive audit logs. Data is protected with encryption and DLP. These capabilities align with ISO 27001 and similar standards and help check those boxes.
FedRAMP is more involved because it is a U.S. government cloud standard. Kindo is not publicly listed as FedRAMP-authorized as of 2025. Customers can deploy Kindo in an on-prem environment or within a FedRAMP-approved cloud such as AWS GovCloud under their own controls. Because Kindo can run fully self-managed, this approach is feasible. Encryption, logging, and access controls support the strict requirements up to FedRAMP High and can aid an Authority to Operate process.
To clarify scope, we are audited for SOC 2 Type 2. We are not audited for ISO 27001, NIST RMF, NIST RMF AI 1.0, ISO 42001, or FedRAMP. While we do not offer a dedicated “compliance management” module for those frameworks, Kindo assists compliance teams in two ways. First, the platform helps write and refine policies and documentation, assess existing materials, map controls within and across frameworks (for example, SOC 2 to ISO 27001), and identify gaps. Second, Kindo integrates via API with compliance management products like Vanta to orchestrate and manage tasks those systems define.
16. How does Kindo integrate with my existing tools?
Kindo integrates via a universal connector system that covers a vast array of development and operations tools you likely use. Under the hood, Kindo uses the MCP standard and its integration layer to talk to other systems.
What this means practically is that Kindo can plug into your source code repositories (GitHub, GitLab), your CI/CD pipelines (Jenkins, GitHub Actions), your cloud platforms (AWS, Azure, GCP APIs), container orchestration, your ITSM/ticketing tools (Jira, ServiceNow), your monitoring and logging systems (Datadog, Splunk, CloudWatch), databases, message queues, and even custom internal tools.
The integration process is usually straightforward: from the Kindo interface, you go to the integrations section and add a new integration. For example, to connect GitHub, you might provide an access token and the scope (repos) the agent can use. To connect Slack, you’d authorize Kindo’s app to your workspace via OAuth 2.0. Kindo supports a wide variety of authentication methods for integrations. Kindo uses those credentials to maintain a secure connection to the service.
Once connected, these tools become available to Kindo agents. So you can ask something like “List open critical Jira tickets” and the Kindo agent will use the Jira integration to fetch that information. Or “Restart the failed EC2 instances” and it will use the AWS connector. Over 200 pre-built integrations exist, and Kindo can interface with any API given the right configuration.
Kindo’s integrations are bi-directional: they can pull data in (like fetching data from Splunk) and push actions out (like creating a ticket in ServiceNow). Agents can chain multiple integrations in one workflow (e.g. detect an alert from Datadog, then create a Jira ticket and page on-call via PagerDuty, all in one go). Integrations can also push data into Kindo as triggers that agents react to autonomously.
Kindo integrates with pretty much whatever you’ve got, from version control to cloud to IT service management. Setup typically just involves providing credentials or API access. Once done, Kindo’s agents can read from and write to those systems in their workflows, effectively making your toolchain operate like one cohesive system.
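Chaining integrations, as described above, amounts to piping one connector's output into the next. A toy sketch with stub functions standing in for the real Datadog, Jira, and PagerDuty integrations:

```python
# Stub connectors: each stands in for a real Kindo integration call.
def fetch_alert():                       # e.g. Datadog connector
    return {"id": "alrt-42", "severity": "critical", "host": "db-1"}

def create_ticket(alert):                # e.g. Jira connector
    return {"ticket": "OPS-7",
            "summary": f"{alert['severity']} on {alert['host']}"}

def page_oncall(ticket):                 # e.g. PagerDuty connector
    return f"paged on-call with {ticket['ticket']}"

# One workflow: alert in, ticket created, on-call paged, all in one go.
alert = fetch_alert()
result = page_oncall(create_ticket(alert)) if alert["severity"] == "critical" else None
```

In Kindo the agent composes this chain itself from your intent; you do not hand-write the glue code.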
17. How does agentic execution differ from automation scripts?
Agentic execution refers to an AI agent’s ability to autonomously plan and perform a series of actions to achieve a goal, adapting along the way, much like a human operator would. This is quite different from a traditional automation script.
Here are the key differences:
• Dynamic vs. Static
A script (like a Bash or Python script, or even a playbook) is static and linear – it does exactly what it’s programmed to do, in a fixed sequence. If something unexpected happens that the script doesn’t anticipate, it typically fails or stops. In contrast, an agentic AI dynamically figures out the steps needed. It can branch, loop, or change strategy based on live feedback. The agent has a kind of decision-making loop: it looks at the result of each step and decides the next step, rather than just following a pre-written list. In Kindo, you create a controlled environment and guidelines for the agent to work within, rather than a rigid sequence of steps to execute.
• Understanding Intent
With scripts, you have to explicitly code the intent (“do X, then do Y, then if Z happens do Q…”). With an agent, you often just express a high-level intent (“resolve this incident” or “optimize this config”) and the AI figures out the procedure. The Kindo agent will plan multiple steps and even consult intermediate results to adjust its plan. For example, if step 2 fails, a script might just error out; an agent could recognize the failure, try an alternative, or ask for clarification.
• Multi-Step Autonomy
Agentic execution means the AI can handle an entire workflow chain on its own. Instead of you running one script for step 1, then feeding output to another script for step 2, etc., the agent can do the whole thing in one flow. It’s aware of the big picture goal and can carry context from step to step (thanks to memory and the LLM’s reasoning). It’s like having an intelligent coordinator that not only runs tasks but also knows why it’s doing them and when it’s done.
• Explainability and Collaboration
Another angle – an agent can explain its steps and reasoning in natural language. It’s like pair-programming with an AI. A script doesn’t tell you why it’s doing step 3; a Kindo agent could (in a verbose mode) explain “I’m restarting the service because the previous log search showed a null pointer exception, indicating a likely transient issue.” This makes it easier for humans to collaborate with automation. Traditional scripts lack that interactive element.
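The difference between the two styles can be sketched as a loop: a script is a fixed list of statements, while an agent repeatedly observes results and chooses the next step until the goal is met. The decide function below is a trivial stand-in for the LLM's reasoning:

```python
# Generic agent loop: observe results so far, decide the next action,
# act, and repeat until the goal is met (or a step budget runs out).
def agent_loop(goal_met, decide, act, max_steps=10):
    history = []
    for _ in range(max_steps):
        if goal_met(history):
            return history
        step = decide(history)            # choose the next action from results so far
        history.append((step, act(step)))
    return history

# Toy run: keep retrying a flaky restart until it reports success.
outcomes = iter(["failed", "failed", "ok"])
history = agent_loop(
    goal_met=lambda h: bool(h) and h[-1][1] == "ok",
    decide=lambda h: "restart" if not h or h[-1][1] == "failed" else "verify",
    act=lambda step: next(outcomes),
)
```

A static script attempting the same task would have run `restart` exactly once and exited on the first failure; the loop adapts to each observed result.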
18. How does Kindo handle sensitive data?
Kindo is built to protect sensitive data throughout every stage of an AI workflow. Here are the primary ways it ensures the secure management of confidential information:
• Data Stays in Your Control
If you have strict data privacy needs, you can deploy Kindo in an on-premises or private cloud environment so that no sensitive data ever leaves your infrastructure. Unlike some SaaS AI tools that send your data to third-party servers, Kindo allows all processing (including model inference) to happen in an environment you manage.
• No Exposure to AI Systems
Kindo never exposes raw secrets (passwords, API keys) to the AI models. If an agent needs to use a credential, that secret is fetched from a secure vault and injected at the moment of use, but the AI doesn’t see it. So the model can’t accidentally output it or be manipulated into revealing it. If you choose to use third-party LLM providers, they can’t log or train on your credentials. All secrets are stored encrypted in a vault, and access to them is tightly controlled.
• Data Encryption and Isolation
All data at rest in Kindo (like logs, agent conversation history, etc.) is encrypted using strong encryption. In transit, data is sent over HTTPS/TLS. Additionally, when the AI needs to execute code or analyze data, it runs in Kindo’s secure sandbox (see below) which is a temporary environment isolated from other systems. This means any sensitive data processed there has no route to leak out except through the controlled channels that Kindo governs. Sandboxes are destroyed after inactivity, and never contain credentials.
• DLP Filters
Kindo incorporates data loss prevention filters that act in real-time on the AI’s inputs and outputs. If, say, a user accidentally asks an agent a question and includes a confidential snippet of code or a private key, the DLP can catch that and mask it before it goes to the LLM. Similarly, if an agent’s answer were to include some sensitive information, Kindo could redact it before showing it. These filters are customizable to your definitions of sensitive data (PII, secrets, etc.).
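A DLP filter of the kind described above reduces to scanning text going to or from the model and masking anything matching sensitive patterns. This minimal sketch uses two regexes; a real deployment would use far richer detectors:

```python
import re

# Each pattern pairs a detector with its replacement mask.
PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "[REDACTED:aws-key]"),   # AWS access key shape
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED:ssn]"),  # US SSN shape
]

def redact(text: str) -> str:
    """Mask sensitive matches before text reaches (or leaves) the LLM."""
    for pattern, mask in PATTERNS:
        text = pattern.sub(mask, text)
    return text

clean = redact("key AKIAABCDEFGHIJKLMNOP leaked for user 123-45-6789")
```

The same function can sit on both sides of the model: once on the prompt, once on the response.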
19. What governance features does Kindo include?
Kindo includes a strong set of governance and oversight features to ensure that the AI agents operate within your organization’s rules. Here are the main governance features:
• Central Policy Management
Kindo provides a console where you can define and manage policies for AI behavior. This includes things like which agents are allowed to use which integrations, what hours certain automations can run, and what kinds of actions require approval. You essentially set the guardrails here – e.g., “Agent X can never delete resources” or “Agent Y can run in read-only mode unless a manager approves escalation.”
• Role-Based Access Control (RBAC)
Governance starts with controlling access. Kindo’s RBAC not only governs human users (who can use or configure the system) but also agent roles. You can assign permissions to agents similar to how you would to a user. For example, an “ITOps Agent” role might have access to VMware and AD integrations but not to Finance databases. Kindo ensures agents cannot step outside the permissions of their role.
• Approval Workflows
Kindo allows you to insert human approval steps into any agent workflow. From a governance perspective, this is huge – it means you can enforce that “any action affecting production must be approved by a human with Ops Manager role.” You configure these rules, and the platform will automatically route the approvals and log the outcomes.
• Auditing and Logs
Every agent action, every decision, and every human override is logged in an audit trail. Governance means being able to prove and trace what happened. Kindo’s logs show time-stamped records of events (with rich detail, often in JSON). There’s also a UI to review past runs of agents, see what steps they took, and even replay their reasoning if needed. This satisfies auditors and gives internal security teams comfort that they can always investigate the AI’s behavior.
• Evaluations, Monitoring, and Blocking
For all LLM inference, Kindo supports monitoring both input prompts and model outputs, running evaluators to produce scores, connecting external monitoring systems, and blocking LLM requests or responses based on evaluation results. This is useful for tracking agent quality metrics in aggregate, blocking prompt-injection attempts, moderating input content, bringing your own DLP, and preventing LLM responses from containing unwanted information.
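The RBAC idea above reduces to checking each proposed tool call against a role's allow-list before execution. A minimal sketch with illustrative role and permission names:

```python
# Each agent role carries a set of allowed (integration, action) pairs.
ROLES = {
    "itops-agent":   {("vmware", "read"), ("vmware", "restart"), ("ad", "read")},
    "monitor-agent": {("datadog", "read")},   # a read-only monitoring role
}

def is_allowed(role: str, integration: str, action: str) -> bool:
    """Gate every tool invocation against the role's permissions."""
    return (integration, action) in ROLES.get(role, set())

allowed = is_allowed("monitor-agent", "datadog", "read")
blocked = is_allowed("monitor-agent", "vmware", "restart")
```

Any invocation that fails this check never reaches the tool, and the attempt itself can be logged for audit.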
20. How fast can I get Kindo up and running?
Kindo is designed for quick deployment and fast time-to-value. The speed to get started generally depends on your deployment choice:
• Cloud (SaaS) Deployment
If you opt for Kindo’s hosted cloud service, you can get started within minutes. It’s as simple as creating an account (or having your org admin set one up), and then logging in via the web UI. From there, you can immediately begin connecting integrations (which often just involve OAuth or plugging in an API key). Many users are able to run their first agent on the same day they sign up. The onboarding will guide you through creating an agent or using a template.
• On-Premise Deployment
For self-hosting Kindo, you’ll need a bit more time to install it in your environment. Kindo provides Terraform scripts, Helm charts, or Docker Compose files to simplify this. Spinning up Kindo on a Kubernetes cluster could take a few hours for a basic setup – essentially, it’s about as complex as deploying a typical web application that has a few microservices. You might allocate a day or two for a proof-of-concept deployment if going on-prem, which includes configuring your servers, storage, etc.
• Initial Configuration
Once the platform is running (cloud or on-prem), the main setup task is connecting to your tools and setting any guardrails. Thanks to Kindo’s wide array of pre-built integrations, integrating each tool is usually a matter of minutes. For instance, connecting Slack or Jira might take 5 minutes each to create tokens and validate connectivity. There isn’t much development work, it’s mostly configuration.
• Out-of-the-Box Agents
Kindo comes with some pre-built agent templates and playbooks for common tasks (e.g., an incident response template, a CVE patching workflow, etc.). This means you don’t have to build everything from scratch. You can often activate or lightly customize a template agent and have it start performing useful work on day one or two. Many new users get immediate value by deploying one of the top use-case agents (like an “open tickets summarizer” or “cloud compliance checker”) right after installation.
• Pilot and Expansion
Strategically, Kindo often recommends starting with a pilot team or a particular use case (something like “automate user access reviews” or “AI-assisted incident triage”). You could have that pilot in place in a matter of days. From there, expanding to more use cases is usually just a matter of adding agents and integrations, not reworking the core. So the rollout can be incremental and fast per each new workflow.
21. What is a secure sandbox in Kindo?
A secure sandbox is an isolated VM that is created for every agent conversation. Agents use sandboxes as a workspace to perform precise operations (like math, to ensure accuracy), run ad hoc shell commands and Python scripts, download and transform data, handle large connector tool responses without running out of context window, incrementally create documents, and more. Sandboxes are hardened and come with commonly used packages pre-installed so agents can get right to work. Sandboxes can be disabled entirely, forcing agents to rely on their LLM context window and integrations alone, or can have network access disabled, ensuring that the only I/O is through the LLM (e.g. the agent cannot download or upload data within the sandbox). The lifecycle and image for sandboxes are configurable. To ensure Kindo’s governance applies, the sandbox never receives credentials; for authenticated actions, agents are required to use connector tools, not free-for-all sandbox commands.
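The no-credentials rule can be illustrated with plain OS primitives: spawn the agent's code with an empty environment so nothing the platform holds can leak in. Kindo's real sandboxes are isolated VMs; this subprocess sketch only demonstrates the principle:

```python
import os
import subprocess
import sys

os.environ["API_KEY"] = "parent-secret"   # pretend the platform holds a secret

def run_in_sandbox(code: str, timeout: int = 5) -> str:
    """Run a snippet in a child process with a stripped environment."""
    result = subprocess.run(
        [sys.executable, "-c", code],
        env={},                           # nothing inherited: no keys or tokens
        capture_output=True, text=True, timeout=timeout,
    )
    return result.stdout.strip()

# The child cannot see the parent's secret:
leaked = run_in_sandbox("import os; print(os.environ.get('API_KEY', 'missing'))")
```

Because the sandbox holds no credentials, any authenticated action has to route back through a governed connector, which is exactly where RBAC, DLP, and audit logging apply.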