Glossary
Quick reference for terms you will encounter across the Kindo documentation.
| Term | Definition |
|---|---|
| Agent | A Kindo-managed automation that executes a series of steps. Types include Chatbots, Workflow Agents, and Trigger Agents. |
| Agent Run | The output of a single agent execution: the response of its final LLM Step. |
| Agent Step | An atomic operation within an agent: LLM Step, Action Step, or API Action Step. |
| Audit Log | A record of user, AI agent, and administrative actions within Kindo. Stored in the database by default; optionally forwarded to syslog in Self-Managed deployments. See Security Controls. |
| Auth Backend | The component managing enterprise SSO and authentication. Self-Managed Kindo uses SSOReady. |
| Chat | A series of requests from a user to LLMs through the Kindo Terminal, made outside of any agent. |
| Chatbot | An agent that responds to users using information from an AI model and a knowledge base. |
| DeepHat | Kindo’s proprietary cybersecurity-focused LLM, optimized for DevSecOps tasks and adversary simulation. |
| DLP (Data Loss Prevention) | Filters that tokenize or redact sensitive data before it reaches AI models. |
| Embeddings Model | A model that converts content into numeric vectors for semantic search and retrieval. |
| Environment | (Self-Managed) The Kubernetes platform where Kindo’s Helm Chart runs. |
| File | A file within the Kindo Platform — uploaded by users, provided by integrations, or generated by AI. |
| Integration | An external system connected to Kindo for data ingestion and outbound actions (e.g., Jira, Slack). |
| Kindo API | REST API endpoints: Inference API (single LLM request) and Agent API (agent run output). |
| Kindo SaaS | The cloud-hosted version of Kindo, managed by the Kindo team. |
| Kindo Terminal | The primary user interface for making requests to AI models. |
| LLM | Large Language Model — the AI model used in requests (e.g., GPT-4, Claude, DeepSeek R1). |
| LLM Step | An agent step that requests a response from an LLM. |
| Log Backend | The destination to which Kindo streams its Audit Log (a Syslog server in Self-Managed deployments). |
| MCP | Model Context Protocol — an open standard for connecting AI agents to external tools. |
| Memory | The historical collection of LLM responses within a chat or agent session. |
| Peripheries | Third-party services Kindo depends on (Unleash, Qdrant, Presidio, etc.). |
| Redis Streams | The Redis data structure Kindo currently uses for real-time conversation streaming between backend services. Requires a non-sharded Redis deployment. See Infrastructure Requirements. |
| Resource | A structured data entity from an integration (Tickets, Users, Emails, Logs). |
| Run | An instance of interaction between a user and an LLM brokered by Kindo. |
| Sandbox | An isolated VM created per agent conversation for secure code execution. |
| Secret | A sensitive value (API key, token, etc.) stored in the Secrets Vault. |
| Self-Managed Kindo (SMK) | Kindo deployed in customer-controlled infrastructure via Kubernetes Helm Chart. |
| Self-Managed Model | An AI model hosted and managed by the customer rather than a cloud provider. |
| State DB | The backend relational database (PostgreSQL) holding Kindo’s application state. |
| Step Run | The individual response of a single LLM Step within an agent. |
| Sync Backend | The component that syncs Kindo with external SaaS services (ticketing, storage, etc.). |
| Target | The output destination of an agent run (Terminal, ticketing system, API endpoint). |
| Trigger Agent | An agent that activates in response to a specific event in an integrated system. |
| Unleash | Feature flag management platform used by Kindo for model configuration and feature toggles. |
| User LLM | An LLM that users or agents make requests to. Shown as “Models” in the Kindo UI. |
| Vector DB | Vector database used for RAG and semantic search (Pinecone or Qdrant). |
| Worker LLM | An LLM used internally by Kindo for non-user operations. |
| Workflow Agent | An agent that conducts operations within systems of record, including API calls. |
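Several entries above (Embeddings Model, Vector DB, Memory) revolve around one idea: content is converted to numeric vectors, and retrieval means finding the stored vectors closest to a query vector. The sketch below illustrates that mechanic in miniature. It is not Kindo, Pinecone, or Qdrant code; the three-dimensional vectors are hypothetical stand-ins for real embedding output, which typically has hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors; 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

# Toy "vector DB": documents stored alongside their (hypothetical) embeddings.
index = {
    "reset a user password": [0.9, 0.1, 0.0],
    "rotate an API key":     [0.7, 0.3, 0.1],
    "quarterly revenue":     [0.0, 0.2, 0.9],
}

def search(query_vector, k=1):
    """Return the k documents whose stored vectors are closest to the query."""
    ranked = sorted(
        index,
        key=lambda doc: cosine_similarity(query_vector, index[doc]),
        reverse=True,
    )
    return ranked[:k]

# A query embedding near the credential-related entries ranks them first.
print(search([0.8, 0.2, 0.05], k=2))
```

In a real deployment the embeddings model produces the vectors and the Vector DB (Pinecone or Qdrant, per the table above) performs this nearest-neighbor ranking at scale with approximate search rather than a full sort.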