Concepts and Terminology
This glossary covers the core concepts and terminology used when working with the Kindo platform.
AI Models
Generative AI Model
An umbrella term for any machine-learning model the Kindo platform supports. While Kindo functionality is primarily driven by LLMs, the platform also supports other model types.
Embeddings Models
Models that encode content into numeric vectors (embeddings) representing semantic meaning. Used for knowledge retrieval across files and large contexts.
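Retrieval over embeddings works by comparing vector directions: content whose vectors point the same way is semantically similar. A minimal sketch using cosine similarity on toy three-dimensional vectors (real embedding models emit hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of magnitudes; 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for a query and two documents (illustrative values only).
query = [0.9, 0.1, 0.0]
doc_related = [0.8, 0.2, 0.1]
doc_unrelated = [0.0, 0.1, 0.9]

# The semantically closer document scores higher, so it is retrieved first.
assert cosine_similarity(query, doc_related) > cosine_similarity(query, doc_unrelated)
```

A knowledge-retrieval system embeds every file chunk once, then ranks chunks by this score against the embedded user query.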
Multi-modal Models
Models that work with multiple content types, such as text, audio, images, and video. Each model typically supports only a subset of these types.
Large Language Model (LLM)
The fundamental AI model used in requests. Examples include OpenAI o1, Anthropic Claude, and DeepSeek R1.
LLMs used within Kindo are one of two types:
- Worker LLM — The LLM used within Kindo for internal, non-user operations.
- User LLM — The LLM that users or agents make requests to. Within Kindo’s user interface these are referred to as “Models.”
Self-Managed Model
A generative AI model that an administrator manages and configures for Kindo’s use. This may include self-hosted models or cloud deployments such as Anthropic models via AWS Bedrock in GovCloud.
Platform Concepts
Kindo Terminal
The primary user interface within Kindo. At the Terminal, a user makes requests to a User LLM, either directly through a Chat or through an Agent Run.
Chat
A collection of requests from a user to User LLMs through the Kindo Terminal without an Agent. Each Chat maintains its own Memory, enabling the user to communicate with different models across separate historical contexts.
Agent
A Kindo-managed automation that executes a series of steps: sequential, atomic operations. There are three types:
- Chatbots — Agents designed to respond to users with information gathered from an AI model and a knowledge base.
- Workflow Agents — Agents that conduct a set of operations within a system of record, including API calls and external tool invocations.
- Trigger Agents — Workflow agents that lie dormant until a specified event occurs (such as a new ticketing system entry), then launch a response.
Memory
The historical collection of LLM responses from user requests. Memory enables users to mix and match User LLM responses and leverage output from previous interactions. Chats and all types of Agents hold memory.
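Conceptually, Memory behaves like an accumulating message history that is replayed with each new request so the model can leverage earlier output. A minimal sketch of that pattern (an assumed structure for illustration, not Kindo's internal representation):

```python
class ChatMemory:
    """Accumulates prior turns so later requests can build on earlier output."""

    def __init__(self):
        self.messages = []

    def add(self, role, content):
        # role is e.g. "user" or "assistant"; content is the message text.
        self.messages.append({"role": role, "content": content})

    def context(self):
        # The full history replayed to the model alongside the next request.
        return list(self.messages)

history = ChatMemory()
history.add("user", "Summarize the incident report")
history.add("assistant", "Here is a summary of the report ...")
```

On the next request, `history.context()` plus the new prompt is what the User LLM actually sees, which is why prior responses remain usable.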
Run
An instance of interaction between a user and a User LLM brokered by Kindo. Runs result from communications through Chats, Agents, Triggered Agents, and Public API calls.
- Agent Run — The final output of the last LLM Step.
- Step Run — The individual response of a single LLM Step within an Agent.
Kindo API
The set of REST API endpoints exposed by Kindo:
- Inference API — An individual request to a User LLM with no memory.
- Agent API — A call that produces the output of an Agent Run.
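As a rough illustration of what a memoryless Inference API call looks like, the sketch below builds an authenticated REST request with the Python standard library. The endpoint path (`/v1/inference`), payload fields, and bearer-token auth scheme are all illustrative assumptions, not Kindo's published contract; consult the official API reference for the real names.

```python
import json
import urllib.request

def build_inference_request(base_url, api_key, model, prompt):
    # Hypothetical request shape: model name and prompt as a JSON body.
    payload = json.dumps({"model": model, "prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/v1/inference",  # assumed path, for illustration only
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending is a one-liner once the request is built:
# with urllib.request.urlopen(build_inference_request(...)) as resp:
#     result = json.load(resp)
```

Because the Inference API carries no memory, each call must include everything the model needs in the prompt itself.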
Target
The output location of an Agent Run or Step Run. Targets are typically the Kindo Terminal but can also be external systems such as ticketing platforms or APIs.
Resource
A structured data entity provided by an Integration. Examples include Tickets, Users, Emails, Logs, and Metrics.
File
A file within the Kindo Platform. Files can be uploaded by users, provided by Integrations, or generated by AI Models.
Secret
A sensitive text blob consumed on the Kindo Platform, typically representing API keys, access tokens, or environment variables. Secrets are stored in the Secrets Vault.
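A common handling pattern for such values is to read them from the environment rather than hard-coding them, and to mask them before anything is logged. A small sketch of that pattern (the variable name is a placeholder; vault retrieval itself is platform-specific and not shown):

```python
import os

def masked(secret, visible=4):
    # Show only the last few characters, e.g. "************abcd",
    # so a log line never exposes the full credential.
    if len(secret) <= visible:
        return "*" * len(secret)
    return "*" * (len(secret) - visible) + secret[-visible:]

# Placeholder variable name and fallback value, for illustration only.
api_key = os.environ.get("EXAMPLE_API_KEY", "sk-test-1234abcd")
print("Using key:", masked(api_key))
```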
Infrastructure Terminology
Integration
An external system that connects to Kindo to enable data ingestion and outbound actions. For example, the Linear ticketing system provides Resources to Kindo and enables actions such as adding comments to tickets.
Product Offerings
Kindo SaaS
The cloud-hosted version of Kindo, managed by the Kindo team.
Self-Managed Kindo
The version deployed by administrators into an environment they control, such as a public cloud, hybrid cloud, or on-premises infrastructure.