This guide covers all prerequisites and planning considerations before deploying Kindo infrastructure. Proper planning is essential for a successful deployment.
Kindo Terraform Modules Overview
Directory Structure
The Kindo infrastructure is organized into a modular architecture with the following structure:
kindo-terraform-modules/
├── modules/ # Core Terraform modules
│ ├── kindo-infra/ # AWS infrastructure (VPC, EKS, RDS, etc.)
│ ├── kindo-secrets/ # Secrets management
│ ├── kindo-peripheries/ # Supporting services (Unleash, OTEL, etc.)
│ └── kindo-applications/ # Core Kindo applications
├── stacks/ # Pre-configured deployment stacks
│ ├── infra-aws/ # AWS infrastructure deployment
│ ├── secrets/ # Secrets configuration
│ ├── peripheries/ # Peripheral services deployment
│ └── applications/ # Application deployment
└── docs/ # Documentation

Component Descriptions
Core Modules (modules/)
kindo-infra: Base AWS infrastructure module
Creates and manages VPC, subnets, and networking
Provisions EKS cluster with managed node groups
Sets up RDS PostgreSQL for persistent data
Deploys Redis for caching and session management
Configures RabbitMQ for message queuing
Manages S3 buckets for object storage
Sets up CloudWatch logging and monitoring
kindo-secrets: Secrets management module
Creates AWS Secrets Manager entries for all services
Generates secure passwords for databases and services
Manages API keys and external service credentials
Configures environment-specific secrets
Integrates with External Secrets Operator in Kubernetes
kindo-peripheries: Supporting services module
Deploys Unleash for feature flag management
Sets up External Secrets Operator for Kubernetes secret sync
Configures OpenTelemetry collector for observability
Installs ALB Ingress Controller for load balancing
Deploys Presidio for PII detection (optional)
Manages other supporting infrastructure components
kindo-applications: Core application module
Deploys API service (Node.js backend)
Installs Next.js frontend application
Sets up LiteLLM for AI model routing
Deploys worker services (external-poller, external-sync)
Configures Cerbos for authorization
Manages application-specific resources and scaling
Deployment Stacks (stacks/)
Pre-configured Terraform configurations that use the core modules. Each stack includes:
main.tf: Module configuration
provider.tf: AWS and Kubernetes provider setup
variables.tf: Input variable definitions
outputs.tf: Output values
terraform.tfvars.example: Example configuration file
Users primarily work with these stacks and only need to customize the terraform.tfvars file.
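For example, preparing a stack for deployment looks like the following sketch (shown for infra-aws; the same pattern applies to every stack):
# Copy the example configuration and customize it for your environment
cd stacks/infra-aws
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars (region, CIDR, sizing, credentials) before running Terraform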
Deployment Order
The modules must be deployed in the following sequence:
Infrastructure (stacks/infra-aws): Creates the base AWS infrastructure
Secrets (stacks/secrets): Configures all application secrets
Peripheries (stacks/peripheries): Deploys supporting services
Applications (stacks/applications): Deploys the Kindo application stack
Each subsequent module depends on outputs from the previous ones, ensuring proper resource dependencies and configuration flow.
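In practice, an end-to-end run follows this sketch, assuming each stack has a populated terraform.tfvars:
# 1. Base AWS infrastructure
cd stacks/infra-aws && terraform init && terraform apply

# 2. Application secrets
cd ../secrets && terraform init && terraform apply

# 3. Supporting services (Unleash, OTEL, etc.)
cd ../peripheries && terraform init && terraform apply

# 4. Kindo applications
cd ../applications && terraform init && terraform apply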
Table of Contents
AWS Account and IAM Requirements
Required Tools and Software
External Service Requirements
Network Planning
Domain and Certificate Planning
Security Planning
Resource Planning
Cost Estimation
Pre-Deployment Checklist
AWS Account and IAM Requirements
AWS Account Setup
Dedicated AWS Account (Strongly Recommended)
Use a separate AWS account for production deployments
Implement AWS Organizations for multi-account management
Enable CloudTrail for comprehensive audit logging
Enable AWS Config for compliance monitoring
Required AWS Regions
Ensure your chosen region supports all required services:
Amazon EKS
Amazon RDS (PostgreSQL)
Amazon ElastiCache (Redis)
Amazon MQ (RabbitMQ)
Amazon SES (if using for email)
IAM Permissions Requirements
You need an IAM user or role with permissions to create and manage AWS resources. Below are the minimum required permissions:
Option 1: Administrator Access (Simplest for Initial Setup)
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "*", "Resource": "*" } ]}
Option 2: Granular Permissions (Recommended for Production)
{ "Version": "2012-10-17", "Statement": [ { "Sid": "NetworkingPermissions", "Effect": "Allow", "Action": [ "ec2:*VPC*", "ec2:*Subnet*", "ec2:*Gateway*", "ec2:*Route*", "ec2:*SecurityGroup*", "ec2:*NetworkAcl*", "ec2:*InternetGateway*", "ec2:*NatGateway*", "ec2:*Address*", "ec2:*Endpoint*", "ec2:Describe*", "ec2:CreateTags", "ec2:DeleteTags" ], "Resource": "*" }, { "Sid": "EKSPermissions", "Effect": "Allow", "Action": [ "eks:*", "ec2:*Instance*", "ec2:*Volume*", "ec2:*Image*", "ec2:*KeyPair*", "ec2:*LaunchTemplate*", "autoscaling:*", "elasticloadbalancing:*" ], "Resource": "*" }, { "Sid": "IAMPermissions", "Effect": "Allow", "Action": [ "iam:CreateRole", "iam:DeleteRole", "iam:GetRole", "iam:ListRoles", "iam:UpdateRole", "iam:AttachRolePolicy", "iam:DetachRolePolicy", "iam:GetRolePolicy", "iam:PutRolePolicy", "iam:DeleteRolePolicy", "iam:ListRolePolicies", "iam:CreateInstanceProfile", "iam:DeleteInstanceProfile", "iam:GetInstanceProfile", "iam:AddRoleToInstanceProfile", "iam:RemoveRoleFromInstanceProfile", "iam:CreateOpenIDConnectProvider", "iam:DeleteOpenIDConnectProvider", "iam:GetOpenIDConnectProvider", "iam:TagOpenIDConnectProvider", "iam:UntagOpenIDConnectProvider", "iam:CreateServiceLinkedRole", "iam:PassRole", "iam:CreatePolicy", "iam:DeletePolicy", "iam:GetPolicy", "iam:GetPolicyVersion", "iam:ListPolicyVersions", "iam:TagRole", "iam:UntagRole" ], "Resource": "*" }, { "Sid": "DatabasePermissions", "Effect": "Allow", "Action": [ "rds:*", "elasticache:*", "mq:*" ], "Resource": "*" }, { "Sid": "StoragePermissions", "Effect": "Allow", "Action": [ "s3:*", "secretsmanager:*", "kms:*" ], "Resource": "*" }, { "Sid": "MonitoringPermissions", "Effect": "Allow", "Action": [ "cloudwatch:*", "logs:*", "events:*" ], "Resource": "*" }, { "Sid": "DNSPermissions", "Effect": "Allow", "Action": [ "route53:*", "acm:*" ], "Resource": "*" }, { "Sid": "EmailPermissions", "Effect": "Allow", "Action": [ "ses:*" ], "Resource": "*" }, { "Sid": "SSMPermissions", "Effect": "Allow", "Action": [ "ssm:*", "ssmmessages:*", "ec2messages:*" ], "Resource": "*" }, { "Sid": "KinesisPermissions", "Effect": "Allow", "Action": [ "kinesis:*", "firehose:*" ], "Resource": "*" } ]}Service Quotas
Check and increase the following AWS service quotas if needed:
| Service | Resource | Minimum Required | Recommended |
|---|---|---|---|
| EC2 | Elastic IPs | 5 | 10 |
| EC2 | VPCs per region | 2 | 5 |
| EC2 | Security groups per VPC | 50 | 100 |
| EC2 | On-Demand instances (vCPUs) | 32 | 64 |
| EKS | Clusters per region | 1 | 3 |
| RDS | DB instances | 4 | 10 |
| ElastiCache | Nodes | 5 | 20 |
| MQ | Brokers | 2 | 5 |
| S3 | Buckets | 10 | 100 |
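Quotas can be inspected and raised with the Service Quotas API; a sketch (the quota code shown for Elastic IPs is an example — confirm the exact code with list-service-quotas):
# List EC2 quotas to find the quota code you need
aws service-quotas list-service-quotas --service-code ec2 \
  --query 'Quotas[*].[QuotaCode,QuotaName,Value]' --output table

# Request an increase (example: EC2-VPC Elastic IPs)
aws service-quotas request-service-quota-increase \
  --service-code ec2 --quota-code L-0263D0A3 --desired-value 10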
AWS CLI Configuration
# Configure AWS credentials using environment variables (recommended)
export AWS_PROFILE=your-profile-name
export AWS_REGION=us-west-2 # or your preferred region

# Or configure using the AWS CLI
aws configure --profile your-profile-name

# Verify access
aws sts get-caller-identity

# Test required permissions
aws ec2 describe-vpcs
aws eks list-clusters
aws rds describe-db-instances

Required Tools and Software
Local Development Environment
Install the following tools on your local machine with the specified minimum versions:
Terraform
Required Version: >= 1.12.0 (Terraform 1.12.2 recommended)
# macOS
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
# Linux
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
# Windows (using Chocolatey)
choco install terraform
# Verify version
terraform version # Should show >= 1.12.0
Helm
Required Version: >= 3.8.0 (for OCI registry support)
# macOS
brew install helm
# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Windows (using Chocolatey)
choco install kubernetes-helm
# Verify version
helm version # Should show >= 3.8.0
kubectl
Required Version: >= 1.24 (should match your EKS cluster version ±1 minor version)
# macOS
brew install kubectl
# Linux
curl -LO "<https://dl.k8s.io/release/$>(curl -L -s <https://dl.k8s.io/release/stable.txt>)/bin/linux/amd64/kubectl"sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# Windows (using Chocolatey)
choco install kubernetes-cli
# Verify version
kubectl version --client # Should show >= 1.24
AWS CLI
Required Version: >= 2.0
# macOS
brew install awscli
# Linux
curl "<https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip>" -o "awscliv2.zip"unzip awscliv2.zip
sudo ./aws/install
# Windows
# Download and run the MSI installer from https://aws.amazon.com/cli/
# Verify version
aws --version # Should show >= 2.0
Additional Required Tools
# jq for JSON processing
# macOS
brew install jq
# Ubuntu/Debian
apt-get install jq
# RHEL/CentOS
yum install jq
# Python 3.8+ (for scripts and secrets generation)
# macOS
brew install python@3.11
# Ubuntu/Debian
apt-get install python3.11 python3-pip
# Git (for version control)
# macOS
brew install git
# Ubuntu/Debian
apt-get install git

Verify All Installations
Run this script to verify all tools are properly installed:
#!/bin/bash
echo "Checking required tools..."
echo ""

# Check Terraform
if command -v terraform &> /dev/null; then
  tf_version=$(terraform version -json | jq -r '.terraform_version')
  echo "✓ Terraform: $tf_version"
  if [[ ! "$tf_version" =~ ^1\.(1[2-9]|[2-9][0-9]) ]]; then
    echo "  ⚠ Warning: Terraform version should be >= 1.12.0"
  fi
else
  echo "✗ Terraform: Not installed"
fi

# Check Helm
if command -v helm &> /dev/null; then
  helm_version=$(helm version --short | cut -d: -f2 | tr -d ' v')
  echo "✓ Helm: $helm_version"
else
  echo "✗ Helm: Not installed"
fi

# Check kubectl
if command -v kubectl &> /dev/null; then
  kubectl_version=$(kubectl version --client -o json | jq -r '.clientVersion.gitVersion')
  echo "✓ kubectl: $kubectl_version"
else
  echo "✗ kubectl: Not installed"
fi

# Check AWS CLI
if command -v aws &> /dev/null; then
  aws_version=$(aws --version 2>&1 | cut -d' ' -f1 | cut -d/ -f2)
  echo "✓ AWS CLI: $aws_version"
else
  echo "✗ AWS CLI: Not installed"
fi

# Check Python
if command -v python3 &> /dev/null; then
  python_version=$(python3 --version | cut -d' ' -f2)
  echo "✓ Python: $python_version"
else
  echo "✗ Python 3: Not installed"
fi

# Check jq
if command -v jq &> /dev/null; then
  jq_version=$(jq --version | cut -d- -f2)
  echo "✓ jq: $jq_version"
else
  echo "✗ jq: Not installed"
fi

# Check Git
if command -v git &> /dev/null; then
  git_version=$(git --version | cut -d' ' -f3)
  echo "✓ Git: $git_version"
else
  echo "✗ Git: Not installed"
fi

External Service Requirements
Before deploying the Kindo infrastructure, you must set up accounts and obtain credentials for the following external services.
Authentication Services
SSOready
SSOready is an open-source alternative to WorkOS that can be self-hosted for complete control over authentication.
Setup Requirements:
Deploy SSOready in your infrastructure (can be deployed alongside Kindo)
Configure SAML/OIDC providers
Set up the SSOready API endpoint
Configuration:
Configure SAML metadata for enterprise SSO providers
Set up redirect URIs for your application
Generate API keys for service communication
Note: You must choose either WorkOS OR SSOready, not both. The choice depends on your requirements:
Choose WorkOS for: Managed service, quick setup, enterprise features out-of-box
Choose SSOready for: Self-hosted control, open-source, custom authentication flows
Vector Database Services
Pinecone (Required)
Required for semantic search and AI-powered features.
Setup Requirements:
Create a Pinecone account at pinecone.io
Create a non-serverless index with the following specifications:
Metric: cosine
Dimensions: 1536 (matches OpenAI embeddings)
Pod Type: p1.x1 or higher (based on data volume)
Important: Kindo requires a pod-based (non-serverless) Pinecone index. Serverless Pinecone is not currently supported.
Credentials Needed:
Pinecone API Key: Found in Pinecone Console → API Keys
Pinecone Environment: Your Pinecone environment/region
Index Name: Name of your created index
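To sanity-check the credentials before deployment, you can list your indexes with curl (a sketch; this assumes the legacy pod-based control-plane endpoint, and the environment value is a placeholder):
# List Pinecone indexes to confirm the API key and environment are valid
export PINECONE_API_KEY=your-api-key
export PINECONE_ENVIRONMENT=your-environment # e.g. us-west1-gcp
curl -s -H "Api-Key: $PINECONE_API_KEY" "https://controller.$PINECONE_ENVIRONMENT.pinecone.io/databases"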
AI/ML Services
OpenAI (Optional)
Used for AI-powered features and embeddings generation when OpenAI models are enabled.
Setup Requirements:
Create an OpenAI account at platform.openai.com
Set up billing and usage limits
Generate API key with appropriate permissions
Credentials Needed:
OpenAI API Key: sk-... format key
Anthropic (Optional)
Optional for additional AI model support via LiteLLM.
Setup Requirements:
Create an Anthropic account at console.anthropic.com
Generate API key
Credentials Needed:
Anthropic API Key: sk-ant-... format key
Integration Services
Merge API (Optional)
For unified API integrations with third-party services.
Setup Requirements:
Create a Merge account at merge.dev
Configure integrations as needed
Credentials Needed:
Merge API Key: Found in Merge Dashboard
Merge Webhook Security: Webhook validation secret
Email Services
Amazon SES (Recommended)
For sending application notifications and emails.
Setup Requirements:
Configure SES in your AWS account
Verify sending domains
Move out of sandbox mode for production use
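The domain verification step can be scripted with the AWS CLI (a sketch; example.com is a placeholder for your sending domain):
# Start domain verification and retrieve the TXT token to publish in DNS
aws ses verify-domain-identity --domain example.com

# Check verification status once the DNS record has propagated
aws ses get-identity-verification-attributes --identities example.com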
Alternative Options:
SMTP-compatible email service
SendGrid, Mailgun, or similar services
Syslog Server (Required)
A dedicated syslog endpoint is mandatory for audit log collection.
Requirements:
Syslog server supporting the RFC 3164 protocol
Network accessible from Kubernetes cluster
Typically uses UDP port 514 or TCP port 514
Must support audit log retention (1+ years for compliance)
Configuration Needed:
Syslog Host: Hostname or IP address
Syslog Port: Usually 514
Protocol: UDP (standard) or TCP
Deployment Options:
Use Existing Syslog Server: If you already have a syslog server in your network, you can configure Kindo to use it by providing the appropriate host, port, and protocol settings
Deploy New Syslog Solution: The kindo-infra module can optionally deploy a managed syslog solution using AWS services
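Before deployment, it is worth confirming that the endpoint is reachable from your network (a sketch using netcat; host and port are placeholders):
# Send a test RFC 3164 message over UDP (priority 14 = facility user, severity info)
echo "<14>$(date '+%b %d %H:%M:%S') kindo-test: syslog connectivity check" | nc -u -w1 syslog.example.com 514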
Monitoring and Observability
OpenTelemetry Collector (Recommended)
For metrics and tracing collection.
Deployment Options:
Use Existing OTEL Collector: If you already have an OpenTelemetry Collector in your infrastructure, Kindo can be configured to connect to it
Deploy ADOT EKS Addon: The kindo-infra module can deploy the AWS Distro for OpenTelemetry (ADOT) EKS addon and configure it to send traces to AWS X-Ray and logs to CloudWatch Logs
Setup Requirements (for existing collector):
OpenTelemetry Collector accessible from your Kubernetes cluster
Configure exporters to your monitoring backend (Prometheus, Datadog, New Relic, etc.)
Container Registry Access
Kindo Private Registry (Required)
Access to Kindo’s private container registry and Helm charts.
Provided by Kindo:
Registry URL: Provided by Kindo team
Username: Provided by Kindo team
Password/Token: Provided by Kindo team
Helm Registry: OCI registry URL provided by Kindo team
Requirements:
Kubernetes nodes must have internet access to pull images
Configure image pull secrets in Kubernetes
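Creating the pull secret typically looks like the sketch below; the server, username, and token are the values supplied by the Kindo team, and the kindo namespace is an assumption:
# Create an image pull secret for the Kindo private registry
kubectl create secret docker-registry kindo-registry-creds \
  --docker-server=REGISTRY_URL_FROM_KINDO \
  --docker-username=USERNAME_FROM_KINDO \
  --docker-password=TOKEN_FROM_KINDO \
  --namespace kindo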
Network Planning
VPC CIDR Selection
Choosing the right VPC CIDR is critical for avoiding conflicts and enabling connectivity.
CIDR Selection Criteria
Check Existing Networks
# List existing VPCs in your account
aws ec2 describe-vpcs --query 'Vpcs[*].[VpcId,CidrBlock]' --output table

Consider Future Connectivity
Site-to-site VPN connections
AWS Direct Connect
VPC Peering
AWS Transit Gateway connections
Client VPN endpoints
Recommended CIDR Ranges
| Environment | Recommended CIDR | Subnet Capacity | Rationale |
|---|---|---|---|
| Production | 10.0.0.0/16 | 65,536 IPs | Large address space for growth |
| Staging | 10.1.0.0/16 | 65,536 IPs | Isolated from production |
| Development | 10.2.0.0/16 | 65,536 IPs | Isolated from other environments |
Avoid Common Conflicts
Corporate networks often use: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
AWS services use: 169.254.0.0/16 (link-local)
Docker default: 172.17.0.0/16
Kubernetes pod CIDR: Often 10.244.0.0/16 or 192.168.0.0/16
Subnet Planning
The infrastructure module automatically creates subnets across availability zones:
Example for VPC CIDR: 10.0.0.0/16
Public Subnets (3 AZs):
- 10.0.0.0/20 (AZ-a) - 4,096 IPs
- 10.0.16.0/20 (AZ-b) - 4,096 IPs
- 10.0.32.0/20 (AZ-c) - 4,096 IPs
Private Subnets (3 AZs):
- 10.0.48.0/20 (AZ-a) - 4,096 IPs
- 10.0.64.0/20 (AZ-b) - 4,096 IPs
- 10.0.80.0/20 (AZ-c) - 4,096 IPs
Database Subnets (if RDS Multi-AZ):
- Automatically created in private subnets

Domain and Certificate Planning
Domain Requirements
Public Domain
Required for SSL certificates
Can be registered with any registrar
Must be able to modify DNS records
Subdomain Strategy
| Component | Subdomain Example | Purpose |
|---|---|---|
| Main App | app.example.com | Primary application access |
| API | api.example.com | API endpoint |
| Admin | admin.example.com | Administrative interfaces |
| Monitoring | monitoring.example.com | Observability dashboards |
SSL Certificate Options
AWS Certificate Manager (ACM) - Recommended
Free SSL certificates
Automatic renewal
Integrated with ALB/NLB
Let’s Encrypt with cert-manager
Free SSL certificates
More flexibility
Requires additional configuration
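With the recommended ACM option, requesting a DNS-validated certificate is a single CLI call (a sketch; the domain names are placeholders):
# Request a certificate covering the app domain and a wildcard for subdomains
aws acm request-certificate \
  --domain-name app.example.com \
  --subject-alternative-names "*.example.com" \
  --validation-method DNS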
Security Planning
Encryption Requirements
Data at Rest
RDS: Enable encryption (AWS KMS)
EBS volumes: Enable encryption by default
S3 buckets: Enable default encryption (SSE-S3 or SSE-KMS)
ElastiCache: Enable encryption (Redis 3.2.6+)
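For example, account-level EBS encryption by default can be checked and enabled per region with the AWS CLI:
# Check whether EBS encryption by default is enabled in the current region
aws ec2 get-ebs-encryption-by-default

# Enable it so all new volumes are encrypted automatically
aws ec2 enable-ebs-encryption-by-default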
Data in Transit
Force HTTPS for all web traffic
Use SSL/TLS for database connections
Enable encryption for Redis replication
Encrypt inter-service communication with service mesh (optional)
Access Control
IAM Roles and Policies
Use IRSA (IAM Roles for Service Accounts) for pods (see the sketch after this list)
Implement least privilege principle
Regular access reviews and rotation
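IRSA binds an IAM role to a pod through an annotation on its Kubernetes service account; a minimal sketch (service account name, namespace, and role ARN are placeholders):
# Attach an IAM role to a service account via the IRSA annotation
kubectl annotate serviceaccount my-app-sa \
  eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/my-app-role \
  --namespace kindo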
Network Security
Use private subnets for sensitive workloads
Implement Kubernetes network policies
Enable VPC Flow Logs for traffic analysis (see the example after this list)
Use security groups as virtual firewalls
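Enabling the VPC Flow Logs mentioned above is one CLI call (a sketch; the VPC ID, log group, and IAM role ARN are placeholders, and the role must allow delivery to CloudWatch Logs):
# Capture all traffic for the VPC into a CloudWatch Logs group
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /vpc/flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role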
Resource Planning
Compute Resources
T-Shirt Sizing for Deployments
| Size | Use Case | Node Configuration | Database Sizing | Estimated Monthly Cost |
|---|---|---|---|---|
| dev | Development/Testing | 1-3 t3.medium nodes | db.t3.micro RDS | ~$150-200 |
| small | Small Production | 2-5 t3.large nodes | db.t3.small RDS | ~$400-500 |
| medium | Standard Production | 3-8 t3.xlarge nodes | db.t3.medium RDS | ~$800-1000 |
| large | High-Traffic Production | 5-15 m5.xlarge nodes | db.m5.large RDS Multi-AZ | ~$2000-3000 |
| xlarge | Enterprise Production | 10-30 m5.2xlarge nodes | db.m5.xlarge RDS Multi-AZ | ~$5000+ |
Storage Planning
S3 Buckets
Uploads bucket for application files
Audit logs bucket with lifecycle policies
Backup bucket for disaster recovery
EBS Volumes
Use GP3 for better price/performance
100GB minimum for node root volumes
Consider separate volumes for container storage
Cost Estimation
Monthly Cost Breakdown by Size
| Deployment Size | Estimated Monthly Cost | Suitable For |
|---|---|---|
| Development | $150-200 | Development and testing |
| Small | $400-500 | Startups, small teams (<50 users) |
| Medium | $800-1,000 | Growing companies (50-200 users) |
| Large | $2,000-3,000 | Established companies (200-1000 users) |
| XLarge | $5,000+ | Enterprise deployments (1000+ users) |
Cost Optimization Tips
Use Spot instances for non-critical workloads
Implement aggressive auto-scaling policies
Use Reserved Instances or Savings Plans for predictable workloads
Enable S3 Intelligent-Tiering for automatic cost optimization
Review AWS Cost Explorer weekly
Set up billing alerts
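Billing alerts can also be scripted with AWS Budgets (a sketch; it assumes budget.json and notifications.json files defining the monthly limit and alert subscribers):
# Create a monthly cost budget with notifications for the current account
aws budgets create-budget \
  --account-id "$(aws sts get-caller-identity --query Account --output text)" \
  --budget file://budget.json \
  --notifications-with-subscribers file://notifications.json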
Pre-Deployment Checklist
Before proceeding with deployment, ensure:
AWS Environment
[ ] AWS account is properly configured
[ ] IAM user/role has required permissions (test with dry-run operations)
[ ] Service quotas are sufficient for your deployment size
[ ] AWS CLI is configured with correct profile
[ ] Selected AWS region supports all required services
Tools and Software
[ ] Terraform >= 1.12.0 installed and verified
[ ] Helm >= 3.8.0 installed and verified
[ ] kubectl >= 1.24 installed and verified
[ ] AWS CLI >= 2.0 installed and configured
[ ] Python >= 3.8 installed
[ ] jq installed for JSON processing
[ ] Git installed and repository cloned
External Services
[ ] Authentication: Either WorkOS OR SSOready configured (choose one)
[ ] API credentials obtained
[ ] Redirect URIs planned
[ ] SSO providers configured (if applicable)
[ ] Pinecone: Account created, index configured, API key obtained
[ ] OpenAI: Account created, API key generated, billing configured
[ ] Anthropic: API key obtained (if using)
[ ] Merge API: Credentials obtained (if using)
[ ] Email Service: SES or SMTP service configured
[ ] Container Registry: Kindo registry credentials received
Network Planning
[ ] VPC CIDR chosen and documented (no conflicts)
[ ] Subnet strategy defined
[ ] Security group rules planned
[ ] VPN/Direct Connect requirements identified (if applicable)
Domain and Certificates
[ ] Domain registered and DNS control verified
[ ] Subdomain naming convention established
[ ] SSL certificate strategy decided (ACM recommended)
[ ] Route53 hosted zone ready (or external DNS provider configured)
Security and Compliance
[ ] Encryption requirements defined (at-rest and in-transit)
[ ] Compliance requirements identified (HIPAA, SOC2, etc.)
[ ] Backup and disaster recovery strategy defined
[ ] Access control and audit logging requirements specified
Resource Planning
[ ] Deployment size selected (dev/small/medium/large/xlarge)
[ ] Node group sizing determined
[ ] Database sizing determined
[ ] Storage requirements estimated
[ ] High availability requirements defined
Cost Management
[ ] Monthly budget approved
[ ] Cost alerts configured in AWS
[ ] Resource tagging strategy defined
[ ] Cost optimization opportunities identified
Next Steps
Once all prerequisites are met, proceed to:
Infrastructure Deployment - Deploy the base AWS infrastructure using the kindo-infra module
Secrets Configuration - Generate and configure application secrets
Application Deployment - Deploy the Kindo application stack