AWS Secrets Management Guide

This guide walks you through configuring and deploying application secrets using the kindo-secrets module.

Table of Contents

  1. Quick Start
  2. Understanding the Secrets Module
  3. Infrastructure Output Consumption
  4. Core Configuration Parameters
  5. API Keys and External Services
  6. Environment Templates
  7. Advanced Configuration
  8. Deployment Process
  9. Post-Deployment Verification
  10. Troubleshooting

Quick Start

1. Set Up Your Deployment Directory

# Navigate to the secrets example directory
cd kindo-modules/examples/kindo-secrets-example
# Copy the example configuration
cp terraform.tfvars.example terraform.tfvars
# Edit your configuration
vi terraform.tfvars

2. Minimal Configuration

Here’s the absolute minimum you need in terraform.tfvars:

# Core configuration (REQUIRED)
project_name = "mycompany"
environment  = "production"
aws_region   = "us-west-2"

# External services (REQUIRED)
pinecone_api_key = "pc-xxxxx"  # Vector database

# At least ONE LLM provider (REQUIRED)
openai_api_key = "sk-xxxxx"     # Or use anthropic_api_key, etc.

# Authentication (Choose ONE)
workos_api_key   = "sk_xxxxx"   # If using WorkOS
workos_client_id = "client_xxxxx"
# OR
# ssoready_api_key = "xxxxx"     # If using SSOready

# Integration services (REQUIRED for full functionality)
merge_api_key = "xxxxx"
merge_webhook_security = "whsec_xxxxx"

3. Deploy

# Set your AWS profile
export AWS_PROFILE=your-aws-profile
# Initialize Terraform
terraform init
# Review the plan
terraform plan
# Deploy
terraform apply

Understanding the Secrets Module

What the Module Does

The kindo-secrets module:

  1. Consumes infrastructure outputs from the previously deployed kindo-infra module
  2. Generates random secrets for internal service communication
  3. Processes environment templates replacing placeholders with actual values
  4. Creates AWS Secrets Manager entries for each application
  5. Outputs configuration for use by the application deployment
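
Conceptually, the example directory consumes the module like the sketch below. The source path and exact variable wiring are assumptions based on the example layout, not the module's definitive interface:

# Hypothetical sketch of how the example wires up the module
module "kindo_secrets" {
  source = "../../modules/kindo-secrets"  # assumed path

  project_name = var.project_name
  environment  = var.environment
  aws_region   = var.aws_region

  generate_random_secrets = var.generate_random_secrets

  # External API keys flow straight through from terraform.tfvars
  pinecone_api_key = var.pinecone_api_key
  openai_api_key   = var.openai_api_key
  merge_api_key    = var.merge_api_key
}

# Secret names are exposed for the later application deployment stage
output "aws_secret_names" {
  value = module.kindo_secrets.aws_secret_names  # assumed module output name
}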

Module Architecture

Infrastructure Outputs ─┐
                        ├─> kindo-secrets module ─> AWS Secrets Manager
External API Keys ──────┤                           │
Random Secrets ─────────┘                           └─> Application configs

Secret Naming Convention

Secrets follow a consistent pattern in AWS Secrets Manager:

{project}-{environment}-{service}-config

Examples:
- mycompany-production-api-config
- mycompany-production-litellm-config
- mycompany-staging-next-config
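
Inside the module, these names are presumably assembled from the core variables. A minimal sketch of the pattern (resource wiring assumed, not taken from the module source):

# Hypothetical sketch of a per-service secret and its JSON payload
resource "aws_secretsmanager_secret" "api" {
  name = "${var.project_name}-${var.environment}-api-config"

  tags = {
    Project     = var.project_name
    Environment = var.environment
  }
}

resource "aws_secretsmanager_secret_version" "api" {
  secret_id     = aws_secretsmanager_secret.api.id
  secret_string = jsonencode(local.api_config)  # assumed local holding the rendered env map
}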

Infrastructure Output Consumption

The try() Pattern

The secrets module uses Terraform’s try() function to gracefully handle infrastructure outputs. This allows flexibility in deployment scenarios:

# Example from the module
locals {
  # Try to get from infrastructure output, fall back to variable
  base_domain = try(
    data.terraform_remote_state.infrastructure.outputs.base_domain,
    local.infra_outputs.base_domain,
    var.fallback_domain  # Ultimate fallback
  )
}

How Infrastructure Outputs Are Consumed

Automatic consumption (default):

# The module automatically reads from ../kindo-infra-aws-example/terraform.tfstate
# No configuration needed if using standard setup
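
With the standard layout, that default corresponds to a local-backend remote state data source roughly like this (a sketch assuming the path from the comment above):

# Sketch of the default data source using the local backend
data "terraform_remote_state" "infrastructure" {
  backend = "local"

  config = {
    path = "../kindo-infra-aws-example/terraform.tfstate"
  }
}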

Manual override (if needed):

# Override specific infrastructure values in terraform.tfvars
infrastructure_outputs = {
  postgres_main_connection_string = "postgresql://..."
  redis_connection_string = "redis://..."
  # ... other overrides
}

Key Infrastructure Values Used

| Infrastructure Output | Used For | Fallback Behavior |
|----------------------|----------|-------------------|
| base_domain | Generating app/api domains | Falls back to “example.com” |
| postgres_*_connection_string | Database connections | Must be provided if not from infra |
| redis_connection_string | Cache configuration | Must be provided if not from infra |
| rabbitmq_connection_string | Message queue | Must be provided if not from infra |
| s3_uploads_bucket | File storage | Must be provided if not from infra |
| syslog_endpoint | Audit logging | Optional, can be empty |
| smtp_* | Email configuration | Optional, can use external SMTP |

Core Configuration Parameters

Project Identification

project_name = "mycompany"
environment  = "production"
aws_region   = "us-west-2"

What these do:

  • project_name: Used in secret names and resource tagging
  • environment: Distinguishes between deployments (dev/staging/production)
  • aws_region: Where secrets are stored (should match infrastructure)

Deployment Environment

deployment_environment = "production"  # or "development", "staging"

What this does:

  • Sets application behavior (logging levels, debug features)
  • Affects certain security settings
  • Controls feature flags defaults

Secret Generation

generate_random_secrets = true  # Recommended

What this does:

  • true: Auto-generates secure random values for:
  • NextAuth secret (session encryption)
  • Key Encryption Key (data encryption)
  • Internal API keys (service-to-service auth)
  • Unleash tokens (feature flags)
  • false: You must provide these values manually
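
Generation of this kind is typically implemented with Terraform's random provider; a rough sketch of the pattern (resource names, lengths, and the fallback variable are assumptions, not the module's actual implementation):

# Hypothetical sketch of conditional secret generation
resource "random_password" "nextauth_secret" {
  count   = var.generate_random_secrets ? 1 : 0
  length  = 64
  special = false
}

locals {
  # Use the generated value, or fall back to a manually supplied variable
  nextauth_secret = var.generate_random_secrets ? random_password.nextauth_secret[0].result : var.nextauth_secret  # assumed variable name
}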

API Keys and External Services

AI/LLM Providers

At least ONE is required:

# Option 1: OpenAI (Most common)
openai_api_key = "sk-xxxxx"

# Option 2: Anthropic Claude
anthropic_api_key = "sk-ant-xxxxx"

# Option 3: Azure OpenAI (Enterprise)
azure_openai_api_key = "xxxxx"
azure_openai_endpoint = "https://myinstance.openai.azure.com"
azure_openai_deployment = "gpt-4"

# Additional providers (optional)
groq_api_key = ""           # Fast inference
together_ai_api_key = ""    # Open source models
cohere_api_key = ""         # Specialized NLP
deepseek_api_key = ""       # Code generation

How to choose:

  • OpenAI: Best general support, most models
  • Anthropic: Superior for complex reasoning
  • Azure OpenAI: Enterprise compliance (SOC2, HIPAA)
  • Others: Specific use cases or cost optimization

Vector Database

pinecone_api_key = "pc-xxxxx"
# Optional: Override defaults
# pinecone_environment = "us-east-1-aws"
# pinecone_index_name = "kindo-embeddings"

What this does:

  • Enables semantic search functionality
  • Powers RAG (Retrieval Augmented Generation)
  • Stores and retrieves document embeddings

Authentication Services

Choose ONE:

# Option 1: WorkOS (Managed service)
workos_api_key   = "sk_xxxxx"
workos_client_id = "client_xxxxx"

# Option 2: SSOready (Self-hosted)
ssoready_api_endpoint = "https://sso.mycompany.com"
ssoready_api_key = "xxxxx"

Decision factors:

  • WorkOS: Quick setup, enterprise SSO support, managed service
  • SSOready: Full control, self-hosted, custom authentication flows

Integration Services

# Merge.dev (Required for integrations)
merge_api_key = "xxxxx"
merge_webhook_security = "whsec_xxxxx"

What this enables:

  • CRM integrations (Salesforce, HubSpot)
  • HRIS integrations (Workday, BambooHR)
  • Unified API for third-party services

Optional Service Integrations

# Additional integrations (provide API keys if using these services)
slack_app_token = ""     # Slack notifications
segment_write_key = ""   # Analytics
stripe_api_key = ""      # Payments
twilio_account_sid = ""  # SMS notifications
twilio_auth_token = ""   # SMS notifications

Note: Service deployment flags like enable_presidio and enable_cerbos are configured in the kindo-peripheries module, not here. This module only handles secrets and API keys.

Environment Templates

Understanding Template Processing

The module processes .env template files, replacing {{variables}} with actual values:

# Template (env_templates/api.env)
DATABASE_URL={{postgres.main.connection_string}}
REDIS_URL={{redis.connection_string}}
# After processing
DATABASE_URL=postgresql://kindo:password@db.amazonaws.com:5432/kindo
REDIS_URL=redis://cache.amazonaws.com:6379
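
One way this substitution can be expressed in Terraform is a chain of replace() calls over the template file; a simplified sketch, not the module's actual implementation (local value names assumed):

# Simplified sketch of {{placeholder}} substitution with replace()
locals {
  rendered_api_env = replace(
    replace(
      file("${path.module}/env_templates/api.env"),
      "{{postgres.main.connection_string}}",
      local.postgres_main_connection_string  # assumed local
    ),
    "{{redis.connection_string}}",
    local.redis_connection_string  # assumed local
  )
}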

Template Variables Reference

Infrastructure variables (auto-populated):
| Variable | Description | Example Value |
|----------|-------------|---------------|
| {{storage.bucket_name}} | S3 uploads bucket | mycompany-prod-uploads |
| {{storage.access_key}} | S3 access key | AKIA... |
| {{storage.secret_key}} | S3 secret key | wJal... |
| {{postgres.main.connection_string}} | Main database URL | postgresql://... |
| {{postgres.kindo.connection_string}} | Kindo database URL | postgresql://kindo:... |
| {{redis.connection_string}} | Redis URL | redis://... |
| {{rabbitmq.connection_string}} | RabbitMQ URL | amqps://... |
| {{smtp.host}} | SMTP server | email-smtp.region.amazonaws.com |
| {{syslog.endpoint}} | Audit log endpoint | syslog-nlb.region.elb.amazonaws.com |

Generated secrets (auto-generated):
| Variable | Description | Generated? |
|----------|-------------|------------|
| {{secrets.nextauthsecret}} | Session encryption | Yes |
| {{secrets.kek}} | Key encryption key | Yes |
| {{secrets.uminternalapikey}} | Internal API key | Yes |
| {{secrets.litellmapikey}} | LiteLLM API key | Yes |

External API keys (from tfvars):
| Variable | Maps To | Required? |
|----------|---------|-----------|
| {{secrets.openai_api_key}} | openai_api_key | One LLM required |
| {{secrets.anthropic_api_key}} | anthropic_api_key | One LLM required |
| {{secrets.pinecone_api_key}} | pinecone_api_key | Yes |
| {{secrets.workos_api_key}} | workos_api_key | If using WorkOS |

Customizing Templates

You can customize templates in the env_templates/ directory:

# Edit a template
vi env_templates/api.env

# Add custom variables
CUSTOM_FEATURE=enabled
MY_SERVICE_URL={{custom.my_service_url}}

# Provide the value in terraform.tfvars
custom_env_overrides = {
  api = {
    "custom.my_service_url" = "https://myservice.com"
  }
}

Advanced Configuration

Custom Environment Overrides

Override or add environment variables per service:

custom_env_overrides = {
  api = {
    LOG_LEVEL = "debug"
    ENABLE_PROFILING = "true"
    CUSTOM_TIMEOUT = "30000"
  }
  next = {
    NEXT_PUBLIC_API_URL = "https://api.custom.com"
    NEXT_PUBLIC_FEATURE_X = "enabled"
  }
  litellm = {
    LITELLM_MASTER_KEY = "custom-master-key"
    LITELLM_PROXY_TIMEOUT = "300"
  }
}

Using Different Infrastructure State

Remote state backend:

# In main.tf, modify the data source
data "terraform_remote_state" "infrastructure" {
  backend = "s3"  # Instead of "local"

  config = {
    bucket = "my-terraform-state"
    key    = "infrastructure/terraform.tfstate"
    region = "us-west-2"
  }
}

Manual infrastructure values:

# Provide values directly in terraform.tfvars
infrastructure_outputs = {
  postgres_main_connection_string = "postgresql://user:pass@host:5432/db"
  postgres_kindo_connection_string = "postgresql://kindo:pass@host:5432/kindo"
  redis_connection_string = "redis://cache.region.amazonaws.com:6379"
  rabbitmq_connection_string = "amqps://user:pass@mq.region.amazonaws.com:5671"
  s3_uploads_bucket = "my-uploads-bucket"
  s3_access_key = "AKIA..."
  s3_secret_key = "secret..."
}

Multi-Region Deployment

For multi-region setups:

# Primary region configuration
aws_region = "us-west-2"

# Cross-region resources
cross_region_resources = {
  backup_region = "us-east-1"
  dr_bucket = "mycompany-dr-bucket"
  replica_kms_key = "arn:aws:kms:us-east-1:..."
}

Secret Rotation

Configure automatic rotation:

# Enable rotation for generated secrets
enable_secret_rotation = true
rotation_days = 90

# Exclude specific secrets from rotation
rotation_exclude_list = [
  "nextauth_secret",  # May break active sessions
  "kek"              # Requires re-encryption
]

Deployment Process

1. Pre-Deployment Checklist

  • Infrastructure deployed successfully (kindo-infra module)
  • Infrastructure outputs available
  • All required API keys obtained
  • AWS credentials configured
  • terraform.tfvars completed

2. Validate Configuration

# Check infrastructure connectivity
terraform state show data.terraform_remote_state.infrastructure
# Validate your configuration
terraform fmt
terraform validate
# Preview what will be created
terraform plan

3. Deploy Secrets

# Apply the configuration
terraform apply
# For CI/CD automation
terraform apply -auto-approve

Deployment typically takes:

  • Random secret generation: Instant
  • Template processing: 1-2 seconds per template
  • AWS Secrets Manager creation: 10-20 seconds
  • Total: Under 1 minute

4. Capture Outputs

# Save outputs for application deployment
terraform output -json > secrets-outputs.json
# View specific outputs
terraform output aws_secret_names
terraform output k8s_secret_names

Post-Deployment Verification

1. Verify Secrets in AWS

# List created secrets
aws secretsmanager list-secrets \
  --filters "Key=name,Values=${PROJECT}-${ENVIRONMENT}" \
  --query 'SecretList[*].Name' \
  --region $(terraform output -raw aws_region)
# Expected output:
# [
#   "mycompany-production-api-config",
#   "mycompany-production-next-config",
#   "mycompany-production-litellm-config",
#   ...
# ]

2. Validate Secret Contents

# Retrieve and validate a secret (be careful with sensitive data!)
aws secretsmanager get-secret-value \
  --secret-id "mycompany-production-api-config" \
  --query 'SecretString' \
  --region $(terraform output -raw aws_region) | \
  jq -r . | jq '.DATABASE_URL' | head -c 50
# Should show the beginning of a connection string

3. Check Template Processing

# View processed templates metadata
terraform output config_metadata
# Check which templates were processed
terraform output processed_templates

4. Verify Integration Points

Database connectivity:

# Test database connection from secret
DB_URL=$(aws secretsmanager get-secret-value \
  --secret-id "mycompany-production-api-config" \
  --query 'SecretString' \
  --region $(terraform output -raw aws_region) | \
  jq -r . | jq -r '.DATABASE_URL')
# Test connection (requires psql)
psql "$DB_URL" -c "SELECT 1"

Redis connectivity:

# Get Redis URL from secret
REDIS_URL=$(aws secretsmanager get-secret-value \
  --secret-id "mycompany-production-api-config" \
  --query 'SecretString' \
  --region $(terraform output -raw aws_region) | \
  jq -r . | jq -r '.REDIS_URL')
# Test connection (requires redis-cli)
redis-cli -u "$REDIS_URL" PING

5. Document Secrets Configuration

cat > secrets-summary.md << EOF
# Secrets Configuration Summary

## Deployment Information
- **Date**: $(date)
- **Project**: $(terraform output -raw project_name)
- **Environment**: $(terraform output -raw environment)
- **Region**: $(terraform output -raw aws_region)

## Secrets Created
$(terraform output -json aws_secret_names | jq -r '.[]' | sed 's/^/- /')

## External Services Configured
- **LLM Provider**: $([ -n "$OPENAI_API_KEY" ] && echo "OpenAI" || echo "Check tfvars")
- **Vector DB**: Pinecone
- **Authentication**: $([ -n "$WORKOS_API_KEY" ] && echo "WorkOS" || echo "SSOready")
- **Integrations**: Merge.dev

## Next Steps
1. Deploy applications using kindo-applications module
2. Configure peripherals using kindo-peripheries module
EOF

echo "✅ Secrets summary saved to secrets-summary.md"

Troubleshooting

Common Issues and Solutions

1. Infrastructure State Not Found

Error: “Error: Error reading terraform_remote_state data source”

Solution:

# Option 1: Ensure infrastructure is deployed first
cd ../kindo-infra-aws-example
terraform apply

# Option 2: Use manual infrastructure values
infrastructure_outputs = {
  # Add all required values
}

2. Missing Required API Keys

Error: “Error: Invalid value for variable”

Solution:

# Ensure at least one LLM provider is configured
openai_api_key = "sk-xxxxx"  # Or another provider

# Ensure required services are configured
pinecone_api_key = "pc-xxxxx"
merge_api_key = "xxxxx"

3. Template Variable Not Found

Error: “Template variable ‘{{variable}}’ not found in template_variables”

Solution:

# Add the missing variable to custom_env_overrides
custom_env_overrides = {
  service_name = {
    "variable" = "value"
  }
}

4. AWS Secrets Manager Permissions

Error: “AccessDeniedException: User is not authorized to perform: secretsmanager:CreateSecret”

Solution:

# Ensure your AWS credentials have the necessary permissions
aws iam attach-user-policy \
  --user-name your-user \
  --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite

5. Secret Already Exists

Error: “ResourceExistsException: A resource with the ID already exists”

Solution:

# Option 1: Import existing secret
terraform import aws_secretsmanager_secret.api mycompany-production-api-config
# Option 2: Delete existing secret (careful!)
aws secretsmanager delete-secret \
  --secret-id mycompany-production-api-config \
  --force-delete-without-recovery

Debugging Commands

# Check Terraform state
terraform state list
terraform state show module.kindo_secrets
# Verify template processing
terraform console
> module.kindo_secrets.application_configs["api"]
# Test template variable substitution
echo '{{postgres.main.connection_string}}' | \
  terraform console -var-file=terraform.tfvars
# Enable debug logging
export TF_LOG=DEBUG
terraform apply

Next Steps

After successful secrets deployment:

  1. Deploy Applications: Follow the Application Deployment Guide to deploy Kindo applications
  2. Configure Peripherals: Set up additional services using the Peripherals Configuration Guide
  3. Set Up Monitoring: Ensure secrets are being properly consumed by applications
  4. Document Access: Record who has access to which secrets for compliance

Appendix: Complete terraform.tfvars Reference

# ===============================================================
# Complete Secrets Configuration Reference
# ===============================================================

# Core Configuration (REQUIRED)
project_name = "mycompany"
environment  = "production"
aws_region   = "us-west-2"

# Deployment Settings
deployment_environment = "production"
generate_random_secrets = true

# AI/LLM Providers (at least ONE REQUIRED)
openai_api_key = "sk-xxxxx"
anthropic_api_key = ""
azure_openai_api_key = ""
azure_openai_endpoint = ""
azure_openai_deployment = ""

# Vector Database (REQUIRED)
pinecone_api_key = "pc-xxxxx"
pinecone_environment = "us-east-1-aws"
pinecone_index_name = "kindo-embeddings"

# Authentication (Choose ONE)
workos_api_key = "sk_xxxxx"
workos_client_id = "client_xxxxx"
# OR
ssoready_api_endpoint = ""
ssoready_api_key = ""

# Integration Services (REQUIRED)
merge_api_key = "xxxxx"
merge_webhook_security = "whsec_xxxxx"

# Optional Service Integrations
slack_app_token = ""
segment_write_key = ""
stripe_api_key = ""
twilio_account_sid = ""
twilio_auth_token = ""

# Infrastructure Overrides (if not using standard setup)
infrastructure_outputs = {
  # postgres_main_connection_string = ""
  # redis_connection_string = ""
  # Add other overrides as needed
}

# Custom Environment Variables
custom_env_overrides = {
  api = {
    LOG_LEVEL = "info"
    CUSTOM_VAR = "value"
  }
  next = {
    NEXT_PUBLIC_FEATURE = "enabled"
  }
}

# Secret Rotation (Advanced)
enable_secret_rotation = false
rotation_days = 90
rotation_exclude_list = []

Remember: Start with the minimal configuration and add parameters as needed. The module handles most complexity through smart defaults and infrastructure output consumption.