This guide provides detailed instructions for configuring supporting infrastructure components using the kindo-peripheries module as part of the base stack deployment.
Overview
The kindo-peripheries module is deployed as part of the base stack and installs essential supporting infrastructure:
External Secrets Operator (ESO): Syncs secrets from AWS Secrets Manager to Kubernetes
Unleash: Feature flag management system
Unleash Edge: Edge proxy for feature flags
ALB Ingress Controller: AWS Load Balancer controller for routing traffic
Certificate Manager: Automatic SSL certificate management
External DNS: Automatic DNS record management (optional)
Presidio: Data anonymization service
OpenTelemetry Collector: Observability data collection (optional)
Integration in Base Stack
The peripheries module is deployed after infrastructure and secrets modules, using their outputs for configuration.
Pre-Deployment Setup
1. Prerequisites
Ensure you have the following in your base stack:
- Infrastructure module deployed
- Secrets module configured
- Provider configuration added to provider.tf
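The provider configuration referenced above typically wires the kubernetes and helm providers to the EKS cluster created by the infrastructure module. The sketch below is illustrative only; the output names (`cluster_name`, `cluster_endpoint`, `cluster_ca_certificate`) are assumptions and must match your infrastructure module's actual outputs:

```hcl
# Hypothetical provider.tf sketch -- output names are assumptions,
# adjust them to match your infrastructure module.
data "aws_eks_cluster_auth" "this" {
  name = module.kindo_infra.cluster_name
}

provider "kubernetes" {
  host                   = module.kindo_infra.cluster_endpoint
  cluster_ca_certificate = base64decode(module.kindo_infra.cluster_ca_certificate)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = module.kindo_infra.cluster_endpoint
    cluster_ca_certificate = base64decode(module.kindo_infra.cluster_ca_certificate)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}
```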
2. Values Files Setup
The peripheries module uses Helm values files for configuration. Copy these from the example:
# In your kindo-base directory
# Copy values files from example
cp -r ../../peripheries-values ./values
cp ../../feature_flags.json .
# Your directory structure should now include:
# kindo-base/
# ├── values/
# │   ├── alb-ingress.yaml
# │   ├── external-secrets-operator.yaml
# │   ├── presidio.yaml
# │   ├── unleash.yaml
# │   └── unleash-edge.yaml
# ├── feature_flags.json
# ├── main.tf           # Contains all modules
# ├── secrets.tf        # Secret generation and configuration
# ├── provider.tf       # Provider configuration
# ├── variables.tf
# ├── outputs.tf
# └── terraform.tfvars
Important: The values files contain templates that reference variables from your infrastructure and secrets modules. Ensure these files are properly copied before proceeding.
Peripheries Module Configuration
1. Configure Peripheries Module
Add the following to your main.tf (or create a separate peripheries.tf file) after the infrastructure and secrets modules:
# --- Deploy Peripheries --- #
module "kindo_peripheries" {
  source = "../../modules/kindo-peripheries" # Adjust path as needed

  # Explicitly pass the providers
  providers = {
    kubernetes = kubernetes
    helm       = helm
  }

  # Pass registry credentials from shared.tfvars
  registry_username = var.registry_username
  registry_password = var.registry_password

  # Pass Cluster Info
  aws_region   = local.region
  cluster_name = local.cluster_name

  # Optional: OTel Collector Configuration
  enable_otel_collector_cr     = var.enable_otel_collector_cr
  otel_collector_iam_role_arn  = module.kindo_infra.otel_collector_iam_role_arn
  otel_collector_config_region = local.region

  # Optional: ExternalDNS Configuration
  enable_external_dns        = var.enable_external_dns
  external_dns_iam_role_arn  = module.kindo_infra.external_dns_iam_role_arn
  external_dns_domain_filter = var.base_domain
  external_dns_txt_owner_id  = coalesce(var.external_dns_txt_owner_id, local.cluster_name)

  # Peripheries Configuration Map
  peripheries_config = {
    # ALB Ingress Controller
    alb_ingress = {
      install            = true
      helm_chart_version = "1.7.1"
      namespace          = "kube-system"
      create_namespace   = false
      values_content = templatefile("${path.module}/values/alb-ingress.yaml", {
        cluster_name        = local.cluster_name
        region              = local.region
        controller_role_arn = module.kindo_infra.alb_controller_role_arn
      })
    }

    # Certificate Manager
    cert_manager = {
      install            = true
      helm_chart_version = "v1.14.5"
      namespace          = "cert-manager"
      create_namespace   = true
      dynamic_helm_sets = {
        "installCRDs" = "true"
      }
    }

    # External Secrets Operator
    external_secrets_operator = {
      install            = true
      helm_chart_version = "0.9.9"
      namespace          = "external-secrets"
      create_namespace   = true
      values_content = templatefile("${path.module}/values/external-secrets-operator.yaml", {
        role_arn = module.kindo_infra.external_secrets_role_arn != null ? module.kindo_infra.external_secrets_role_arn : ""
      })
      secret_stores = {
        "aws-secrets-manager" = {
          provider = "aws"
          config = {
            service                   = "SecretsManager"
            region                    = local.region
            service_account_name      = "external-secrets"
            service_account_namespace = "external-secrets"
          }
        }
      }
    }

    # Unleash Feature Flags
    unleash = {
      install            = true
      helm_chart_version = "5.4.3"
      namespace          = "unleash"
      create_namespace   = true
      values_content = templatefile("${path.module}/values/unleash.yaml", {
        admin_password    = local.unleash_admin_password
        admin_token       = local.unleash_admin_token
        client_token      = local.unleash_client_token
        frontend_token    = local.unleash_frontend_token
        domain_name       = local.domain_name
        postgres_host     = local.unleash_postgres.host
        postgres_password = local.unleash_postgres.password
        postgres_ssl      = local.unleash_postgres.ssl

        # WARNING: The following import variables should typically only be set during the *initial* deployment.
        # Setting them on subsequent updates might cause Unleash to re-attempt the import process on every restart.
        # Any subsequent updates should be performed via the import feature in the Unleash UI.
        import_flags_json_content = file("${path.module}/feature_flags.json")
        import_project            = "default"
        import_environment        = "development"
      })
      dynamic_helm_sets = {
        "ingress.hosts[0].host" = "unleash.${local.domain_name}"
      }
    }

    # Unleash Edge
    unleash_edge = {
      install            = true
      helm_chart_version = "3.0.0"
      namespace          = "unleash"
      create_namespace   = false # Uses same namespace as Unleash
      values_content = templatefile("${path.module}/values/unleash-edge.yaml", {
        unleash_tokens = local.unleash_edge_tokens
        domain_name    = local.domain_name
      })
    }

    # Presidio Data Anonymization
    presidio = {
      install            = true
      helm_chart_version = "2.1.95"
      namespace          = "presidio"
      create_namespace   = true
      values_content     = file("${path.module}/values/presidio.yaml")
    }
  }

  depends_on = [
    module.kindo_infra,
    module.kindo_secrets
  ]
}
2. Add Required Variables
Ensure these variables are in your existing variables.tf (most should already be present from the infrastructure deployment):
# OpenTelemetry Configuration
variable "enable_otel_collector_cr" {
  description = "Whether to deploy the OpenTelemetryCollector CR"
  type        = bool
  default     = false
}

# ExternalDNS Configuration
variable "external_dns_txt_owner_id" {
  description = "TXT record owner ID for ExternalDNS (defaults to cluster name)"
  type        = string
  default     = ""
}
3. Add Outputs for Peripheries
Add these outputs to your outputs.tf to track deployed components:
output "peripheries_components" {
  description = "List of deployed periphery components"
  value = {
    alb_ingress_installed               = true
    cert_manager_installed              = true
    external_secrets_operator_installed = true
    unleash_installed                   = true
    unleash_edge_installed              = true
    presidio_installed                  = true
    external_dns_enabled                = var.enable_external_dns
    otel_collector_enabled              = var.enable_otel_collector_cr
  }
}

output "unleash_credentials" {
  description = "Unleash admin credentials"
  value = {
    username = "admin"
    password = local.unleash_admin_password
    url      = "https://unleash.${local.domain_name}"
  }
  sensitive = true
}

output "unleash_tokens" {
  description = "Unleash tokens for different applications"
  value = {
    admin_token    = local.unleash_admin_token
    client_token   = local.unleash_client_token
    frontend_token = local.unleash_frontend_token
  }
  sensitive = true
}
Deployment Process
Since the peripheries module is part of the base stack, it will be deployed together with infrastructure and secrets:
1. Ensure Prerequisites
Before deployment:
- Infrastructure module is deployed (EKS cluster is running)
- Secrets module is configured (application configs created)
- Values files are copied to your directory
- Provider configuration is updated
2. Deploy the Complete Base Stack
# Initialize if needed (after adding providers)
terraform init -upgrade
# Deploy everything together
terraform apply -var-file="../shared.tfvars" -var-file="terraform.tfvars"
# Or if infrastructure and secrets are already deployed
terraform apply -var-file="../shared.tfvars" -var-file="terraform.tfvars" -target="module.kindo_peripheries"
Note: The first deployment may take 10-15 minutes as it installs all Helm charts and waits for pods to be ready.
Peripheries Values Configuration
External Secrets Operator
The values/external-secrets-operator.yaml template configures ESO with IAM role:
# Template receives role_arn variable
installCRDs: true
serviceAccount:
  create: true
  name: external-secrets
  annotations:
    eks.amazonaws.com/role-arn: "${role_arn}"
webhook:
  port: 9443
certController:
  create: true
The module also creates ClusterSecretStore resources for each configured secret store.
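For reference, the ClusterSecretStore rendered from the `aws-secrets-manager` entry in the module configuration would look roughly like the manifest below. This is an illustrative sketch, not the module's exact output; the region value is substituted at render time and the resource name mirrors the map key:

```yaml
# Illustrative rendered manifest -- actual output depends on the module's template.
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: aws-secrets-manager
spec:
  provider:
    aws:
      service: SecretsManager
      region: us-east-1  # substituted from local.region
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets
```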
ALB Ingress Controller
The values/alb-ingress.yaml template receives cluster configuration:
clusterName: ${cluster_name}
serviceAccount:
  create: true
  name: aws-load-balancer-controller
  annotations:
    eks.amazonaws.com/role-arn: "${controller_role_arn}"

# AWS Configuration
region: ${region}
defaultTargetType: ip

# Enable WAFv2 support
enableWAF: true
enableWAFv2: true

# Create default IngressClass
createIngressClassResource: true
ingressClass: alb
Unleash Configuration
The values/unleash.yaml template receives database and authentication configuration:
# Using external PostgreSQL
postgresql:
  enabled: false

database:
  host: "${postgres_host}"
  port: 5432
  database: unleash
  user: unleash_user
  password: "${postgres_password}"
  ssl: ${postgres_ssl}

# Authentication tokens
unleash:
  adminAuthentication:
    createAdminUser: true
    adminPassword: "${admin_password}"
  apiTokens:
    - secret: "${admin_token}"
      type: "admin"
      username: "admin"
    - secret: "${client_token}"
      type: "client"
      username: "default"
    - secret: "${frontend_token}"
      type: "frontend"
      username: "frontend"

# Import feature flags on first deployment
import:
  enabled: true
  data: |
    ${import_flags_json_content}
  project: "${import_project}"
  environment: "${import_environment}"

ingress:
  enabled: true
  className: alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: kindo
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
  hosts:
    - host: unleash.${domain_name}
      paths:
        - path: /
          pathType: Prefix
Unleash Edge Configuration
The values/unleash-edge.yaml template configures the edge proxy:
app:
  environment:
    TOKENS: "${unleash_tokens}"
    UPSTREAM_URL: "http://unleash:4242/api"

ingress:
  enabled: true
  className: alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/group.name: kindo
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
  hosts:
    - host: unleash-edge.${domain_name}
      paths:
        - path: /
          pathType: Prefix
Feature Flags Initialization
The feature_flags.json file contains initial feature flags that are imported on first deployment. Example structure:
{
  "version": 1,
  "features": [
    {
      "name": "new-feature",
      "description": "Description of the feature",
      "enabled": false,
      "strategies": [
        {
          "name": "default"
        }
      ]
    }
  ]
}
Verification
1. Check Deployments
# List all periphery namespaces
kubectl get namespaces | grep -E "(cert-manager|external-secrets|unleash|presidio)"
# Check pod status
kubectl get pods -n external-secrets
kubectl get pods -n unleash
kubectl get pods -n cert-manager
kubectl get pods -n presidio
kubectl get pods -n kube-system | grep aws-load-balancer-controller
2. Verify External Secrets Operator
# Check if CRDs are installed
kubectl get crd | grep external-secrets
# Check ClusterSecretStore
kubectl get clustersecretstore
kubectl describe clustersecretstore aws-secrets-manager
# Check if secrets are being synced
kubectl get externalsecrets -A
# Verify secret creation
kubectl get secrets -A | grep app-config
3. Verify Ingress Controller
# Check ingress controller
kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=20
# List ingresses and their ALB addresses
kubectl get ingress -A
# Check ALBs in AWS
aws elbv2 describe-load-balancers \
  --region $(terraform output -raw aws_region) \
  --profile $(terraform output -raw aws_profile) \
  --query "LoadBalancers[?contains(LoadBalancerName, 'k8s-')].{Name:LoadBalancerName,DNS:DNSName,State:State.Code}"
4. Access Services
# Get Unleash URL
kubectl get ingress -n unleash unleash -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Get Unleash Edge URL
kubectl get ingress -n unleash unleash-edge -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Get credentials from Terraform outputs
terraform output -json unleash_credentials
# Access URLs (after DNS propagation)
echo "Unleash URL: https://unleash.$(terraform output -raw base_domain)"
echo "Unleash Edge URL: https://unleash-edge.$(terraform output -raw base_domain)"
# Or get the password directly (-json is required for map outputs; -raw only works on strings)
terraform output -json unleash_credentials | jq -r '.password'
Troubleshooting
Common Issues
Kubernetes Provider Errors
Error: Get "http://localhost/api/v1/namespaces/default": dial tcp [::1]:80: connect: connection refused
Solution: Ensure the infrastructure module is deployed first and providers are configured correctly.
External Secrets Not Syncing
SecretStore is not ready
Solution: Check IAM role trust policy and permissions:
kubectl describe clustersecretstore aws-secrets-manager
kubectl logs -n external-secrets -l app.kubernetes.io/name=external-secrets
ALB Not Creating
Failed to build LoadBalancer configuration
Solution: Check subnets are tagged correctly and controller has permissions:
kubectl logs -n kube-system deployment/aws-load-balancer-controller -f
aws ec2 describe-subnets --subnet-ids $(terraform output -json public_subnet_ids | jq -r '.[]')
Unleash Connection Issues
Error: connect ECONNREFUSED
Solution: Verify database is accessible from EKS nodes:
kubectl run -it --rm debug --image=postgres:15 --restart=Never -- \
  psql -h $(terraform output -raw postgres_endpoint | cut -d: -f1) -U unleash_user -d unleash
Validation Commands
# Check Helm releases
helm list -A | grep -E "(cert-manager|external-secrets|unleash|presidio|aws-load-balancer)"
# Check resource health
kubectl get pods -A | grep -v "Running\|Completed" | grep -E "(cert-manager|external-secrets|unleash|presidio)"
# Verify ingress annotations
kubectl get ingress -n unleash unleash -o yaml | grep -A5 annotations
# Check External DNS records (if enabled)
kubectl logs -n external-dns deployment/external-dns --tail=20 | grep "Applying changes"
# Verify certificates (if using cert-manager)
kubectl get certificates -A
kubectl get certificaterequests -A
Best Practices
1. Module Dependencies
Always deploy infrastructure module first
Ensure secrets module completes before peripheries
Use explicit depends_on to enforce ordering
Pass providers explicitly to avoid conflicts
2. Values File Management
Use templatefile() for dynamic values
Keep sensitive data in Terraform variables
Version control values templates, not rendered files
Use consistent variable naming across templates
3. Security Configuration
All components use IRSA (IAM Roles for Service Accounts)
External Secrets Operator uses cluster-wide role
ALB controller requires specific subnet tags
Unleash tokens are auto-generated and stored securely
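As an illustration of how such tokens can be auto-generated (the actual generation lives in the secrets module; the resource and local names below are hypothetical), a typical pattern uses random_password:

```hcl
# Hypothetical sketch of token generation in the secrets module.
resource "random_password" "unleash_admin_token" {
  length  = 48
  special = false
}

locals {
  # Unleash API tokens conventionally follow "<project>:<environment>.<secret>";
  # admin tokens use wildcards for both parts.
  unleash_admin_token = "*:*.${random_password.unleash_admin_token.result}"
}
```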
4. High Availability
Configure multiple replicas in values files
Use pod disruption budgets for critical services
Enable RDS Multi-AZ (configured in infrastructure)
Set resource requests and limits appropriately
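As an example of the pod disruption budget recommendation above, a PodDisruptionBudget for Unleash could be applied alongside the values files. This is a sketch; the label selector must match the labels the chart actually sets on its pods:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: unleash
  namespace: unleash
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: unleash   # verify against the chart's pod labels
```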
Next Steps
The peripheries module is deployed as part of the base stack. After successful deployment:
Continue with Applications Deployment in a separate stack
Verify all ingresses have ALB addresses assigned
Wait for DNS propagation if using External DNS
Access Unleash UI to verify feature flags were imported
Test External Secrets synchronization for applications
Summary
The peripheries module provides the foundation for:
- Secret Management: External Secrets Operator syncs from AWS Secrets Manager
- Traffic Routing: ALB Ingress Controller manages load balancers
- Feature Flags: Unleash provides dynamic configuration
- SSL Certificates: Cert-manager handles certificate lifecycle
- Data Privacy: Presidio enables PII detection and anonymization
All these components work together to support the Kindo applications deployed in the next step.