Helm Chart Installation
Overview
This guide covers the common steps for deploying the CDN Manager Helm chart. These steps apply to all deployment types (single-node, multi-node, and air-gapped) after the Kubernetes cluster is fully operational.
Prerequisites: This guide assumes the Kubernetes cluster is already installed and all system pods are running. If you haven’t installed the cluster yet, refer to:
- Single-Node Installation for lab environments
- Multi-Node Installation for production deployments
- Air-Gapped Deployment for air-gapped environments
Prerequisites
Before proceeding, verify the following:
- Cluster operational: All nodes show `Ready` status
- System pods running: All pods in the `kube-system` and `longhorn-system` namespaces are `Running`
- ISO mounted: Installation ISO is mounted at `/mnt/esb3027`
- Extras ISO mounted (air-gapped only): Extras ISO is mounted at `/mnt/esb3027-extras` and images are loaded on all nodes
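A quick way to confirm these prerequisites from a shell on a cluster node (a sketch; assumes `kubectl` is already configured against the cluster):

```shell
# All nodes should report Ready in the STATUS column
kubectl get nodes

# All system pods should be Running (jobs may show Completed)
kubectl get pods -n kube-system
kubectl get pods -n longhorn-system

# The installation ISO should appear in the mount table
mount | grep /mnt/esb3027
```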
Step 1: Create Configuration File
Create a Helm values file (`~/values.yaml`) with your deployment configuration. At minimum, configure the manager hostname and at least one router:
```yaml
# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.local
    routers:
      - name: default
        address: 127.0.0.1
```
Single-Node Configuration
For single-node deployments, disable Kafka replication:
```yaml
# Single-node: Disable Kafka replication
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
```
Multi-Node Configuration
For multi-node deployments, configure all manager hostnames and Zitadel external domain:
```yaml
# Multi-node configuration
global:
  hosts:
    manager:
      - host: manager.example.com
      - host: manager-backup.example.com
    routers:
      - name: director-1
        address: 192.0.2.1
      - name: director-2
        address: 192.0.2.2
zitadel:
  zitadel:
    ExternalDomain: manager.example.com
```
Important: The `zitadel.zitadel.ExternalDomain` value must match the first entry in `global.hosts.manager`, or authentication will fail due to CORS policy violations.
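One way to sanity-check this constraint before deploying (a sketch; assumes the `yq` YAML processor, which is not part of the product, is installed):

```shell
# Compare the first manager host with the Zitadel external domain
first_host=$(yq '.global.hosts.manager[0].host' ~/values.yaml)
ext_domain=$(yq '.zitadel.zitadel.ExternalDomain' ~/values.yaml)
[ "$first_host" = "$ext_domain" ] && echo "CORS domains match" \
  || echo "Mismatch: $first_host vs $ext_domain"
```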
Configuration Sources
Complete default template: A complete default `values.yaml` file is available on the installation ISO at `/mnt/esb3027/values.yaml`. Copy this file to use as a starting point:
```shell
cp /mnt/esb3027/values.yaml ~/values.yaml
```
Split configuration files: For better organization, split your configuration into multiple files and specify them with repeated `--values` flags:
```shell
helm install acd-manager /mnt/esb3027/charts/acd-manager \
  --values ~/values-base.yaml \
  --values ~/values-tls.yaml \
  --values ~/values-autoscaling.yaml
```
Later files override earlier files, allowing you to maintain a base configuration with environment-specific overrides.
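Before installing, you can render the chart with your merged values to catch configuration mistakes early (a sketch; the chart path matches the ISO layout used elsewhere in this guide):

```shell
# Render templates locally without touching the cluster;
# fails fast on invalid or missing values
helm template acd-manager /mnt/esb3027/charts/acd-manager \
  --values ~/values.yaml > /dev/null && echo "values render cleanly"

# Or perform a server-side dry run against the cluster
helm install acd-manager /mnt/esb3027/charts/acd-manager \
  --values ~/values.yaml --dry-run > /dev/null
```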
Step 2: Load MaxMind GeoIP Databases (Optional)
If you plan to use GeoIP-based routing or validation features, load the MaxMind GeoIP databases. The following databases are used by the manager:
- `GeoIP2-City.mmdb` - The City Database
- `GeoLite2-ASN.mmdb` - The ASN Database
- `GeoIP2-Anonymous-IP.mmdb` - The VPN and Anonymous IP Database
Create the Kubernetes volume using the helper utility:
```shell
/mnt/esb3027/generate-maxmind-volume
```
The utility will prompt for:
- Location of `GeoIP2-City.mmdb`
- Location of `GeoLite2-ASN.mmdb`
- Location of `GeoIP2-Anonymous-IP.mmdb`
- Name of the volume
After running this command, reference the volume in your configuration file:
```yaml
manager:
  maxmindDbVolume: maxmind-db-volume
```
Replace `maxmind-db-volume` with the volume name you specified when running the utility.
Tip: When naming the volume, include a revision number or date (e.g., `maxmind-db-volume-2026-04` or `maxmind-db-volume-v2`). This simplifies future updates: create a new volume with an updated name, update the `values.yaml` to reference the new volume, and delete the old volume after verification.
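The rotation described in the tip might look like this (a sketch; the dated volume names are illustrative, and the cleanup step assumes the utility creates a PersistentVolumeClaim — adjust if it creates a different resource type):

```shell
# 1. Create a new volume, giving it a dated name when prompted
/mnt/esb3027/generate-maxmind-volume

# 2. Point the manager at the new volume in ~/values.yaml
#    (manager.maxmindDbVolume: maxmind-db-volume-2026-04),
#    then roll out the change
helm upgrade acd-manager /mnt/esb3027/charts/acd-manager --values ~/values.yaml

# 3. After verifying the new databases are in use, remove the old volume
kubectl delete pvc maxmind-db-volume-2026-03
```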
Step 3: Configure TLS Certificates (Optional)
For production deployments, configure a valid TLS certificate from a trusted Certificate Authority (CA). A self-signed certificate is deployed by default if no certificate is provided.
Method 1: Create TLS Secret Manually
Create a Kubernetes TLS secret with your certificate and key:
```shell
kubectl create secret tls acd-manager-tls --cert=tls.crt --key=tls.key
```
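Before creating the secret, it is worth confirming that the certificate and key actually belong together, since mismatched pairs are a common cause of TLS failures. A minimal sketch (it generates a throwaway self-signed pair purely for illustration; substitute your real `tls.crt` and `tls.key`):

```shell
# Throwaway self-signed pair for illustration only
openssl req -x509 -newkey rsa:2048 -nodes -keyout tls.key -out tls.crt \
  -days 1 -subj "/CN=manager.example.com" 2>/dev/null

# The pair matches when both public-key digests are identical
crt_digest=$(openssl x509 -noout -modulus -in tls.crt | openssl md5)
key_digest=$(openssl rsa -noout -modulus -in tls.key | openssl md5)
[ "$crt_digest" = "$key_digest" ] && echo "certificate and key match"
```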
Method 2: Helm-Managed Secret
Add the certificate directly to your values.yaml:
```yaml
ingress:
  secrets:
    acd-manager-tls: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
  tls:
    - hosts:
        - manager.example.com
      secretName: acd-manager-tls
```
Configuring All Ingress Controllers
All ingress controllers must be configured with the same certificate secret and hostname:
```yaml
ingress:
  hostname: manager.example.com
  tls: true
  secretName: acd-manager-tls
zitadel:
  ingress:
    tls:
      - hosts:
          - manager.example.com
        secretName: acd-manager-tls
confd:
  ingress:
    hostname: manager.example.com
    tls: true
    secretName: acd-manager-tls
mib-frontend:
  ingress:
    hostname: manager.example.com
    tls: true
    secretName: acd-manager-tls
```
Important: The hostname must match the first entry in `global.hosts.manager` for Zitadel CORS compatibility. The secret name has a maximum length of 53 characters.
Step 4: Deploy the Manager Helm Chart
Deploy the CDN Manager application:
```shell
helm install acd-manager /mnt/esb3027/charts/acd-manager --values ~/values.yaml
```
Real-time output: By default, `helm install` runs silently until completion. To see real-time output during deployment, add the `--debug` flag:
```shell
helm install acd-manager /mnt/esb3027/charts/acd-manager --values ~/values.yaml --debug
```
Monitor deployment:
```shell
kubectl get pods --watch
```
Wait for all pods to show `Running` status before proceeding.
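Instead of watching manually, you can block until the pods become ready (a sketch; the timeout is arbitrary, and completed job pods are excluded because they never reach the Ready condition):

```shell
# Block until every non-job pod in the namespace reports Ready
kubectl wait pod --all --for=condition=Ready \
  --field-selector=status.phase!=Succeeded --timeout=600s
```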
Timeout handling: The default Helm timeout is 5 minutes. If the installation fails due to a rollout timeout, retry with a larger timeout value:
```shell
helm install acd-manager /mnt/esb3027/charts/acd-manager --values ~/values.yaml --timeout 10m
```
Retry failed installation: If a previous installation attempt failed and you receive an error that the release name is already in use, uninstall the previous release before retrying:
```shell
helm uninstall acd-manager
helm install acd-manager /mnt/esb3027/charts/acd-manager --values ~/values.yaml
```
Step 5: Verify Deployment
Verify all application pods are running:
```shell
kubectl get pods
```
Expected Output: Single-Node
```text
NAME                                              READY   STATUS      RESTARTS   AGE
acd-manager-5b98d569d9-abc12                      1/1     Running     0          3m
acd-manager-confd-6fb78548c4-xnrh4                1/1     Running     0          3m
acd-manager-gateway-8bc8446fc-chs26               1/1     Running     0          3m
acd-manager-kafka-controller-0                    2/2     Running     0          3m
acd-manager-metrics-aggregator-76d96c4964-lwdcj   1/1     Running     0          3m
acd-manager-mib-frontend-7bdb69684b-6qxn8         1/1     Running     0          3m
acd-manager-postgresql-0                          1/1     Running     0          3m
acd-manager-redis-master-0                        2/2     Running     0          3m
acd-manager-redis-replicas-0                      2/2     Running     0          3m
acd-manager-selection-input-5fb694b857-qxt67      1/1     Running     0          3m
acd-manager-zitadel-8448b4c4fc-2pkd8              1/1     Running     0          3m
acd-manager-zitadel-init-hh6j7                    0/1     Completed   0          4m
acd-manager-zitadel-setup-nwp8k                   0/2     Completed   0          4m
alertmanager-0                                    1/1     Running     0          3m
grafana-6d948cfdc6-77ggk                          1/1     Running     0          3m
victoria-metrics-agent-dc87df588-tn8wv            1/1     Running     0          3m
victoria-metrics-alert-757c44c58f-kk9lp           1/1     Running     0          3m
victoria-metrics-longterm-server-0                1/1     Running     0          3m
victoria-metrics-server-0                         1/1     Running     0          3m
```
Expected Output: Multi-Node
```text
NAME                                              READY   STATUS      RESTARTS   AGE
acd-cluster-postgresql-1                          1/1     Running     0          11m
acd-cluster-postgresql-2                          1/1     Running     0          11m
acd-cluster-postgresql-3                          1/1     Running     0          10m
acd-manager-5b98d569d9-2pbph                      1/1     Running     0          3m
acd-manager-5b98d569d9-m54f9                      1/1     Running     0          3m
acd-manager-5b98d569d9-pq26f                      1/1     Running     0          3m
acd-manager-confd-6fb78548c4-xnrh4                1/1     Running     0          3m
acd-manager-gateway-8bc8446fc-chs26               1/1     Running     0          3m
acd-manager-gateway-8bc8446fc-wzrml               1/1     Running     0          3m
acd-manager-kafka-controller-0                    2/2     Running     0          3m
acd-manager-kafka-controller-1                    2/2     Running     0          3m
acd-manager-kafka-controller-2                    2/2     Running     0          3m
acd-manager-metrics-aggregator-76d96c4964-lwdcj   1/1     Running     2          3m
acd-manager-mib-frontend-7bdb69684b-6qxn8         1/1     Running     0          3m
acd-manager-mib-frontend-7bdb69684b-pkjrw         1/1     Running     0          3m
acd-manager-redis-master-0                        2/2     Running     0          3m
acd-manager-redis-replicas-0                      2/2     Running     0          3m
acd-manager-selection-input-5fb694b857-qxt67      1/1     Running     2          3m
acd-manager-zitadel-8448b4c4fc-2pkd8              1/1     Running     0          3m
acd-manager-zitadel-8448b4c4fc-vchp9              1/1     Running     0          3m
acd-manager-zitadel-init-hh6j7                    0/1     Completed   0          4m
acd-manager-zitadel-setup-nwp8k                   0/2     Completed   0          4m
alertmanager-0                                    1/1     Running     0          3m
grafana-6d948cfdc6-77ggk                          1/1     Running     0          3m
telegraf-54779f5f46-2jfj5                         1/1     Running     0          3m
victoria-metrics-agent-dc87df588-tn8wv            1/1     Running     0          3m
victoria-metrics-alert-757c44c58f-kk9lp           1/1     Running     0          3m
victoria-metrics-longterm-server-0                1/1     Running     0          3m
victoria-metrics-server-0                         1/1     Running     0          3m
```
Pod Distribution Verification
Verify pods are distributed across nodes:
```shell
kubectl get pods -o wide
```
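To summarize how many pods landed on each node, you can aggregate the NODE column of the wide output (a sketch; the column position assumes the default `-o wide` layout):

```shell
# Count pods per node; NODE is the 7th column of 'kubectl get pods -o wide'
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c
```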
Expected Behavior
- Init pods (such as `zitadel-init` and `zitadel-setup`) will show `Completed` status after successful initialization. This is expected behavior.
- Multi-node deployments: Some pods may enter a `CrashLoopBackOff` state during initial deployment, depending on the timing of other containers starting up. This is expected behavior, as some services wait for dependencies (such as databases or Kafka) to become available. The deployment should stabilize automatically after a few minutes.
- Restart counts: Some pods may show non-zero restart counts as they wait for dependencies to become available. This is normal during initial deployment.
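If a pod does not stabilize after several minutes, these commands help determine whether it is still waiting on a dependency or genuinely failing (a sketch; the pod name is a placeholder taken from the example output above):

```shell
# Recent events and state transitions for a problematic pod
kubectl describe pod acd-manager-selection-input-5fb694b857-qxt67

# Logs from the previous (crashed) container instance
kubectl logs acd-manager-selection-input-5fb694b857-qxt67 --previous
```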
Next Steps
After successful deployment:
- Next Steps Guide - Post-installation configuration
- Getting Started Guide - Accessing the system
- Configuration Guide - System configuration
- Operations Guide - Day-to-day operations