Installation Guide
Step-by-step installation and upgrade procedures
Overview
This guide provides detailed instructions for installing the AgileTV CDN Manager (ESB3027) in various deployment scenarios. The installation process varies depending on the target environment and desired configuration.
Estimated Installation Time:
| Deployment Type | Time |
|---|---|
| Single-Node (Lab) | ~15 minutes |
| Multi-Node (3 servers) | ~30 minutes |
Actual installation time may vary depending on hardware performance, network speed, and whether air-gapped procedures are required.
Note: These estimates assume the operating system is already installed on all nodes. OS installation is outside the scope of this guide.
Installation Types
| Installation Type | Description | Use Case |
|---|---|---|
| Single-Node (Lab) | Minimal installation on a single host | Acceptance testing, demonstrations, development |
| Multi-Node (Production) | Full high-availability cluster with 3+ server nodes | Production deployments |
Installation Process Summary
The installation follows a sequential process:
- Prepare the host system - Verify requirements and mount the installation ISO
- Install the Kubernetes cluster - Deploy K3s, Longhorn storage, and PostgreSQL
- Join additional nodes (production only) - Expand the cluster for HA or capacity
- Deploy the Manager application - Install the CDN Manager Helm chart
- Post-installation configuration - Configure authentication, networking, and users
Prerequisites
Before beginning installation, ensure the following requirements are met:
- Hardware: Nodes meeting the System Requirements including CPU, memory, and disk specifications
- Operating System: RHEL 9 or a compatible clone; air-gapped deployments require the OS ISO mounted on all nodes
- Network: Proper firewall configuration between nodes (port requirements, firewall configuration)
- Software: Installation ISO obtained from AgileTV; air-gapped deployments also require the Extras ISO
- Kernel Tuning: For production deployments, apply recommended sysctl settings (Performance Tuning Guide)
We recommend using the Installation Checklist to track your progress through the installation process.
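The network prerequisite can be spot-checked from each node before installing. The following sketch uses bash's /dev/tcp to probe a TCP port; check_port is a hypothetical helper, and port 6443 (the Kubernetes API port used by the join commands later in this guide) is shown only as an example:

```shell
# Probe whether a TCP port on a peer node accepts connections.
check_port() {  # usage: check_port <host> <port>
  if timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 NOT reachable"
  fi
}

# Example, from a joining node against the primary server:
#   check_port 10.0.0.10 6443
```

Run it for every port listed in the Networking Guide; a "NOT reachable" result usually indicates a firewall rule still to be opened.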
Getting Help
If you encounter issues during installation, consult the Troubleshooting Guide.
1 - Installation Checklist
Step-by-step checklist to track installation progress
Overview
Use this checklist to track your installation progress. Print this page or keep it open during your installation to ensure all steps are completed correctly.
Pre-Installation
Hardware and Software
Air-Gapped Deployments
Special Requirements
Cluster Installation
Single-Node Deployment
Follow the Single-Node Installation Guide.
Multi-Node Deployment
Follow the Multi-Node Installation Guide.
Primary Server Node
Additional Server Nodes
Agent Nodes (Optional)
Cluster Verification
Application Deployment
Post-Installation
Initial Access
Security Configuration
Monitoring and Operations
Next Steps
Troubleshooting
If you encounter issues during installation:
- Check pod status: kubectl describe pod <pod-name>
- Review logs: kubectl logs <pod-name>
- Check cluster events: kubectl get events --sort-by='.lastTimestamp'
- Review the Troubleshooting Guide for common issues
2 - Single-Node Installation
Lab and acceptance testing deployment
Warning: Single-node deployments are for lab environments, acceptance testing, and demonstrations only. This configuration is not suitable for production workloads. For production deployments, see the Multi-Node Installation Guide, which requires a minimum of 3 server nodes for high availability.
Air-Gapped Deployment? This guide assumes internet connectivity. For air-gapped deployments, see the Air-Gapped Deployment Guide for additional requirements and procedures.
Overview
This guide describes the installation of the AgileTV CDN Manager on a single node. This configuration is intended for lab environments, acceptance testing, and demonstrations only. It is not suitable for production workloads.
Prerequisites
Hardware Requirements
Refer to the System Requirements Guide for hardware specifications. Single-node deployments require the “Single-Node (Lab)” configuration.
Operating System
Refer to the System Requirements Guide for supported operating systems.
Software Access
- Installation ISO: esb3027-acd-manager-X.Y.Z.iso
- Extras ISO (air-gapped only): esb3027-acd-manager-extras-X.Y.Z.iso
Network Configuration
Ensure that required firewall ports are configured before installation. See the Networking Guide for complete firewall configuration requirements.
SELinux
If SELinux is to be used, it must be set to “Enforcing” mode before running the installer script. The installer will configure appropriate SELinux policies automatically. SELinux cannot be enabled after installation.
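Before running the installer, you can confirm the current SELinux mode. This is a minimal sketch assuming the getenforce utility from RHEL's SELinux userland is available:

```shell
# Report the current SELinux mode; falls back to "Disabled" when the
# getenforce utility is not present on the host.
mode=$(getenforce 2>/dev/null || echo "Disabled")
echo "SELinux mode: $mode"
```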
Installation Steps
Step 1: Mount the ISO
Create a mount point and mount the installation ISO:
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Replace X.Y.Z with the actual version number.
Step 2: Install the Base Cluster
Run the installer to set up the K3s Kubernetes cluster:
/mnt/esb3027/install
This installs:
- K3s Kubernetes distribution
- Longhorn distributed storage
- Cloudnative PG operator for PostgreSQL
- Base system dependencies
The installer will configure the node as both a server and agent node.
Step 3: Verify Cluster Status
After the installer completes, verify that all components are operational before proceeding. This verification serves as an important checkpoint to confirm the installation is progressing correctly.
1. Verify the node is ready:
kubectl get nodes
Expected output:
NAME STATUS ROLES AGE VERSION
k3s-server Ready control-plane,etcd,master 2m v1.33.4+k3s1
2. Verify system pods in both namespaces are running:
# Check kube-system namespace (Kubernetes core components)
kubectl get pods -n kube-system
# Check longhorn-system namespace (distributed storage)
kubectl get pods -n longhorn-system
All pods should show Running status. If any pods are still Pending or ContainerCreating, wait until they are ready. Proceeding with incomplete system pods can cause subsequent steps to fail in unpredictable ways.
This verification confirms:
- K3s cluster is operational
- Longhorn distributed storage is running
- Cloudnative PG operator is deployed
- All core components are healthy before continuing
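The wait described above can be scripted. This is a sketch under the assumption that kubectl and the kubeconfig written by the installer are available; not_ready and wait_for_ns are hypothetical helper names:

```shell
# Filter `kubectl get pods --no-headers` output (NAME READY STATUS ...) down
# to pods that are neither Running nor Completed.
not_ready() {
  awk '$3 != "Running" && $3 != "Completed"'
}

# Block until every pod in the given namespace is Running or Completed.
wait_for_ns() {  # usage: wait_for_ns kube-system
  while [ -n "$(kubectl get pods -n "$1" --no-headers 2>/dev/null | not_ready)" ]; do
    echo "Waiting for pods in $1 ..."
    sleep 10
  done
}

# On the node:
#   wait_for_ns kube-system
#   wait_for_ns longhorn-system
```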
Step 4: Air-Gapped Deployments (If Applicable)
If deploying in an air-gapped environment, load container images from the extras ISO:
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
/mnt/esb3027-extras/load-images
Step 5: Create Configuration File
Create a Helm values file for your deployment. At minimum, configure the manager hostname and at least one router:
# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.local
  routers:
    - name: default
      address: 127.0.0.1
The routers configuration specifies CDN Director instances. For lab deployments, a placeholder entry is sufficient. For production, specify the actual Director hostnames or IP addresses.
For single-node deployments, you must also disable Kafka replication:
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
Step 6: Load MaxMind GeoIP Databases (Optional)
If you plan to use GeoIP-based routing or validation features, load the MaxMind GeoIP databases. The following databases are used by the manager:
- GeoIP2-City.mmdb - The City Database
- GeoLite2-ASN.mmdb - The ASN Database
- GeoIP2-Anonymous-IP.mmdb - The VPN and Anonymous IP Database
A helper utility is provided on the ISO to create the Kubernetes volume:
/mnt/esb3027/generate-maxmind-volume
The utility will prompt for the locations of the three database files and the name of the volume. After running this command, reference the volume in your configuration file:
manager:
  maxmindDbVolume: maxmind-db-volume
Replace maxmind-db-volume with the volume name you specified when running the utility.
Tip: When naming the volume, include a revision number or date (e.g., maxmind-db-volume-2026-04 or maxmind-db-volume-v2). This simplifies future updates: create a new volume with an updated name, update the values.yaml to reference the new volume, and delete the old volume after verification.
Step 7: Deploy the Manager Helm Chart
Deploy the CDN Manager application:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
Monitor the deployment progress:
kubectl get pods --watch
Wait for all pods to show Running status before proceeding.
Note: The default Helm timeout is 5 minutes. If the installation fails due to a rollout timeout, retry with a larger timeout value:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
If a previous installation attempt failed and you receive an error that the release name is already in use, uninstall the previous release before retrying:
helm uninstall acd-manager
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
Step 8: Verify Deployment
Verify all application pods are running:
kubectl get pods
Expected output for a single-node deployment (pod names will vary):
NAME READY STATUS RESTARTS AGE
acd-manager-5b98d569d9-abc12 1/1 Running 0 3m
acd-manager-confd-6fb78548c4-xnrh4 1/1 Running 0 3m
acd-manager-gateway-8bc8446fc-chs26 1/1 Running 0 3m
acd-manager-kafka-controller-0 2/2 Running 0 3m
acd-manager-metrics-aggregator-76d96c4964-lwdcj 1/1 Running 0 3m
acd-manager-mib-frontend-7bdb69684b-6qxn8 1/1 Running 0 3m
acd-manager-postgresql-0 1/1 Running 0 3m
acd-manager-redis-master-0 2/2 Running 0 3m
acd-manager-redis-replicas-0 2/2 Running 0 3m
acd-manager-selection-input-5fb694b857-qxt67 1/1 Running 0 3m
acd-manager-zitadel-8448b4c4fc-2pkd8 1/1 Running 0 3m
acd-manager-zitadel-init-hh6j7 0/1 Completed 0 4m
acd-manager-zitadel-setup-nwp8k 0/2 Completed 0 4m
alertmanager-0 1/1 Running 0 3m
grafana-6d948cfdc6-77ggk 1/1 Running 0 3m
victoria-metrics-agent-dc87df588-tn8wv 1/1 Running 0 3m
victoria-metrics-alert-757c44c58f-kk9lp 1/1 Running 0 3m
victoria-metrics-longterm-server-0 1/1 Running 0 3m
victoria-metrics-server-0 1/1 Running 0 3m
Note: Init pods (such as zitadel-init and zitadel-setup) will show Completed status after successful initialization. This is expected behavior.
Post-Installation
After installation completes, proceed to the Next Steps guide for:
- Initial user configuration
- Accessing the web interfaces
- Configuring authentication
- Setting up monitoring
Accessing the System
Refer to the Accessing the System section in the Getting Started guide for service URLs and default credentials.
Note: A self-signed SSL certificate is deployed by default. You will need to accept the certificate warning in your browser.
Troubleshooting
If pods fail to start:
- Check pod status: kubectl describe pod <pod-name>
- Review logs: kubectl logs <pod-name>
- Verify resources: kubectl top pods
See the Troubleshooting Guide for additional assistance.
Next Steps
After successful installation:
- Next Steps Guide - Post-installation configuration
- Configuration Guide - System configuration
- Operations Guide - Day-to-day operations
Appendix: Example Configuration
The following values.yaml provides a minimal working configuration for lab deployments:
# Minimal lab configuration for single-node deployment
global:
  hosts:
    manager:
      - host: manager.local
  routers:
    - name: default
      address: 127.0.0.1

# Single-node: Disable Kafka replication
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
Customization notes:
- Replace manager.local with your desired hostname
- The routers entry specifies CDN Director instances. The placeholder 127.0.0.1 may be used if a Director instance isn’t available; specify actual Director hostnames for production testing
- For air-gapped deployments, see Step 4: Air-Gapped Deployments
3 - Multi-Node Installation
Production high-availability deployment
Overview
This guide describes the installation of the AgileTV CDN Manager across multiple nodes for production deployments. This configuration provides high availability and horizontal scaling capabilities.
Air-Gapped Deployment? This guide assumes internet connectivity. For air-gapped deployments, see the Air-Gapped Deployment Guide for additional requirements and procedures.
Prerequisites
Hardware Requirements
Refer to the System Requirements Guide for hardware specifications. Production deployments require:
- Minimum 3 Server nodes (Control Plane Only or Combined role)
- Optional Agent nodes for additional workload capacity
Operating System
Refer to the System Requirements Guide for supported operating systems.
Software Access
- Installation ISO: esb3027-acd-manager-X.Y.Z.iso (for each node)
- Extras ISO (air-gapped only): esb3027-acd-manager-extras-X.Y.Z.iso
Network Configuration
Ensure that required firewall ports are configured between all nodes before installation. See the Networking Guide for complete firewall configuration requirements.
Multiple Network Interfaces
If your nodes have multiple network interfaces and you want to use a separate interface for cluster traffic (not the default route interface), configure the INSTALL_K3S_EXEC environment variable before installing the cluster or joining nodes.
For example, if bond0 has the default route but you want cluster traffic on bond1:
# For server nodes
export INSTALL_K3S_EXEC="server --node-ip 10.0.0.10 --flannel-iface=bond1"
# For agent nodes
export INSTALL_K3S_EXEC="agent --node-ip 10.0.0.20 --flannel-iface=bond1"
Where:
- Mode: Use server for the primary node establishing the cluster or for additional server nodes; use agent for agent nodes joining the cluster
- --node-ip: The IP address of the interface to use for cluster traffic
- --flannel-iface: The network interface name for Flannel VXLAN overlay traffic
Set this variable on each node before running the install or join scripts.
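To keep --node-ip consistent with --flannel-iface, the address can be read from the interface itself. A sketch (iface_ip is a hypothetical helper that parses iproute2 output):

```shell
# Extract the IPv4 address for a named interface from `ip -4 -o addr show`
# output, whose fields are: index, interface, "inet", address/prefix, ...
iface_ip() {  # usage: ip -4 -o addr show | iface_ip bond1
  awk -v ifc="$1" '$2 == ifc {split($4, a, "/"); print a[1]; exit}'
}

# On a server node:
#   NODE_IP=$(ip -4 -o addr show | iface_ip bond1)
#   export INSTALL_K3S_EXEC="server --node-ip ${NODE_IP} --flannel-iface=bond1"
```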
SELinux
If SELinux is to be used, it must be set to “Enforcing” mode before running the installer script. The installer will configure appropriate SELinux policies automatically. SELinux cannot be enabled after installation.
Installation Steps
Step 1: Prepare the Primary Server Node
Mount the installation ISO on the primary server node:
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Replace X.Y.Z with the actual version number.
Step 2: Install the Base Cluster on Primary Server
If your node has multiple network interfaces and you need to specify a separate interface for cluster traffic, set the INSTALL_K3S_EXEC environment variable before running the installer (see Multiple Network Interfaces):
export INSTALL_K3S_EXEC="server --node-ip <node-ip> --flannel-iface=<interface>"
Run the installer to set up the K3s Kubernetes cluster:
/mnt/esb3027/install
This installs:
- K3s Kubernetes distribution
- Longhorn distributed storage
- Cloudnative PG operator for PostgreSQL
- Base system dependencies
Important: After the installer completes, verify that all system pods in both namespaces are in the Running state before proceeding:
# Check kube-system namespace (Kubernetes core components)
kubectl get pods -n kube-system
# Check longhorn-system namespace (distributed storage)
kubectl get pods -n longhorn-system
All pods should show Running status. If any pods are still Pending or ContainerCreating, wait until they are ready. Proceeding with incomplete system pods can cause subsequent steps to fail in unpredictable ways.
This verification confirms:
- K3s cluster is operational
- Longhorn distributed storage is running
- Cloudnative PG operator is deployed
- All core components are healthy before continuing
Step 3: Retrieve the Node Token
Retrieve the node token for joining additional nodes:
cat /var/lib/rancher/k3s/server/node-token
Save this token for use on additional nodes. Also note the IP address of the primary server node.
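Since the same URL and token are reused on every joining node, it can help to build the command once. A sketch (join_cmd is a hypothetical helper; the 6443 port and the join-server/join-agent script names come from the steps below):

```shell
# Compose the join command from the primary IP, the node token, and the role.
join_cmd() {  # usage: join_cmd <primary-ip> <token> [server|agent]
  echo "/mnt/esb3027/join-${3:-server} https://$1:6443 $2"
}

# On the primary server:
#   TOKEN=$(cat /var/lib/rancher/k3s/server/node-token)
# On each joining node, run the command printed by e.g.:
#   join_cmd 10.0.0.10 "$TOKEN" server
```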
Step 4: Server vs Agent Node Roles
Before joining additional nodes, determine which nodes will serve as Server nodes vs Agent nodes:
| Role | Control Plane | Workloads | HA Quorum | Use Case |
|---|---|---|---|---|
| Server Node (Combined) | Yes (etcd, API server) | Yes | Participates | Default production role; minimum 3 nodes |
| Server Node (Control Plane Only) | Yes (etcd, API server) | No | Participates | Dedicated control plane; requires separate Agent nodes |
| Agent Node | No | Yes | No | Additional workload capacity only |
Guidance:
- Combined role (default): Server nodes run both control plane and workloads; minimum 3 nodes required for HA
- Control Plane Only: Dedicate nodes to control plane functions; requires at least 3 Server nodes plus 3+ Agent nodes for workloads
- Agent nodes are required if using Control Plane Only servers; optional if using Combined role servers
- For most deployments, 3 Server nodes (Combined role) with no Agent nodes is sufficient
- Add Agent nodes to scale workload capacity without affecting control plane quorum
Proceed to Step 5 to join Server nodes. Agent nodes are joined after all Server nodes are ready.
Step 5: Join Additional Server Nodes
On each additional server node:
Mount the ISO:
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Join the cluster:
If your node has multiple network interfaces, set the INSTALL_K3S_EXEC environment variable with the server mode before running the join script (see Multiple Network Interfaces):
export INSTALL_K3S_EXEC="server --node-ip <node-ip> --flannel-iface=<interface>"
Run the join script:
/mnt/esb3027/join-server https://<primary-server-ip>:6443 <node-token>
Replace <primary-server-ip> with the IP address of the primary server and <node-token> with the token retrieved in Step 3.
Verify the node joined successfully:
kubectl get nodes
Repeat for each server node. A minimum of 3 server nodes is required for high availability.
Step 5b: Taint Control Plane Only Nodes (Optional)
If you are using dedicated Control Plane Only nodes (not Combined role), apply taints to prevent workload scheduling:
kubectl taint nodes <node-name> CriticalAddonsOnly=true:NoSchedule
Apply this taint to each Control Plane Only node. Verify taints are applied:
kubectl describe nodes | grep -A 5 "Taints"
Note: This step is only required if you want dedicated control plane nodes. For Combined role deployments, do not apply taints.
Important: Control Plane Only Server nodes can be deployed with lower hardware specifications (2 cores, 4 GiB, 64 GiB) than the installer’s default minimum requirements. If your Control Plane Only Server nodes do not meet the Single-Node Lab configuration minimums (8 cores, 16 GiB, 128 GiB), you must set the SKIP_REQUIREMENTS_CHECK environment variable before running the installer or join command:
# For the primary server node
export SKIP_REQUIREMENTS_CHECK=1
/mnt/esb3027/install
# For additional Control Plane Only Server nodes
export SKIP_REQUIREMENTS_CHECK=1
/mnt/esb3027/join-server https://<primary-server-ip>:6443 <node-token>
Note: This applies to Server nodes only. Agent nodes have separate minimum requirements.
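Before deciding whether SKIP_REQUIREMENTS_CHECK is needed, the host can be compared against the Control Plane Only minimums quoted above. A minimal sketch for Linux hosts:

```shell
# Report CPU core count and total memory (GiB, rounded down) for comparison
# with the 2-core / 4 GiB Control Plane Only minimums.
cores=$(nproc)
mem_gib=$(( $(awk '/^MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
echo "cores=${cores} mem_gib=${mem_gib}"
```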
Step 6: Join Agent Nodes (Optional)
On each agent node:
Mount the ISO:
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Join the cluster as an agent:
If your node has multiple network interfaces, set the INSTALL_K3S_EXEC environment variable with the agent mode before running the join script (see Multiple Network Interfaces):
export INSTALL_K3S_EXEC="agent --node-ip <node-ip> --flannel-iface=<interface>"
Run the join script:
/mnt/esb3027/join-agent https://<primary-server-ip>:6443 <node-token>
Verify the node joined successfully:
kubectl get nodes
Agent nodes provide additional workload capacity but do not participate in the control plane quorum.
Step 7: Verify Cluster Status
After all nodes are joined, verify the cluster is operational:
1. Verify all nodes are ready:
kubectl get nodes
Expected output:
NAME STATUS ROLES AGE VERSION
k3s-server-0 Ready control-plane,etcd,master 5m v1.33.4+k3s1
k3s-server-1 Ready control-plane,etcd,master 3m v1.33.4+k3s1
k3s-server-2 Ready control-plane,etcd,master 2m v1.33.4+k3s1
k3s-agent-1 Ready <none> 1m v1.33.4+k3s1
k3s-agent-2 Ready <none> 1m v1.33.4+k3s1
2. Verify system pods in both namespaces are running:
# Check kube-system namespace (Kubernetes core components)
kubectl get pods -n kube-system
# Check longhorn-system namespace (distributed storage)
kubectl get pods -n longhorn-system
All pods should show Running status. If any pods are still Pending or ContainerCreating, wait until they are ready.
This verification confirms:
- K3s cluster is operational across all nodes
- Longhorn distributed storage is running
- Cloudnative PG operator is deployed
- All core components are healthy before proceeding to application deployment
Step 9: Air-Gapped Deployments (If Applicable)
If deploying in an air-gapped environment, on each node:
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
/mnt/esb3027-extras/load-images
Step 10: Create Configuration File
Create a Helm values file for your deployment. At minimum, configure the manager hostnames, Zitadel external domain, and at least one router:
# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.example.com
      - host: manager-backup.example.com
  routers:
    - name: director-1
      address: 192.0.2.1
    - name: director-2
      address: 192.0.2.2

zitadel:
  zitadel:
    ExternalDomain: manager.example.com
Tip: A complete default values.yaml file is available on the installation ISO at /mnt/esb3027/values.yaml. Copy this file to use as a starting point for your configuration.
Important: The zitadel.zitadel.ExternalDomain must match the first entry in global.hosts.manager or authentication will fail due to CORS policy violations.
Important: For multi-node deployments, Kafka replication is enabled by default with 3 replicas. Do not modify the kafka.replicaCount or kafka.controller.replicaCount settings unless you understand the implications for data durability.
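The ExternalDomain constraint above can be checked mechanically before deploying. This sketch assumes the minimal values.yaml layout shown in this step; check_domains is a hypothetical helper using a simple grep-style parse, not a full YAML parser:

```shell
# Compare the first global.hosts.manager entry with zitadel ExternalDomain.
check_domains() {  # usage: check_domains <values-file>
  first_host=$(awk '/- host:/ {print $3; exit}' "$1")
  ext_domain=$(awk '/ExternalDomain:/ {print $2; exit}' "$1")
  if [ "$first_host" = "$ext_domain" ]; then
    echo "OK: $first_host"
  else
    echo "MISMATCH: manager=$first_host zitadel=$ext_domain"
  fi
}

# check_domains ~/values.yaml
```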
Step 11: Load MaxMind GeoIP Databases (Optional)
If you plan to use GeoIP-based routing or validation features, load the MaxMind GeoIP databases. The following databases are used by the manager:
- GeoIP2-City.mmdb - The City Database
- GeoLite2-ASN.mmdb - The ASN Database
- GeoIP2-Anonymous-IP.mmdb - The VPN and Anonymous IP Database
A helper utility is provided on the ISO to create the Kubernetes volume:
/mnt/esb3027/generate-maxmind-volume
The utility will prompt for the locations of the three database files and the name of the volume. After running this command, reference the volume in your configuration file:
manager:
  maxmindDbVolume: maxmind-db-volume
Replace maxmind-db-volume with the volume name you specified when running the utility.
Tip: When naming the volume, include a revision number or date (e.g., maxmind-db-volume-2026-04 or maxmind-db-volume-v2). This simplifies future updates: create a new volume with an updated name, update the values.yaml to reference the new volume, and delete the old volume after verification.
Step 12: Configure TLS Certificates
For production deployments, configure a valid TLS certificate from a trusted Certificate Authority (CA). A self-signed certificate is deployed by default if no certificate is provided.
Method 1: Create TLS Secret Manually
Create a Kubernetes TLS secret with your certificate and key:
kubectl create secret tls acd-manager-tls --cert=tls.crt --key=tls.key
Method 2: Helm-Managed Secret
Add the certificate directly to your values.yaml:
ingress:
  secrets:
    acd-manager-tls: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
  tls:
    - hosts:
        - manager.example.com
      secretName: acd-manager-tls
Configuring All Ingress Controllers
All ingress controllers must be configured with the same certificate secret and hostname:
ingress:
  hostname: manager.example.com
  tls: true
  secretName: acd-manager-tls
zitadel:
  ingress:
    tls:
      - hosts:
          - manager.example.com
        secretName: acd-manager-tls
confd:
  ingress:
    hostname: manager.example.com
    tls: true
    secretName: acd-manager-tls
mib-frontend:
  ingress:
    hostname: manager.example.com
    tls: true
    secretName: acd-manager-tls
Important: The hostname must match the first entry in global.hosts.manager for Zitadel CORS compatibility. The secret name has a maximum length of 53 characters.
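The 53-character limit on the secret name can likewise be guarded with a quick pre-deployment check; a minimal sketch:

```shell
# Fail fast if the TLS secret name exceeds the 53-character limit noted above.
SECRET_NAME=acd-manager-tls
if [ ${#SECRET_NAME} -le 53 ]; then
  echo "secret name ok (${#SECRET_NAME} chars)"
else
  echo "secret name too long (${#SECRET_NAME} chars)"
fi
```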
Step 13: Deploy the Manager Helm Chart
Deploy the CDN Manager application:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
Note: By default, helm install runs silently until completion. To see real-time output during deployment, add the --debug flag:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --debug
Tip: For better organization, split your configuration into multiple files and specify them with repeated --values flags:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager \
--values ~/values-base.yaml \
--values ~/values-tls.yaml \
--values ~/values-autoscaling.yaml
Later files override earlier files, allowing you to maintain a base configuration with environment-specific overrides.
Monitor the deployment progress:
kubectl get pods --watch
Wait for all pods to show Running status before proceeding.
Note: The default Helm timeout is 5 minutes. If the installation fails due to a rollout timeout, retry with a larger timeout value:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
If a previous installation attempt failed and you receive an error that the release name is already in use, uninstall the previous release before retrying:
helm uninstall acd-manager
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
Step 14: Verify Deployment
Verify all application pods are running:
kubectl get pods
Note: During the initial deployment, several pods may enter a CrashLoopBackOff state depending on the timing of other containers starting up. This is expected behavior, as some services wait for dependencies (such as databases or Kafka) to become available. The deployment should stabilize automatically after a few minutes.
Verify pods are distributed across nodes:
kubectl get pods -o wide
Expected output for a 3-node cluster (pod names will vary):
NAME READY STATUS RESTARTS AGE
acd-cluster-postgresql-1 1/1 Running 0 11m
acd-cluster-postgresql-2 1/1 Running 0 11m
acd-cluster-postgresql-3 1/1 Running 0 10m
acd-manager-5b98d569d9-2pbph 1/1 Running 0 3m
acd-manager-5b98d569d9-m54f9 1/1 Running 0 3m
acd-manager-5b98d569d9-pq26f 1/1 Running 0 3m
acd-manager-confd-6fb78548c4-xnrh4 1/1 Running 0 3m
acd-manager-gateway-8bc8446fc-chs26 1/1 Running 0 3m
acd-manager-gateway-8bc8446fc-wzrml 1/1 Running 0 3m
acd-manager-kafka-controller-0 2/2 Running 0 3m
acd-manager-kafka-controller-1 2/2 Running 0 3m
acd-manager-kafka-controller-2 2/2 Running 0 3m
acd-manager-metrics-aggregator-76d96c4964-lwdcj 1/1 Running 2 3m
acd-manager-mib-frontend-7bdb69684b-6qxn8 1/1 Running 0 3m
acd-manager-mib-frontend-7bdb69684b-pkjrw 1/1 Running 0 3m
acd-manager-redis-master-0 2/2 Running 0 3m
acd-manager-redis-replicas-0 2/2 Running 0 3m
acd-manager-selection-input-5fb694b857-qxt67 1/1 Running 2 3m
acd-manager-zitadel-8448b4c4fc-2pkd8 1/1 Running 0 3m
acd-manager-zitadel-8448b4c4fc-vchp9 1/1 Running 0 3m
acd-manager-zitadel-init-hh6j7 0/1 Completed 0 4m
acd-manager-zitadel-setup-nwp8k 0/2 Completed 0 4m
alertmanager-0 1/1 Running 0 3m
grafana-6d948cfdc6-77ggk 1/1 Running 0 3m
telegraf-54779f5f46-2jfj5 1/1 Running 0 3m
victoria-metrics-agent-dc87df588-tn8wv 1/1 Running 0 3m
victoria-metrics-alert-757c44c58f-kk9lp 1/1 Running 0 3m
victoria-metrics-longterm-server-0 1/1 Running 0 3m
victoria-metrics-server-0 1/1 Running 0 3m
Note: Init pods (such as zitadel-init and zitadel-setup) will show Completed status after successful initialization. This is expected behavior. Some pods may show restart counts as they wait for dependencies to become available.
Step 15: Configure DNS
Add DNS records for the manager hostname. For high availability, configure multiple A records pointing to different server nodes:
manager.example.com. IN A <server-1-ip>
manager.example.com. IN A <server-2-ip>
manager.example.com. IN A <server-3-ip>
Alternatively, configure a load balancer to distribute traffic across nodes.
Post-Installation
After installation completes, proceed to the Next Steps guide for:
- Initial user configuration
- Accessing the web interfaces
- Configuring authentication
- Setting up monitoring
Accessing the System
Refer to the Accessing the System section in the Getting Started guide for service URLs and default credentials.
Note: A self-signed SSL certificate is deployed by default. For production deployments, configure a valid SSL certificate before exposing the system to users.
High Availability Considerations
Pod Distribution
The Helm chart configures pod anti-affinity rules to ensure:
- Kafka controllers are scheduled on separate nodes
- PostgreSQL cluster members are distributed across nodes
- Application pods are spread across available nodes
Data Replication and Failure Tolerance
For detailed information on data replication strategies and failure scenario tolerance, refer to the Architecture Guide and System Requirements Guide.
Troubleshooting
If pods fail to start or nodes fail to join:
- Check node status: kubectl get nodes
- Describe problematic pods: kubectl describe pod <pod-name>
- Review logs: kubectl logs <pod-name>
- Check cluster events: kubectl get events --sort-by='.lastTimestamp'
See the Troubleshooting Guide for additional assistance.
Next Steps
After successful installation:
- Next Steps Guide - Post-installation configuration
- Configuration Guide - System configuration
- Operations Guide - Day-to-day operations
4 - Air-Gapped Deployment Guide
Installation procedures for air-gapped environments
Overview
This guide describes the installation of the AgileTV CDN Manager in air-gapped environments (no internet access). Air-gapped deployments require additional preparation compared to connected deployments.
Key differences from connected deployments:
- Both Installation ISO and Extras ISO are required
- OS installation ISO must be mounted on all nodes
- Container images must be loaded from the Extras ISO on each node
- Additional firewall considerations for OS package repositories
Prerequisites
Required ISOs
Before beginning installation, obtain the following:
| ISO | Filename | Purpose |
|---|---|---|
| Installation ISO | esb3027-acd-manager-X.Y.Z.iso | Kubernetes cluster and Manager application |
| Extras ISO | esb3027-acd-manager-extras-X.Y.Z.iso | Container images for air-gapped environments |
| OS Installation ISO | RHEL 9 or compatible clone | Operating system packages (required on all nodes) |
Single-Node vs Multi-Node
Air-gapped procedures apply to both single-node and multi-node deployments.
Network Configuration
Air-gapped environments may have internal network mirrors for OS packages. If no internal mirror exists, the OS installation ISO must be mounted on each node to provide packages during installation.
Air-Gapped Installation Steps
Step 1: Prepare All Nodes
On each node (primary server, additional servers, and agents):
Mount the OS installation ISO:
mkdir -p /mnt/os
mount -o loop,ro /path/to/rhel-9.iso /mnt/os
Configure local repository (if no internal mirror):
cat > /etc/yum.repos.d/local.repo <<EOF
[local]
name=Local OS Repository
baseurl=file:///mnt/os/BaseOS
enabled=1
gpgcheck=0
EOF
Verify the repository is accessible:
dnf repolist
Step 2: Mount Installation ISOs
On the primary server node first, then each additional node:
# Mount Installation ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
# Mount Extras ISO
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
Step 3: Install Kubernetes Cluster
Primary Server Node
Run the installer:
/mnt/esb3027/install
Wait for the installer to complete and verify system pods are running:
kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods -n longhorn-system
Additional Server Nodes (Multi-Node Only)
On each additional server node:
/mnt/esb3027/join-server https://<primary-server-ip>:6443 <node-token>
Agent Nodes (Optional)
On each agent node:
/mnt/esb3027/join-agent https://<primary-server-ip>:6443 <node-token>
Step 4: Load Container Images
On each node in the cluster:
/mnt/esb3027-extras/load-images
This script loads all container images from the Extras ISO into the local container runtime.
Important: This step must be performed on every node (primary server, additional servers, and agents) before deploying the Manager application.
Step 5: Create Configuration File
Create a Helm values file for your deployment. At minimum, configure the manager hostname and router addresses:
# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.local
    routers:
      - name: default
        address: 127.0.0.1

# Single-node: Disable Kafka replication
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
For multi-node deployments, see the Multi-Node Installation Guide for complete configuration requirements.
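For contrast, a hedged sketch of what a multi-node values file might change (replica counts assumed from the production expectation of three Kafka controller pods; the Multi-Node Installation Guide remains the authoritative reference):

```yaml
# ~/values.yaml (multi-node sketch; values assumed, not authoritative)
kafka:
  replicaCount: 3
  controller:
    replicaCount: 3
```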
Step 6: Deploy the Manager
Deploy the CDN Manager Helm chart:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
Monitor the deployment progress:
kubectl get pods --watch
Wait for all pods to show Running status before proceeding.
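If you prefer to script the wait, the Running/Completed check can be expressed as a small filter over the kubectl table output (a convenience sketch, not part of the product tooling):

```shell
# Print pods whose STATUS (column 3 of `kubectl get pods` output) is
# neither Running nor Completed; an empty result means the rollout settled.
not_ready() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { print $1 }'
}

# Usage: kubectl get pods | not_ready
```

An empty output from the filter indicates all pods have stabilized.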
Step 7: Verify Deployment
Verify all application pods are running:
kubectl get pods
All pods should show Running status (except init pods, which show Completed).
Post-Installation
After installation completes:
- Access the system via HTTPS at https://<manager-host>
- Configure authentication via Zitadel at https://<manager-host>/ui/console
- Set up monitoring via Grafana at https://<manager-host>/grafana
See the Next Steps Guide for detailed post-installation configuration.
Updating MaxMind GeoIP Databases
If using GeoIP-based routing, load the MaxMind databases:
/mnt/esb3027/generate-maxmind-volume
The utility will prompt for the database file locations and volume name. Reference the volume in your values.yaml:
manager:
  maxmindDbVolume: maxmind-geoip-2026-04
See the Operations Guide for database update procedures.
Troubleshooting
Image Pull Errors
If pods fail with image pull errors:
- Verify the load-images script completed successfully on all nodes
- Check the container runtime image list:
crictl images | grep <image-name>
- Ensure the image tags in the Helm chart match the tags on the Extras ISO
OS Package Errors
If the installer reports missing OS packages:
- Verify OS ISO is mounted on the affected node
- Check repository configuration:
dnf repolist
dnf info <package-name>
- Ensure the ISO matches the installed OS version
Longhorn Volume Issues
If Longhorn volumes fail to mount:
- Verify the load-images script has completed on all nodes
- Check the Longhorn system pods:
kubectl get pods -n longhorn-system
- Review Longhorn UI via port-forward:
kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
Next Steps
After successful installation:
- Next Steps Guide - Post-installation configuration
- Operations Guide - Day-to-day operational procedures
- Troubleshooting Guide - Common issues and resolution
5 - Upgrade Guide
Upgrading the CDN Manager to a newer version
Overview
This guide describes the procedure for upgrading the AgileTV CDN Manager (ESB3027) to a newer version. The upgrade process involves updating the Kubernetes cluster components and redeploying the Helm chart with the new version.
Prerequisites
Backup Requirements
Before beginning any upgrade, ensure you have:
- PostgreSQL Backup: Verify recent backups are available via the CloudNativePG operator
- Configuration Backup: Save your current values.yaml file(s)
- TLS Certificates: Ensure certificate files are backed up
- MaxMind Volumes: Note the current volume names if using GeoIP databases
Version Compatibility
Review the Release Notes for the target version to check for:
- Breaking changes requiring manual intervention
- Required intermediate upgrade steps
- New configuration options that should be set
Cluster Health
Verify the cluster is healthy before upgrading:
kubectl get nodes
kubectl get pods
kubectl get pvc
All nodes should show Ready status and all pods should be Running (or Completed for job pods).
Upgrade Methods
There are three upgrade methods available. Choose the one that best fits your situation:
| Method | Downtime | Use Case |
|---|
| Rolling Upgrade | Minimal | Patch releases; minor version upgrades; configuration updates |
| Clean Upgrade | Brief | Major version upgrades; component changes; troubleshooting |
| Full Reinstall | Extended | Cluster rebuilds; troubleshooting persistent issues; ensuring clean state |
Method Selection Guidance:
Rolling Upgrade (Method 1) is the default choice for most upgrades. Use this for patch releases (e.g., 1.6.0 → 1.6.1) and minor version upgrades (e.g., 1.4.0 → 1.6.0) where no breaking changes are documented. This method preserves all existing resources and performs an in-place update. Note: This method supports Helm’s built-in rollback (helm rollback) if the upgrade fails, allowing quick recovery to the previous state.
Clean Upgrade (Method 2) is recommended for major version upgrades (e.g., 1.x → 2.x) or when the release notes indicate significant component changes. This method ensures all resources are recreated with the new version, avoiding potential issues with stale configurations. Also use this method when troubleshooting upgrade failures from Method 1.
Full Reinstall (Method 3) should only be used when a completely clean cluster state is required. This includes troubleshooting persistent cluster-level issues, recovering from failed upgrades that cannot be rolled back, or when migrating between significantly different deployment configurations. This method requires verified backups and should be planned for extended downtime.
Upgrade Steps
Method 1: Rolling Upgrade (Recommended)
This method performs an in-place rolling upgrade with minimal downtime.
Step 1: Obtain the New Installation ISO
Unmount the old ISO (if mounted) and mount the new installation ISO:
umount /mnt/esb3027 2>/dev/null || true
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Replace X.Y.Z with the target version number.
Step 2: Review and Update Configuration
Compare the default values.yaml from the new ISO with your current configuration:
diff /mnt/esb3027/values.yaml ~/values.yaml
Update your configuration file to include any new required settings. Common updates include:
# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.example.com
    routers:
      - name: director-1
        address: 192.0.2.1

zitadel:
  zitadel:
    ExternalDomain: manager.example.com

# Add any new required settings for the target version
Important: Do not modify settings unrelated to the upgrade unless specifically documented in the release notes.
Step 3: Update MaxMind GeoIP Volumes (If Applicable)
If you use MaxMind GeoIP databases, use the utility from the new ISO to create an updated volume:
/mnt/esb3027/generate-maxmind-volume
Update your values.yaml to reference the new volume name:
manager:
  maxmindDbVolume: maxmind-geoip-2026-04
Tip: Using dated or versioned volume names (e.g., maxmind-geoip-2026-04) allows you to create new volumes during upgrades and delete old ones after verification.
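Following that tip, a dated volume name can be generated at upgrade time (naming scheme assumed from the example above):

```shell
# Build a dated MaxMind volume name such as maxmind-geoip-2026-04.
volume="maxmind-geoip-$(date +%Y-%m)"
echo "$volume"
```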
Step 4: Update TLS Certificates (If Needed)
If your TLS certificates need renewal or the new version requires certificate updates, create or update the secret:
kubectl create secret tls acd-manager-tls --cert=tls.crt --key=tls.key --dry-run=client -o yaml | kubectl apply -f -
Step 5: Upgrade the Helm Release
Perform a Helm upgrade with the new chart:
helm upgrade acd-manager /mnt/esb3027/helm/charts/acd-manager \
--values ~/values.yaml
Note: The upgrade performs a rolling update of each deployment in the chart. Deployments are upgraded one at a time, with pods being terminated and recreated sequentially. StatefulSets (PostgreSQL, Kafka, Redis) roll out one pod at a time to maintain data availability.
Monitor the upgrade progress:
kubectl get pods --watch
Wait for all pods to stabilize and show Running status before considering the upgrade complete. Some pods may temporarily enter CrashLoopBackOff during the transition as they wait for dependencies to become available.
Step 6: Verify the Upgrade
Check the deployed version:
helm list
kubectl get deployments -o wide
Verify application functionality:
- Access the MIB Frontend and confirm it loads
- Test API connectivity
- Verify Grafana dashboards are accessible
- Check that Zitadel authentication is working
Step 7: Clean Up
After confirming the upgrade is successful:
Unmount the old ISO (if still mounted):
Delete old MaxMind volumes (if replaced):
kubectl get pvc
kubectl delete pvc <old-volume-name>
Remove old configuration files if no longer needed.
Method 2: Clean Upgrade (Helm Uninstall/Install)
This method removes the existing Helm release before installing the new version. This is useful for major version upgrades or when troubleshooting upgrade issues.
Warning: This method causes brief downtime as all resources are deleted before reinstallation.
Step 1: Obtain the New Installation ISO
Mount the new installation ISO on the primary server node:
umount /mnt/esb3027 2>/dev/null || true
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
Step 2: Backup Configuration
Save your current Helm values:
helm get values acd-manager -o yaml > ~/values-backup.yaml
Step 3: Uninstall the Existing Release
Remove the existing Helm release:
helm uninstall acd-manager
Wait for pods to terminate:
kubectl get pods --watch
Note: Helm uninstall does not remove PersistentVolumes (PVs) or PersistentVolumeClaims (PVCs). All data stored in PostgreSQL, Kafka, Redis, and Longhorn volumes is preserved during the uninstall process. When the new version is installed, it will reattach to the existing PVCs and restore data automatically.
Step 4: Review and Update Configuration
Compare the default values.yaml from the new ISO with your configuration:
diff /mnt/esb3027/values.yaml ~/values.yaml
Update your configuration file as needed.
Step 5: Install the New Release
Install the new version:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager \
--values ~/values.yaml
Monitor the deployment:
kubectl get pods --watch
Wait for all pods to stabilize before proceeding.
Step 6: Verify the Upgrade
Verify the upgrade as described in Method 1, Step 6.
Method 3: Full Reinstall (Cluster Rebuild)
This method completely removes Kubernetes and reinstalls from scratch. Use only for cluster rebuilds or when other upgrade methods fail.
Warning: This method causes extended downtime and permanent data loss. The K3s uninstall process destroys all Longhorn PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). All data stored in PostgreSQL, Kafka, Redis, and application volumes will be permanently lost. Verified backups are required before proceeding.
Warning: This method should only be used when necessary. Ensure you have verified backups before proceeding.
Step 1: Stop Kubernetes Services
On all nodes, stop the K3s service (the systemd unit is k3s on server nodes and k3s-agent on agent nodes):
# Server nodes
systemctl stop k3s
# Agent nodes
systemctl stop k3s-agent
Step 2: Uninstall K3s (Server Nodes Only)
On the primary server node first, then each additional server node:
/usr/local/bin/k3s-uninstall.sh
Step 3: Clean Up Residual State (All Nodes)
On all nodes, remove residual state:
/usr/local/bin/k3s-killall.sh
rm -rf /var/lib/rancher/k3s/*
Warning: This removes all cluster data including Longhorn PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). All data stored in PostgreSQL, Kafka, Redis, and application volumes will be permanently lost. Ensure verified backups are available before proceeding.
Step 4: Reinstall K3s Cluster
Follow the installation procedure from the beginning:
- Primary Server: Run /mnt/esb3027/install
- Additional Servers: Join with /mnt/esb3027/join-server
- Agent Nodes: Join with /mnt/esb3027/join-agent
Note: The K3s node token is regenerated during reinstallation. Any previously saved tokens from the old deployment will no longer be valid. Retrieve the new token from /var/lib/rancher/k3s/server/node-token on the primary server after installation.
Step 5: Reinstall MaxMind Volumes (If Applicable)
Recreate MaxMind GeoIP volumes:
/mnt/esb3027/generate-maxmind-volume
Step 6: Deploy the Helm Chart
Deploy the new version:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager \
--values ~/values.yaml
Step 7: Verify the Installation
Verify all pods are running:
kubectl get pods
Verify application functionality as described in Method 1, Step 6.
Rollback Procedure
Rollback procedures vary by upgrade method:
Method 1 (Rolling Upgrade)
Use Helm’s built-in rollback command:
helm rollback acd-manager
This reverts to the previous Helm release revision automatically.
Or manually redeploy the previous version:
helm upgrade acd-manager /mnt/esb3027-old/helm/charts/acd-manager \
--values ~/values.yaml
Note: If you use multiple --values files for organization, ensure they are specified in the same order as the original installation.
Method 2 (Clean Upgrade)
Reinstall the previous version:
helm uninstall acd-manager
helm install acd-manager /mnt/esb3027-old/helm/charts/acd-manager \
--values ~/values-backup.yaml
Method 3 (Full Reinstall)
Rollback requires repeating the full cluster reinstall procedure using the old installation ISO. Follow Method 3 steps with the previous version’s ISO. Ensure verified backups are available before attempting.
Troubleshooting
Pods Fail to Start
Check pod status and events:
kubectl describe pod <pod-name>
kubectl get events --sort-by='.lastTimestamp'
Review pod logs:
kubectl logs <pod-name>
kubectl logs <pod-name> -p # Previous instance logs
Database Migration Issues
If PostgreSQL migrations fail:
Check the CloudNativePG cluster status:
kubectl get clusters
kubectl describe cluster <cluster-name>
Review migration job logs:
kubectl get jobs
kubectl logs job/<migration-job-name>
Helm Upgrade Fails
If helm upgrade fails:
Check Helm release status:
helm status acd-manager
helm history acd-manager
Review the error message for specific failures
Attempt rollback if necessary
Post-Upgrade
After a successful upgrade:
- Review the Release Notes for any post-upgrade tasks
- Update monitoring dashboards if new metrics are available
- Test all critical functionality
- Document the upgrade in your change management system
Next Steps
After completing the upgrade:
- Next Steps Guide - Review post-installation tasks
- Operations Guide - Day-to-day operational procedures
- Release Notes - Review new features and changes
6 - Next Steps
Post-installation configuration tasks
Overview
After completing the installation of the AgileTV CDN Manager (ESB3027), several post-installation configuration tasks must be performed before the system is ready for production use. This guide walks you through the essential next steps.
Prerequisites
Before proceeding, ensure:
- The CDN Manager Helm chart is successfully deployed
- All pods are in Running status
- You have network access to the cluster hostname or IP
- You have the default credentials available
Step 1: Access Zitadel Console
The first step is to configure user authentication through Zitadel Identity and Access Management (IAM).
Navigate to the Zitadel Console:
https://<manager-host>/ui/console
Replace <manager-host> with your configured hostname (e.g., manager.local or manager.example.com).
Important: The <manager-host> must match the first entry in global.hosts.manager from your Helm values exactly. Zitadel uses name-based virtual hosting and CORS validation. If the hostname does not match, authentication will fail.
Log in with the default administrator credentials (also listed in the Glossary):
- Username: admin@agiletv.dev
- Password: Password1!
Important: If prompted to configure Multi-Factor Authentication (MFA), you must skip this step for now. MFA is not currently supported. Attempting to configure MFA may lock you out of the administrator account.
Security Recommendation: After logging in, create a new administrator account with proper roles. Once verified, disable or delete the default admin@agiletv.dev account. For details on required roles and administrator permissions, see Zitadel’s Administrator Documentation.
Step 2: Configure SMTP Settings
Zitadel requires an SMTP server to send email notifications and perform email validations.
In the Zitadel Console, navigate to Settings > Default Settings
Configure the SMTP settings:
- SMTP Host: Your mail server hostname
- SMTP Port: Typically 587 (TLS) or 465 (SSL)
- SMTP Username: Mail account username
- SMTP Password: Mail account password
- Sender Address: Email address for outgoing mail (e.g., noreply@example.com)
Save the configuration
Note: Without SMTP configuration, email-based user validation and password recovery features will not function.
Step 3: Create Additional User Accounts
Create user accounts for operators and administrators:
Tip: For detailed guidance on managing users, roles, and permissions in the Zitadel Console, see Zitadel’s User Management Documentation.
In the Zitadel Console, navigate to Users > Add User
Fill in the user details:
- Username: Unique username
- First Name: User’s first name
- Last Name: User’s last name
- Email: User’s email address (this is their login username)
Known Issue: Due to a limitation in this release of Zitadel, the username must match the local part (the portion before the @) of the email address. For example, if the email is foo@example.com, the username must be foo.
If these do not match, Zitadel may allow login with the mismatched local part while blocking the full email address. For instance, if username is foo but email is foo.bar@example.com, login with foo@example.com may succeed while foo.bar@example.com is blocked.
Workaround: Always ensure the username matches the email local part exactly.
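The workaround can be checked mechanically before creating the account; a small sketch (the example values are hypothetical):

```shell
# Extract the local part of the email (everything before the first "@")
# and compare it against the intended Zitadel username.
username="foo"
email="foo@example.com"
local_part="${email%%@*}"

if [ "$username" = "$local_part" ]; then
  echo "OK: username matches the email local part"
else
  echo "MISMATCH: '$username' vs '$local_part'" >&2
fi
```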
Important: The following options must be configured:
- Email Verified: Check this box to skip email verification
- Set Initial Password: Enter a temporary password for the user
Note: If you configured SMTP settings in Step 2, the user will receive an email asking to verify their address and set their initial password. If SMTP is not configured, you must check the “Email Verified” box and set an initial password manually, otherwise the user account will not be enabled.
Click Create User
Provide the user with:
- Their username
- The temporary password (if set manually)
- The Zitadel Console URL
Instruct the user to change their password on first login
Step 4: Assign User Roles
Zitadel manages roles and permissions for accessing the CDN Manager:
In the Zitadel Console, navigate to Roles
Assign appropriate roles to users:
- Admin: Full administrative access
- Operator: Operational access without administrative functions
- Viewer: Read-only access
To assign a role:
- Select the user
- Click Add Role
- Select the appropriate role
- Save the assignment
Step 5: Access the MIB Frontend
The MIB Frontend is the web-based configuration GUI for CDN operators:
Navigate to the MIB Frontend:
https://<manager-host>/gui
Log in using your Zitadel credentials
Verify you can access the configuration interface
Step 6: Verify API Access
Test API connectivity to ensure the system is functioning:
curl -k https://<manager-host>/api/v1/health/ready
The endpoint should return an HTTP 200 response when the service is ready.
See the API Guide for detailed API documentation.
Step 7: Configure TLS Certificates
For production deployments, a valid TLS certificate from a trusted Certificate Authority should be configured. If you did not configure TLS certificates during installation, refer to Step 12: Configure TLS Certificates in the Installation Guide.
Step 8: Set Up Monitoring and Alerting
Configure monitoring dashboards and alerting:
Access Grafana:
- Navigate to https://<manager-host>/grafana
- Log in with the default credentials (also listed in the Glossary):
  - Username: admin
  - Password: edgeware
Review Pre-built Dashboards:
- System health dashboards are included by default
- CDN metrics dashboards show routing and usage statistics
Note: CDN Director instances automatically have DNS names configured for use in Grafana dashboards. The DNS name is derived from the name field in global.hosts.routers with .external appended. For example, a router named my-router-1 will have the DNS name my-router-1.external in Grafana configuration.
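The derivation is plain concatenation; for example, taking the router name from global.hosts.routers:

```shell
# Grafana DNS name = router name + ".external"
router_name="my-router-1"
echo "${router_name}.external"   # my-router-1.external
```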
Step 9: Verify Kafka and PostgreSQL Health
Ensure the data layer components are healthy:
Verify the following pods are running:
| Component | Pod Name Pattern | Expected Status |
|---|
| Kafka | acd-manager-kafka-controller-* | Running (3 pods for production) |
| PostgreSQL | acd-cluster-postgresql-0, acd-cluster-postgresql-1, acd-cluster-postgresql-2 | Running (3-node HA cluster) |
| Redis | acd-manager-redis-master-* | Running |
All pods should show Running status with no restarts.
Step 10: Configure Availability Zones (Optional)
For improved network performance, configure availability zones to enable Topology Aware Hints. This optimizes service-to-pod routing by keeping traffic within the same zone when possible.
See the Performance Tuning Guide for detailed instructions on:
- Labeling nodes with zone and region topology
- Verifying topology configuration
- Requirements for Topology Aware Hints to activate
- Integration with pod anti-affinity rules
Note: This step is optional. If zone labels are not configured, the system will fall back to random load-balancing.
Step 11: Review System Configuration
Verify the initial configuration:
Review Helm Values:
helm get values acd-manager -o yaml
Check Ingress Configuration:
kubectl get ingress
Verify Service Endpoints:
kubectl get svc
Step 12: Document Your Deployment
Maintain documentation for your deployment:
- Cluster hostname and IP addresses
- Configuration file locations
- User accounts and roles created
- TLS certificate expiration dates
- Backup procedures and schedules
- Monitoring and alerting contacts
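For the certificate expiration dates, openssl can read them directly from the certificate file used for the acd-manager-tls secret (file name assumed from the installation step):

```shell
# Print the notAfter (expiry) date of a TLS certificate file.
cert_expiry() {
  openssl x509 -in "$1" -noout -enddate
}

# Usage: cert_expiry tls.crt
```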
Next Steps
After completing post-installation configuration:
- Configuration Guide - Detailed system configuration options
- Operations Guide - Day-to-day operational procedures
- Metrics & Monitoring Guide - Comprehensive monitoring setup
- API Guide - REST API reference and integration examples
Troubleshooting
Cannot Access Zitadel Console
- Verify DNS resolution or hosts file configuration
- Check that the Traefik ingress is running: kubectl get pods -n kube-system | grep traefik
- Review the Traefik logs: kubectl logs -n kube-system -l app.kubernetes.io/name=traefik
Authentication Failures
- Verify Zitadel pods are healthy: kubectl get pods | grep zitadel
- Check Zitadel logs: kubectl logs <zitadel-pod-name>
- Ensure the external domain matches your hostname in the Zitadel configuration
MIB Frontend Not Loading
- Verify MIB Frontend pods are running: kubectl get pods | grep mib-frontend
- Check for connectivity issues to the Confd and API services
- Review browser console for JavaScript errors
API Returns 401 Unauthorized
- Verify you have a valid bearer token
- Check token expiration
- Ensure Zitadel authentication is functioning
For additional troubleshooting assistance, refer to the Troubleshooting Guide.