Single-Node Installation
Warning: Single-node deployments are for lab environments, acceptance testing, and demonstrations only. This configuration is not suitable for production workloads. For production deployments, see the Multi-Node Installation Guide, which requires a minimum of 3 server nodes for high availability.
Air-Gapped Deployment? This guide assumes internet connectivity. For air-gapped deployments, see the Air-Gapped Deployment Guide for additional requirements and procedures.
Overview
This guide describes the installation of the AgileTV CDN Manager on a single node. This configuration is intended for lab environments, acceptance testing, and demonstrations only. It is not suitable for production workloads.
Prerequisites
Hardware Requirements
Refer to the System Requirements Guide for hardware specifications. Single-node deployments require the “Single-Node (Lab)” configuration.
Operating System
Refer to the System Requirements Guide for supported operating systems.
Software Access
- Installation ISO: esb3027-acd-manager-X.Y.Z.iso
- Extras ISO (air-gapped only): esb3027-acd-manager-extras-X.Y.Z.iso
Network Configuration
Ensure that required firewall ports are configured before installation. See the Networking Guide for complete firewall configuration requirements.
SELinux
If SELinux is to be used, it must be set to “Enforcing” mode before running the installer script. The installer will configure appropriate SELinux policies automatically. SELinux cannot be enabled after installation.
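Because the mode cannot be enabled after installation, it is worth confirming it up front. A minimal check, assuming the standard SELinux userland (getenforce may be absent on hosts without SELinux tooling):

```shell
# Report the current SELinux mode; "unavailable" means the tooling is not installed
if command -v getenforce >/dev/null 2>&1; then
  mode="$(getenforce)"
else
  mode="unavailable"
fi
echo "SELinux mode: ${mode}"
```

If the reported mode is anything other than Enforcing, fix it before running the installer.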
Installation Steps
Step 1: Mount the ISO
Create a mount point and mount the installation ISO:
```shell
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
```
Replace X.Y.Z with the actual version number.
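The version placeholder recurs in several commands below; setting it once in a shell variable avoids typos. A sketch (1.2.3 is an example value, not a real release number):

```shell
# Set the release version once and reuse it in later commands
VERSION="1.2.3"   # example only; substitute your actual release
ISO="esb3027-acd-manager-${VERSION}.iso"
echo "${ISO}"
```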
Step 2: Install the Base Cluster
Run the installer to set up the K3s Kubernetes cluster:
```shell
/mnt/esb3027/install
```
This installs:
- K3s Kubernetes distribution
- Longhorn distributed storage
- Cloudnative PG operator for PostgreSQL
- Base system dependencies
The installer will configure the node as both a server and agent node.
Step 3: Verify Cluster Status
After the installer completes, verify that all components are operational before proceeding. This checkpoint confirms the base cluster is healthy before the application is deployed on top of it.
1. Verify the node is ready:
```shell
kubectl get nodes
```
Expected output:
```
NAME         STATUS   ROLES                       AGE   VERSION
k3s-server   Ready    control-plane,etcd,master   2m    v1.33.4+k3s1
```
2. Verify system pods in both namespaces are running:
```shell
# Check kube-system namespace (Kubernetes core components)
kubectl get pods -n kube-system

# Check longhorn-system namespace (distributed storage)
kubectl get pods -n longhorn-system
```
All pods should show Running status. If any pods are still Pending or ContainerCreating, wait until they are ready. Proceeding with incomplete system pods can cause subsequent steps to fail in unpredictable ways.
This verification confirms:
- K3s cluster is operational
- Longhorn distributed storage is running
- Cloudnative PG operator is deployed
- All core components are healthy before continuing
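Checking both namespaces by eye works, but the check can also be scripted. A sketch that counts pods not yet in Running status, shown here against captured sample output; in practice, pipe `kubectl get pods -n kube-system --no-headers` (and likewise for longhorn-system) into it:

```shell
# Count pods whose STATUS column is not "Running" (0 means the namespace is ready)
not_running() {
  awk '$3 != "Running" { n++ } END { print n+0 }'
}

# Sample output stands in for: kubectl get pods -n kube-system --no-headers
not_running <<'EOF'
coredns-6799fbcd5-abcde                   1/1  Running            0  2m
local-path-provisioner-84db5d44d9-fghij   1/1  Running            0  2m
metrics-server-67c658944b-klmno           0/1  ContainerCreating  0  1m
EOF
```

A nonzero count means at least one pod is still starting; wait and re-run before continuing.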
Step 4: Air-Gapped Deployments (If Applicable)
If deploying in an air-gapped environment, load container images from the extras ISO:
```shell
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
/mnt/esb3027-extras/load-images
```
Step 5: Create Configuration File
Create a Helm values file for your deployment. At minimum, configure the manager hostname and at least one router:
```yaml
# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.local
  routers:
    - name: default
      address: 127.0.0.1
```
The routers configuration specifies CDN Director instances. For lab deployments, a placeholder entry is sufficient. For production, specify the actual Director hostnames or IP addresses.
For single-node deployments, you must also disable Kafka replication:
```yaml
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
```
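The hosts/routers and Kafka fragments above belong in the same values file. One way to create it in a single step; the hostname and router address are the lab placeholders from above, and the exact key layout should be checked against the chart's documented values:

```shell
# Write a minimal single-node values file in one go
cat > ~/values.yaml <<'EOF'
global:
  hosts:
    manager:
      - host: manager.local
  routers:
    - name: default
      address: 127.0.0.1
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
EOF
echo "wrote ~/values.yaml"
```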
Step 6: Load MaxMind GeoIP Databases (Optional)
If you plan to use GeoIP-based routing or validation features, load the MaxMind GeoIP databases. The following databases are used by the manager:
- GeoIP2-City.mmdb: the City database
- GeoLite2-ASN.mmdb: the ASN database
- GeoIP2-Anonymous-IP.mmdb: the VPN and Anonymous IP database
A helper utility is provided on the ISO to create the Kubernetes volume:
```shell
/mnt/esb3027/generate-maxmind-volume
```
The utility will prompt for the locations of the three database files and the name of the volume. After running this command, reference the volume in your configuration file:
```yaml
manager:
  maxmindDbVolume: maxmind-db-volume
```
Replace maxmind-db-volume with the volume name you specified when running the utility.
Tip: When naming the volume, include a revision number or date (e.g., maxmind-db-volume-2026-04 or maxmind-db-volume-v2). This simplifies future updates: create a new volume with an updated name, update the values.yaml to reference the new volume, and delete the old volume after verification.
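A dated name of the kind the tip suggests can be generated rather than typed; a sketch, with the prefix as an example:

```shell
# Build a volume name carrying the current year-month, e.g. maxmind-db-volume-2026-04
VOLUME_NAME="maxmind-db-volume-$(date +%Y-%m)"
echo "${VOLUME_NAME}"
```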
Step 7: Deploy the Manager Helm Chart
Deploy the CDN Manager application:
```shell
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
```
Monitor the deployment progress:
```shell
kubectl get pods
```
Wait for all pods to show Running status before proceeding.
Note: The default Helm timeout is 5 minutes. If the installation fails due to a rollout timeout, retry with a larger timeout value:
```shell
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml --timeout 10m
```
If a previous installation attempt failed and you receive an error that the release name is already in use, uninstall the previous release before retrying:
```shell
helm uninstall acd-manager
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
```
Step 8: Verify Deployment
Verify all application pods are running:
```shell
kubectl get pods
```
Expected output for a single-node deployment (pod names will vary):
```
NAME                                              READY   STATUS      RESTARTS   AGE
acd-manager-5b98d569d9-abc12                      1/1     Running     0          3m
acd-manager-confd-6fb78548c4-xnrh4                1/1     Running     0          3m
acd-manager-gateway-8bc8446fc-chs26               1/1     Running     0          3m
acd-manager-kafka-controller-0                    2/2     Running     0          3m
acd-manager-metrics-aggregator-76d96c4964-lwdcj   1/1     Running     0          3m
acd-manager-mib-frontend-7bdb69684b-6qxn8         1/1     Running     0          3m
acd-manager-postgresql-0                          1/1     Running     0          3m
acd-manager-redis-master-0                        2/2     Running     0          3m
acd-manager-redis-replicas-0                      2/2     Running     0          3m
acd-manager-selection-input-5fb694b857-qxt67      1/1     Running     0          3m
acd-manager-zitadel-8448b4c4fc-2pkd8              1/1     Running     0          3m
acd-manager-zitadel-init-hh6j7                    0/1     Completed   0          4m
acd-manager-zitadel-setup-nwp8k                   0/2     Completed   0          4m
alertmanager-0                                    1/1     Running     0          3m
grafana-6d948cfdc6-77ggk                          1/1     Running     0          3m
victoria-metrics-agent-dc87df588-tn8wv            1/1     Running     0          3m
victoria-metrics-alert-757c44c58f-kk9lp           1/1     Running     0          3m
victoria-metrics-longterm-server-0                1/1     Running     0          3m
victoria-metrics-server-0                         1/1     Running     0          3m
```
Note: Init pods (such as zitadel-init and zitadel-setup) will show Completed status after successful initialization. This is expected behavior.
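A pod can report Running while some of its containers are still starting (e.g. READY 1/2). A sketch of a scripted check for that case, shown against captured sample output; in practice, pipe `kubectl get pods --no-headers` into it:

```shell
# Print Running pods whose READY count (e.g. 1/2) is not yet complete;
# Completed init pods are ignored
not_fully_ready() {
  awk '$3 == "Running" { split($2, r, "/"); if (r[1] != r[2]) print $1 }'
}

# Sample output stands in for: kubectl get pods --no-headers
not_fully_ready <<'EOF'
acd-manager-5b98d569d9-abc12     1/1  Running    0  3m
acd-manager-redis-master-0       1/2  Running    0  3m
acd-manager-zitadel-init-hh6j7   0/1  Completed  0  4m
EOF
```

Any pod it prints is still bringing up a container; wait until the list is empty.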
Post-Installation
After installation completes, proceed to the Next Steps guide for:
- Initial user configuration
- Accessing the web interfaces
- Configuring authentication
- Setting up monitoring
Accessing the System
Refer to the Accessing the System section in the Getting Started guide for service URLs and default credentials.
Note: A self-signed SSL certificate is deployed by default. You will need to accept the certificate warning in your browser.
Troubleshooting
If pods fail to start:
- Check pod status: kubectl describe pod <pod-name>
- Review logs: kubectl logs <pod-name>
- Verify resources: kubectl top pods
See the Troubleshooting Guide for additional assistance.
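When a pod restarts repeatedly, its logs are usually the fastest lead. A sketch of narrowing them down, shown against captured sample lines; in practice, pipe `kubectl logs <pod-name>` into the filter (the markers are common examples, not an exhaustive list):

```shell
# Keep only log lines carrying common failure markers
grep -E 'Error|FATAL|refused' <<'EOF'
2026-01-01T00:00:00Z INFO  starting up
2026-01-01T00:00:01Z Error connecting to postgresql: connection refused
2026-01-01T00:00:02Z INFO  retrying in 5s
EOF
```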
Next Steps
After successful installation:
- Next Steps Guide - Post-installation configuration
- Configuration Guide - System configuration
- Operations Guide - Day-to-day operations
Appendix: Example Configuration
The following values.yaml provides a minimal working configuration for lab deployments:
```yaml
# Minimal lab configuration for single-node deployment
global:
  hosts:
    manager:
      - host: manager.local
  routers:
    - name: default
      address: 127.0.0.1

# Single-node: Disable Kafka replication
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
```
Customization notes:
- Replace manager.local with your desired hostname
- The routers entry specifies CDN Director instances. The placeholder 127.0.0.1 may be used if a Director instance isn't available, or specify actual Director hostnames for production testing
- For air-gapped deployments, see Step 4: Air-Gapped Deployments