Air-Gapped Deployment Guide
Overview
This guide describes the installation of the AgileTV CDN Manager in air-gapped environments (no internet access). Air-gapped deployments require additional preparation compared to connected deployments.
Key differences from connected deployments:
- Both Installation ISO and Extras ISO are required
- OS installation ISO must be mounted on all nodes
- Container images must be loaded from the Extras ISO on each node
- Additional firewall considerations for OS package repositories
Prerequisites
Required ISOs
Before beginning installation, obtain the following:
| ISO | Filename | Purpose |
|---|---|---|
| Installation ISO | esb3027-acd-manager-X.Y.Z.iso | Kubernetes cluster and Manager application |
| Extras ISO | esb3027-acd-manager-extras-X.Y.Z.iso | Container images for air-gapped environments |
| OS Installation ISO | RHEL 9 or compatible clone | Operating system packages (required on all nodes) |
Single-Node vs Multi-Node
Air-gapped procedures apply to both deployment types:
- Lab/Single-Node: Follow Single-Node Installation with additional air-gapped steps below
- Production/Multi-Node: Follow Multi-Node Installation with additional air-gapped steps below
Network Configuration
Air-gapped environments may have internal network mirrors for OS packages. If no internal mirror exists, the OS installation ISO must be mounted on each node to provide packages during installation.
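Where an internal mirror exists, the nodes can point dnf at it instead of the ISO. A sketch of such a repository definition, with mirror.example.internal as a stand-in for your mirror's address (RHEL 9 splits packages across the BaseOS and AppStream repositories):

```ini
# /etc/yum.repos.d/internal-mirror.repo -- hypothetical internal mirror
[internal-baseos]
name=Internal BaseOS Mirror
baseurl=http://mirror.example.internal/rhel9/BaseOS
enabled=1
gpgcheck=1

[internal-appstream]
name=Internal AppStream Mirror
baseurl=http://mirror.example.internal/rhel9/AppStream
enabled=1
gpgcheck=1
```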
Air-Gapped Installation Steps
Step 1: Prepare All Nodes
On each node (primary server, additional servers, and agents):
Mount the OS installation ISO:
mkdir -p /mnt/os
mount -o loop,ro /path/to/rhel-9.iso /mnt/os
Configure a local repository (if no internal mirror exists):
cat > /etc/yum.repos.d/local.repo <<EOF
[local]
name=Local OS Repository
baseurl=file:///mnt/os/BaseOS
enabled=1
gpgcheck=0
EOF
Verify the repository is accessible:
dnf repolist
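Loop mounts do not survive a reboot. To keep the OS ISO available across reboots, an fstab entry along these lines can be added (a sketch; the ISO path is the placeholder used above):

```shell
# Build the fstab entry for a persistent, read-only loop mount of the OS ISO.
ISO_PATH=/path/to/rhel-9.iso
MOUNT_POINT=/mnt/os
FSTAB_LINE="$ISO_PATH $MOUNT_POINT iso9660 loop,ro 0 0"
echo "$FSTAB_LINE"
# As root, append it and remount everything:
#   echo "$FSTAB_LINE" >> /etc/fstab && mount -a
```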
Step 2: Mount Installation ISOs
On the primary server node first, then each additional node:
# Mount Installation ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027
# Mount Extras ISO
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
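Before running the installer, it is worth confirming both ISOs are actually mounted. A small check along these lines (a sketch; it reads /proc/mounts, so it is Linux-specific):

```shell
# Report whether each given path appears as a mount point in /proc/mounts.
check_mounts() {
  for mnt in "$@"; do
    if awk -v m="$mnt" '$2 == m { found = 1 } END { exit !found }' /proc/mounts; then
      echo "OK: $mnt"
    else
      echo "MISSING: $mnt"
    fi
  done
}

check_mounts /mnt/esb3027 /mnt/esb3027-extras
```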
Step 3: Install Kubernetes Cluster
Primary Server Node
/mnt/esb3027/install
Wait for the installer to complete and verify system pods are running:
kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods -n longhorn-system
Additional Server Nodes (Multi-Node Only)
On each additional server node:
/mnt/esb3027/join-server https://<primary-server-ip>:6443 <node-token>
Agent Nodes (Optional)
On each agent node:
/mnt/esb3027/join-agent https://<primary-server-ip>:6443 <node-token>
Step 4: Load Container Images
On each node in the cluster:
/mnt/esb3027-extras/load-images
This script loads all container images from the Extras ISO into the local container runtime.
Important: This step must be performed on every node (primary server, additional servers, and agents) before deploying the Manager application.
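To confirm the load on a given node, counting the images known to the container runtime is a quick spot check. A sketch, assuming crictl (the CRI client used elsewhere in this guide) is on the PATH:

```shell
# Count images known to the local container runtime via crictl's JSON output
# (each image entry carries a "repoTags" field); prints 0 if crictl is absent.
loaded_images() {
  if command -v crictl >/dev/null 2>&1; then
    crictl images -o json 2>/dev/null | grep -c '"repoTags"'
  else
    echo 0
  fi
}

echo "images loaded: $(loaded_images)"
```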
Step 5: Create Configuration File
Create a Helm values file for your deployment. At minimum, configure the manager hostname and router addresses:
# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.local
routers:
  - name: default
    address: 127.0.0.1

# Single-node: Disable Kafka replication
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1
For multi-node deployments, see the Multi-Node Installation Guide for complete configuration requirements.
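The values file can be checked offline before Step 6 by rendering the chart without installing anything. A sketch, meant to run on the primary server where the Installation ISO is mounted:

```shell
# Render the chart with the given values file; report problems, install nothing.
check_values() {
  if ! command -v helm >/dev/null 2>&1; then
    echo "helm not found (run this on the primary server)"
    return 0
  fi
  if helm template acd-manager /mnt/esb3027/helm/charts/acd-manager \
      --values "$1" >/dev/null 2>&1; then
    echo "values render cleanly"
  else
    echo "render failed: check $1 for YAML or schema errors"
  fi
}

check_values ~/values.yaml
```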
Step 6: Deploy the Manager
Deploy the CDN Manager Helm chart:
helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml
Monitor the deployment progress:
kubectl get pods --watch
Wait for all pods to show Running status before proceeding.
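The --watch output can be noisy on a busy cluster. A small filter over kubectl get pods shows only the pods that have not yet reached Running or Completed; no output means everything has settled (a sketch):

```shell
# Keep only pods whose STATUS column is neither Running nor Completed.
unsettled() { awk 'NR > 1 && $3 != "Running" && $3 != "Completed"'; }

kubectl get pods | unsettled
```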
Step 7: Verify Deployment
Verify all application pods are running:
kubectl get pods
All pods should show Running status (except init pods, which show Completed).
Post-Installation
After installation completes:
- Access the system via HTTPS at https://<manager-host>
- Configure authentication via Zitadel at https://<manager-host>/ui/console
- Set up monitoring via Grafana at https://<manager-host>/grafana
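Once DNS (or a hosts-file entry) points the manager hostname at the cluster, a quick probe can confirm the HTTPS endpoint answers. manager.local below is the placeholder hostname from Step 5; -k tolerates the initial self-signed certificate:

```shell
# Print the HTTP status returned by the manager endpoint ("000" if unreachable).
code=$(curl -ks -o /dev/null -w '%{http_code}' https://manager.local/ || true)
echo "manager endpoint returned: ${code:-unknown}"
```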
See the Next Steps Guide for detailed post-installation configuration.
Updating MaxMind GeoIP Databases
If using GeoIP-based routing, load the MaxMind databases:
/mnt/esb3027/generate-maxmind-volume
The utility will prompt for the database file locations and volume name. Reference the volume in your values.yaml:
manager:
  maxmindDbVolume: maxmind-geoip-2026-04
See the Operations Guide for database update procedures.
Troubleshooting
Image Pull Errors
If pods fail with image pull errors:
- Verify the load-images script completed successfully on all nodes
- Check the container runtime image list: crictl images | grep <image-name>
- Ensure image tags in the Helm chart match tags on the Extras ISO
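When comparing tags, note that the runtime may list images with a registry-host prefix while the chart references them without one (or vice versa). A hypothetical helper to normalize references before comparing:

```shell
# Strip a leading registry host (anything before the first "/" that contains
# a "." or ":") from an image reference, leaving repository:tag.
strip_registry() {
  case "$1" in
    *.*/*|*:*/*) printf '%s\n' "${1#*/}" ;;
    *)           printf '%s\n' "$1" ;;
  esac
}

strip_registry registry.local:5000/acd/manager:1.2.3   # -> acd/manager:1.2.3
```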
OS Package Errors
If the installer reports missing OS packages:
- Verify OS ISO is mounted on the affected node
- Check repository configuration: dnf repolist and dnf info <package-name>
- Ensure the ISO matches the installed OS version
Longhorn Volume Issues
If Longhorn volumes fail to mount:
- Verify the load-images script completed on all nodes
- Check Longhorn system pods: kubectl get pods -n longhorn-system
- Review the Longhorn UI via port-forward: kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
Next Steps
After successful installation:
- Next Steps Guide - Post-installation configuration
- Operations Guide - Day-to-day operational procedures
- Troubleshooting Guide - Common issues and resolution