Air-Gapped Deployment Guide

Installation procedures for air-gapped environments

Overview

This guide describes the installation of the AgileTV CDN Manager in air-gapped environments (no internet access). Air-gapped deployments require additional preparation compared to connected deployments.

Key differences from connected deployments:

  • Both Installation ISO and Extras ISO are required
  • OS installation ISO must be mounted on all nodes
  • Container images must be loaded from the Extras ISO on each node
  • Additional firewall considerations for OS package repositories

Prerequisites

Required ISOs

Before beginning installation, obtain the following:

  • Installation ISO (esb3027-acd-manager-X.Y.Z.iso): Kubernetes cluster and Manager application
  • Extras ISO (esb3027-acd-manager-extras-X.Y.Z.iso): container images for air-gapped environments
  • OS Installation ISO (RHEL 9 or a compatible clone): operating system packages, required on all nodes

Single-Node vs Multi-Node

The air-gapped procedures in this guide apply to both single-node and multi-node deployments; steps that apply only to multi-node clusters are marked accordingly.

Network Configuration

Air-gapped environments may have internal network mirrors for OS packages. If no internal mirror exists, the OS installation ISO must be mounted on each node to provide packages during installation.

Air-Gapped Installation Steps

Step 1: Prepare All Nodes

On each node (primary server, additional servers, and agents):

  1. Mount the OS installation ISO:

    mkdir -p /mnt/os
    mount -o loop,ro /path/to/rhel-9.iso /mnt/os
    
  2. Configure local repository (if no internal mirror):

    cat > /etc/yum.repos.d/local.repo <<EOF
    [local]
    name=Local OS Repository
    baseurl=file:///mnt/os/BaseOS
    enabled=1
    gpgcheck=0
    EOF
    
  3. Verify repository is accessible:

    dnf repolist
    
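If the installer later reports packages missing from the [local] repository, note that standard RHEL 9 installation media ships an AppStream repository alongside BaseOS; whether your particular ISO includes it is an assumption you should verify (for example with ls /mnt/os). A second stanza pointing at it can be added the same way:

```shell
# Assumes the OS ISO is mounted at /mnt/os and provides an AppStream
# directory, as standard RHEL 9 DVD media does.
mkdir -p /etc/yum.repos.d
cat > /etc/yum.repos.d/local-appstream.repo <<EOF
[local-appstream]
name=Local OS AppStream Repository
baseurl=file:///mnt/os/AppStream
enabled=1
gpgcheck=0
EOF
```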

Step 2: Mount Installation ISOs

Mount both the Installation ISO and the Extras ISO, on the primary server node first and then on each additional node:

# Mount Installation ISO
mkdir -p /mnt/esb3027
mount -o loop,ro esb3027-acd-manager-X.Y.Z.iso /mnt/esb3027

# Mount Extras ISO
mkdir -p /mnt/esb3027-extras
mount -o loop,ro esb3027-acd-manager-extras-X.Y.Z.iso /mnt/esb3027-extras
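Before continuing, it is worth confirming that both mounts actually succeeded on each node. A small helper for that check might look like:

```shell
# check_mounts: succeed only if every given path is an active mountpoint;
# report each missing one on stderr.
check_mounts() {
  local rc=0 m
  for m in "$@"; do
    if mountpoint -q "$m"; then
      echo "OK: $m"
    else
      echo "MISSING: $m is not mounted" >&2
      rc=1
    fi
  done
  return $rc
}

# Usage, on each Manager node:
#   check_mounts /mnt/esb3027 /mnt/esb3027-extras
```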

Step 3: Install Kubernetes Cluster

Primary Server Node

/mnt/esb3027/install

Wait for the installer to complete and verify system pods are running:

kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods -n longhorn-system

Additional Server Nodes (Multi-Node Only)

On each additional server node:

/mnt/esb3027/join-server https://<primary-server-ip>:6443 <node-token>

Agent Nodes (Optional)

On each agent node:

/mnt/esb3027/join-agent https://<primary-server-ip>:6443 <node-token>

Step 4: Load Container Images

On each node in the cluster:

/mnt/esb3027-extras/load-images

This script loads all container images from the Extras ISO into the local container runtime.

Important: This step must be performed on every node (primary server, additional servers, and agents) before deploying the Manager application.

Step 5: Create Configuration File

Create a Helm values file for your deployment. At minimum, configure the manager hostname and router addresses:

# ~/values.yaml
global:
  hosts:
    manager:
      - host: manager.local
    routers:
      - name: default
        address: 127.0.0.1

# Single-node: Disable Kafka replication
kafka:
  replicaCount: 1
  controller:
    replicaCount: 1

For multi-node deployments, see the Multi-Node Installation Guide for complete configuration requirements.

Step 6: Deploy the Manager

Deploy the CDN Manager Helm chart:

helm install acd-manager /mnt/esb3027/helm/charts/acd-manager --values ~/values.yaml

Monitor the deployment progress:

kubectl get pods --watch

Wait for all pods to show Running status before proceeding.

Step 7: Verify Deployment

Verify all application pods are running:

kubectl get pods

All pods should show Running status (except init pods which show Completed).
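On a busy cluster it is easy to miss a single failing pod in the listing, so the check above can be scripted. A sketch, assuming the default kubectl table layout where STATUS is the third column:

```shell
# pods_ready: read 'kubectl get pods' output on stdin and succeed only
# if every pod is Running or Completed. STATUS is column 3 in the
# default table layout; this breaks if you change -o formatting.
pods_ready() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { bad = 1 }
       END { exit bad }'
}

# Usage:
#   kubectl get pods | pods_ready && echo "all pods ready"
```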

Post-Installation

After installation completes:

  1. Access the system via HTTPS at https://<manager-host>
  2. Configure authentication via Zitadel at https://<manager-host>/ui/console
  3. Set up monitoring via Grafana at https://<manager-host>/grafana

See the Next Steps Guide for detailed post-installation configuration.

Updating MaxMind GeoIP Databases

If using GeoIP-based routing, load the MaxMind databases:

/mnt/esb3027/generate-maxmind-volume

The utility will prompt for the database file locations and volume name. Reference the volume in your values.yaml:

manager:
  maxmindDbVolume: maxmind-geoip-2026-04

See the Operations Guide for database update procedures.

Troubleshooting

Image Pull Errors

If pods fail with image pull errors:

  1. Verify the load-images script completed successfully on all nodes
  2. Check container runtime image list:
    crictl images | grep <image-name>
    
  3. Ensure image tags in Helm chart match tags on the Extras ISO
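Comparing the two image sets by eye is error-prone; it can be done mechanically given two plain-text files with one image reference per line (for example assembled from the chart's rendered manifests and from crictl images output — how to list the Extras ISO's images is not specified here, so treat the inputs as an assumption):

```shell
# missing_images EXPECTED LOADED
# Print references that appear in EXPECTED but not in LOADED,
# one per line; empty output means nothing is missing.
missing_images() {
  local exp got
  exp=$(mktemp); got=$(mktemp)
  sort -u "$1" > "$exp"
  sort -u "$2" > "$got"
  comm -23 "$exp" "$got"
  rm -f "$exp" "$got"
}
```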

OS Package Errors

If the installer reports missing OS packages:

  1. Verify OS ISO is mounted on the affected node
  2. Check repository configuration:
    dnf repolist
    dnf info <package-name>
    
  3. Ensure the ISO matches the installed OS version

Longhorn Volume Issues

If Longhorn volumes fail to mount:

  1. Verify the load-images script has completed successfully on all nodes
  2. Check Longhorn system pods:
    kubectl get pods -n longhorn-system
    
  3. Review Longhorn UI via port-forward:
    kubectl port-forward -n longhorn-system svc/longhorn-frontend 8080:80
    

Next Steps

After successful installation:

  1. Next Steps Guide - Post-installation configuration
  2. Operations Guide - Day-to-day operational procedures
  3. Troubleshooting Guide - Common issues and resolution