Shared Interface Network Setup

Network configuration for standard single-NIC deployments where all traffic shares a single interface.

Overview

This guide covers network configuration for standard single-NIC deployments. In this architecture, all traffic—including internal cluster communication (East-West) and external internet access (North-South)—is routed through a single network interface.

Security Warning: Because all traffic shares the same interface and firewall zone, there is no physical or logical isolation between cluster management traffic and public-facing service traffic. For production environments requiring security isolation, see Configuring Segregated Networks.

Note: The installer script automatically detects if firewalld is enabled. If so, it will verify that the required inter-node ports are open through the firewall in the default zone before proceeding. If any required ports are missing, the installer will report an error and exit. Application service ports (such as Kafka, VictoriaMetrics, and Telegraf) are not checked by the installer as they are configurable.
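You can run the same kind of pre-flight check manually before invoking the installer. This is a sketch, not the installer's actual logic, and the port list below is illustrative (taken from the East-West rules later in this guide); the installer's own list is authoritative.

```shell
# Report any required inter-node port not yet open in the default zone.
# Skips gracefully if firewalld is not installed on this host.
if command -v firewall-cmd >/dev/null 2>&1; then
  for port in 2379-2380/tcp 6443/tcp 8472/udp 10250/tcp; do
    firewall-cmd --zone=public --query-port="$port" >/dev/null 2>&1 \
      || echo "missing: $port"
  done
else
  echo "firewalld not installed; nothing to check"
fi
```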

For network architecture, port requirements, and general information, see the Network Architecture Overview section in the main Networking Guide.

Firewall Configuration

Assign Interface to Default Zone

Assign your primary network interface to the default zone:

firewall-cmd --permanent --zone=public --change-interface=<interface>
firewall-cmd --reload

Replace <interface> with your actual interface name (e.g., eth0).
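If you are unsure which name to use, one way to discover the primary NIC is to look at the default route. This is a sketch; it assumes the host has an IPv4 default route and the iproute2 tools installed.

```shell
# Print the device carrying the default route (e.g. eth0); on a
# single-NIC host this is the interface to assign to the zone.
if command -v ip >/dev/null 2>&1; then
  ip -o -4 route show to default | awk '{print $5; exit}'
fi
```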

Configure Firewall Rules

In a shared interface setup, you must manually configure firewall rules for both internal cluster traffic and external access, as K3s does not automatically manage the public zone.

# 1. Allow pod and service networks (Internal CIDRs)
firewall-cmd --permanent --zone=public --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=public --add-source=10.43.0.0/16

# 2. Kubernetes and Cluster Infrastructure (East-West Traffic)
# These ports must be opened manually for the cluster to function on a single interface.
firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp   # etcd
firewall-cmd --permanent --zone=public --add-port=6443/tcp        # Kubernetes API server
firewall-cmd --permanent --zone=public --add-port=8472/udp        # Flannel VXLAN
firewall-cmd --permanent --zone=public --add-port=10250/tcp       # kubelet
firewall-cmd --permanent --zone=public --add-port=5001/tcp
firewall-cmd --permanent --zone=public --add-port=9500-9503/tcp
firewall-cmd --permanent --zone=public --add-port=8500-8504/tcp
firewall-cmd --permanent --zone=public --add-port=10000-30000/tcp
firewall-cmd --permanent --zone=public --add-port=3260/tcp        # iSCSI
firewall-cmd --permanent --zone=public --add-port=2049/tcp        # NFS

# 3. External Access Ports (North-South Traffic)
firewall-cmd --permanent --zone=public --add-port=80/tcp    # HTTP
firewall-cmd --permanent --zone=public --add-port=443/tcp   # HTTPS
firewall-cmd --permanent --zone=public --add-port=9095/tcp
firewall-cmd --permanent --zone=public --add-port=6379/tcp  # Redis
firewall-cmd --permanent --zone=public --add-port=8125/tcp  # StatsD
firewall-cmd --permanent --zone=public --add-port=8125/udp  # StatsD

# Apply changes
firewall-cmd --reload
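The East-West rules above can also be applied in a loop to reduce repetition. This is equivalent to the individual commands (the port list is copied from this guide); it requires root and a running firewalld.

```shell
# Apply all East-West ports in one pass, then reload to activate.
for port in 2379-2380/tcp 6443/tcp 8472/udp 10250/tcp 5001/tcp \
            9500-9503/tcp 8500-8504/tcp 10000-30000/tcp 3260/tcp 2049/tcp; do
  firewall-cmd --permanent --zone=public --add-port="$port"
done
firewall-cmd --reload
```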

Verification

Verify all port rules are applied:

firewall-cmd --zone=public --list-all

Expected output:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 10.42.0.0/16 10.43.0.0/16
  services: dhcpv6-client ssh
  ports: 2379-2380/tcp 6443/tcp 8472/udp 10250/tcp 5001/tcp 9500-9503/tcp 8500-8504/tcp 10000-30000/tcp 3260/tcp 2049/tcp 80/tcp 443/tcp 9095/tcp 6379/tcp 8125/tcp 8125/udp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich-rules:

Note: Additional interfaces may appear in the zone (e.g., eth0 eth1) if firewalld auto-assigned them based on network configuration. This is expected and does not affect functionality.

Verify the interface is correctly assigned to the public zone:

firewall-cmd --get-active-zones

Expected output will show eth0 listed under the public zone:

public (active)
  interfaces: eth0

Troubleshooting

Nodes Cannot Communicate

Verify firewall rules allow inter-node traffic in the public zone:

firewall-cmd --list-all

Test basic connectivity between nodes:

ping <node-ip>
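ICMP alone does not prove the firewall rules are correct. If nc (netcat) is available, also probe a specific cluster port from another node; port 6443 (the Kubernetes API server) is a convenient choice when probing a server node.

```shell
# Test TCP reachability of the Kubernetes API port from a peer node.
# Replace <node-ip> with the target node's address.
nc -vz <node-ip> 6443
```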

Post-Installation Troubleshooting

Once the cluster is installed, if you encounter issues with pod-to-pod communication or service access, verify the following:

  1. Flannel Interface: Ensure the flannel.1 interface is up and has the correct IP addresses.
  2. Network Routes: Verify that the pod and service CIDR routes are present in the routing table.
  3. Firewall Rules: Ensure all required Kubernetes and cluster ports are allowed in the public zone.
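The three checks above map roughly to the following commands. This is a sketch; it assumes the default K3s pod and service CIDRs used earlier in this guide (10.42.0.0/16 and 10.43.0.0/16).

```shell
# 1. Flannel VXLAN interface is up and has an address
ip addr show flannel.1

# 2. Pod/service CIDR routes are present in the routing table
ip route | grep -E '10\.4[23]\.'

# 3. Required ports are open in the public zone
firewall-cmd --zone=public --list-ports
```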

For detailed troubleshooting of Kubernetes-specific components (like Ingress or Pod connectivity), please refer to the Kubernetes Troubleshooting Guide.