
Networking Guide

Network architecture and configuration guides

Network Architecture

Physical Network

Each cluster node must have at least one network interface card (NIC) on a network with a reachable default gateway. If the node lacks a pre-configured default route, one must be established prior to installation.

K3s requires a default route to auto-detect the node’s primary IP and for kube-proxy ClusterIP routing to function properly. If no default route exists, create a dummy interface as a workaround:

# Create a dummy interface and assign it an address from the
# documentation (TEST-NET-3) range, then add a low-priority default route
ip link add dummy0 type dummy
ip link set dummy0 up
ip addr add 203.0.113.254/31 dev dummy0
ip route add default via 203.0.113.255 dev dummy0 metric 1000
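After creating the dummy route (or on nodes that already have a real one), you can confirm a default route is visible before running the installer. A minimal check, assuming iproute2 is installed; the helper name is ours:

```shell
# Succeeds when `ip route` output on stdin contains a default route.
has_default_route() {
  grep -q '^default '
}

# Live check (requires iproute2); prints a hint if the route is missing.
if command -v ip >/dev/null 2>&1 && ip route show default | has_default_route; then
  echo "default route present; K3s can auto-detect the node IP"
else
  echo "no default route detected; create the dummy interface shown above"
fi
```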

Overlay Network

Kubernetes creates virtual network interfaces for pods that are typically not associated with any specific firewalld zone. The cluster uses the following network ranges:

Network   CIDR           Purpose
Pod       10.42.0.0/16   Inter-pod communication
Service   10.43.0.0/16   Kubernetes service discovery

Firewall rules should target the primary physical interface; overlay network traffic is handled by Flannel VXLAN.

Port Requirements

Inter-Node Communication

The following ports must be permitted between all cluster nodes for Kubernetes and cluster infrastructure:

Port         Protocol   Source         Destination    Purpose
2379-2380    TCP        Server nodes   Server nodes   etcd cluster communication
6443         TCP        All nodes      Server nodes   Kubernetes API server
8472         UDP        All nodes      All nodes      Flannel VXLAN overlay network
10250        TCP        All nodes      All nodes      Kubelet metrics and management
5001         TCP        All nodes      Server nodes   Spegel registry mirror
9500-9503    TCP        All nodes      All nodes      Longhorn management API
8500-8504    TCP        All nodes      All nodes      Longhorn agent communication
10000-30000  TCP        All nodes      All nodes      Longhorn data replication
3260         TCP        All nodes      All nodes      Longhorn iSCSI
2049         TCP        All nodes      All nodes      Longhorn RWX (NFS)
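To confirm a sample of these ports is actually reachable between nodes, a quick TCP probe can be run from any peer using bash's /dev/tcp. The node address below is a TEST-NET placeholder; substitute a real node IP:

```shell
#!/usr/bin/env bash
# Probe a sample of the fixed inter-node TCP ports on a peer node.
# NODE is a placeholder address; replace with an actual node IP.
NODE="${1:-192.0.2.10}"
for port in 2379 6443 10250 5001 3260 2049; do
  if timeout 1 bash -c ">/dev/tcp/$NODE/$port" 2>/dev/null; then
    echo "$NODE:$port open"
  else
    echo "$NODE:$port closed or filtered"
  fi
done
```

Note that /dev/tcp can only probe TCP; the 8472/udp Flannel port needs a different tool (e.g. nc -u) and is not covered by this sketch.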

Application Services Ports

The following ports must be accessible for application services within the cluster:

Port   Protocol   Service
6379   TCP        Redis
9093   TCP        Alertmanager
9095   TCP        Kafka
8086   TCP        Telegraf (InfluxDB v2 listener)

External Access Ports

The following ports must be accessible from external clients to cluster nodes:

Port   Protocol   Service
80     TCP        HTTP ingress (optional; redirects to HTTPS)
443    TCP        HTTPS ingress (required; all services)
9095   TCP        Kafka (external client connections)
6379   TCP        Redis (external client connections)
8125   TCP/UDP    Telegraf (metrics collection)

Network Configuration Guides

Deployment Type

Choose the guide that matches your deployment architecture:

  • Configuring Segregated Networks: multi-NIC deployments with an air-gapped cluster backplane. Recommended for most users, i.e. those with separate interfaces for cluster traffic and external internet access.
  • Shared Interface Setup: single-NIC deployments where all traffic shares one interface. For users with a single network interface serving both cluster traffic and external access.

Not sure which to use? If you have separate interfaces for cluster communication and external access, start with Configuring Segregated Networks. Use the shared interface guide only if your hardware is limited to a single NIC.

1 - Shared Interface Network Setup

Network configuration for standard single-NIC deployments where all traffic shares a single interface.

Overview

This guide covers network configuration for standard single-NIC deployments. In this architecture, all traffic—including internal cluster communication (East-West) and external internet access (North-South)—is routed through a single network interface.

Security Warning: Because all traffic shares the same interface and firewall zone, there is no physical or logical isolation between cluster management traffic and public-facing service traffic. For production environments requiring security isolation, see Configuring Segregated Networks.

Note: The installer script automatically detects if firewalld is enabled. If so, it will verify that the required inter-node ports are open through the firewall in the default zone before proceeding. If any required ports are missing, the installer will report an error and exit. Application service ports (such as Kafka, VictoriaMetrics, and Telegraf) are not checked by the installer as they are configurable.
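The installer's port check can be approximated by hand before running it. A simplified sketch (not the installer's actual code; the port subset and variable names here are illustrative):

```shell
# Compare a subset of required inter-node ports against what is currently
# open in the default firewalld zone. This approximates, but is not, the
# installer's own check.
required="2379-2380/tcp 6443/tcp 8472/udp 10250/tcp 5001/tcp"
open=$(firewall-cmd --list-ports 2>/dev/null || echo "")
missing=""
for p in $required; do
  case " $open " in
    *" $p "*) ;;                       # port already open
    *) missing="$missing $p" ;;        # collect anything not yet open
  esac
done
[ -z "$missing" ] && echo "all required ports open" || echo "missing:$missing"
```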

For network architecture, port requirements, and general information, see the Network Architecture Overview section in the main Networking Guide.

Firewall Configuration

Assign Interface to Default Zone

Assign your primary network interface to the default zone:

firewall-cmd --permanent --zone=public --change-interface=<interface>
firewall-cmd --reload

Replace <interface> with your actual interface name (e.g., eth0).
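If you are unsure of the interface name, it can usually be derived from the default route rather than guessed. A sketch (the helper name is ours):

```shell
# Print the interface carrying the default route, read from stdin.
iface_from_route() {
  awk '/^default/ { for (i = 1; i < NF; i++) if ($i == "dev") print $(i + 1); exit }'
}

IFACE=$(ip route show default 2>/dev/null | iface_from_route)
echo "primary interface: ${IFACE:-unknown}"
# Then, for example:
#   firewall-cmd --permanent --zone=public --change-interface="$IFACE"
```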

Configure Firewall Rules

In a shared interface setup, you must manually configure firewall rules for both internal cluster traffic and external access, as K3s does not automatically manage the public zone.

# 1. Allow pod and service networks (Internal CIDRs)
firewall-cmd --permanent --zone=public --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=public --add-source=10.43.0.0/16

# 2. Kubernetes and Cluster Infrastructure (East-West Traffic)
# These ports must be opened manually for the cluster to function on a single interface.
firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp
firewall-cmd --permanent --zone=public --add-port=6443/tcp
firewall-cmd --permanent --zone=public --add-port=8472/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=5001/tcp
firewall-cmd --permanent --zone=public --add-port=9500-9503/tcp
firewall-cmd --permanent --zone=public --add-port=8500-8504/tcp
firewall-cmd --permanent --zone=public --add-port=10000-30000/tcp
firewall-cmd --permanent --zone=public --add-port=3260/tcp
firewall-cmd --permanent --zone=public --add-port=2049/tcp

# 3. External Access Ports (North-South Traffic)
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=9095/tcp
firewall-cmd --permanent --zone=public --add-port=6379/tcp
firewall-cmd --permanent --zone=public --add-port=8125/tcp
firewall-cmd --permanent --zone=public --add-port=8125/udp

# Apply changes
firewall-cmd --reload

Verification

Verify all port rules are applied:

firewall-cmd --zone=public --list-all

Expected output:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 10.42.0.0/16 10.43.0.0/16
  services: dhcpv6-client ssh
  ports: 2379-2380/tcp 6443/tcp 8472/udp 10250/tcp 5001/tcp 9500-9503/tcp 8500-8504/tcp 10000-30000/tcp 3260/tcp 2049/tcp 80/tcp 443/tcp 9095/tcp 6379/tcp 8125/tcp 8125/udp
  protocols: 
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Note: Additional interfaces may appear in the zone (e.g., eth0 eth1) if firewalld auto-assigned them based on network configuration. This is expected and does not affect functionality.

Verify the interface is correctly assigned to the public zone:

firewall-cmd --get-active-zones

Expected output will show eth0 listed under the public zone:

public (active)
  interfaces: eth0

Troubleshooting

Nodes Cannot Communicate

Verify firewall rules allow inter-node traffic in the public zone:

firewall-cmd --list-all

Test basic connectivity between nodes:

ping <node-ip>

Post-Installation Troubleshooting

Once the cluster is installed, if you encounter issues with pod-to-pod communication or service access, verify the following:

  1. Flannel Interface: Ensure the flannel.1 interface is up and has the correct IP address.
  2. Network Routes: Verify that the pod and service CIDR routes are present in the routing table.
  3. Firewall Rules: Ensure all required Kubernetes and cluster ports are allowed in the public zone.
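These three checks map to concrete commands (a diagnostic sketch; run on an affected node):

```shell
# 1. Flannel interface state
ip addr show flannel.1 2>/dev/null || echo "flannel.1 not present"

# 2. Pod/service CIDR routes (10.42.x / 10.43.x)
ip route 2>/dev/null | grep -E '10\.4[23]\.' || echo "no pod/service routes found"

# 3. Ports currently open in the public zone
firewall-cmd --zone=public --list-ports 2>/dev/null || echo "firewalld not available"
```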

For detailed troubleshooting of Kubernetes-specific components (like Ingress or Pod connectivity), please refer to the Kubernetes Troubleshooting Guide.

2 - Configuring Segregated Networks

Multi-NIC deployment guide for air-gapped or segregated network setups

Overview

This guide covers configuring a cluster with separate interfaces for internal cluster communication and external internet access (also known as segregated or dual-homed deployments). In this setup, eth1 handles the internal cluster traffic (pod-to-pod, control plane) while eth0 provides public internet access.

Security Benefit: This configuration provides physical isolation between East-West (cluster) and North-South (external) traffic. The trusted zone allows unrestricted internal communication, while the public zone handles external access with controlled port exposure.

When configuring segregated networks with K3s, proper interface binding is essential. K3s uses the --flannel-iface flag to ensure pod traffic stays on the private network, and the --node-external-ip flag to advertise the public address for external access.

Important: K3s manages pod masquerading and service routing automatically. You only need to configure firewalld zones correctly and pass the proper flags to the K3s installer.
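As a concrete illustration of those flags, a server install for this layout might look as follows. The interface names and addresses are placeholders, not prescriptions; substitute your own:

```shell
# Sketch: K3s server install for a dual-homed node, where
# eth1/192.168.10.10 is the internal cluster network and
# 198.51.100.10 is the node's public address (both assumed values).
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --flannel-iface=eth1 \
  --node-ip=192.168.10.10 \
  --node-external-ip=198.51.100.10" sh -
```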

Complete, step-by-step instructions follow.

Prerequisites

Before starting, ensure:

  • Operating system is installed and updated on all nodes
  • Network connectivity between nodes is available
  • SSH access is configured for all cluster nodes

Configure Firewalld Zones

This guide configures separate zones for internal cluster traffic and external access.

Assign Interfaces to Zones

The trusted zone is used for the internal network to allow unrestricted pod-to-pod and control plane traffic:

# Assign eth0 (external/internet) to public zone
firewall-cmd --permanent --zone=public --change-interface=eth0

# Assign eth1 (internal/cluster) to trusted zone
firewall-cmd --permanent --zone=trusted --change-interface=eth1

# Allow pod and service CIDRs in trusted zone (required for pod communication)
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16

# Reload firewall
firewall-cmd --reload
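To confirm the bindings took effect, firewalld can report the zone of each interface (interface names assumed from the commands above):

```shell
# Query which zone each interface is currently bound to.
for i in eth0 eth1; do
  zone=$(firewall-cmd --get-zone-of-interface="$i" 2>/dev/null || echo "unknown")
  echo "$i -> $zone"
done
```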

Configure Firewall Ports

Open the necessary ports on the public zone for external access:

# External access ports
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=9095/tcp
firewall-cmd --permanent --zone=public --add-port=6379/tcp
firewall-cmd --permanent --zone=public --add-port=8125/tcp
firewall-cmd --permanent --zone=public --add-port=8125/udp

# Apply changes
firewall-cmd --reload

Note: K3s automatically creates iptables rules for internal cluster ports (6443, 10250, 2379-2380, 8472, 5001, 9500-9503, 8500-8504, 10000-30000, 3260, 2049) when using --flannel-iface=eth1. Pod and service CIDRs (10.42.0.0/16 and 10.43.0.0/16) are already allowed in the trusted zone via the --add-source commands above.
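You can see the rules K3s programs by listing its iptables chains (chain names vary with the K3s and kube-proxy version, so this is a rough filter, not an exhaustive check):

```shell
# List kube/flannel-related iptables rules, if any are visible on this host.
iptables -S 2>/dev/null | grep -iE 'kube|flannel' || echo "no kube/flannel iptables rules visible"
```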

Verify Zone Configuration

firewall-cmd --zone=public --list-all
firewall-cmd --zone=trusted --list-all

Expected output for public zone:

public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0 eth2
  sources: 
  services: dhcpv6-client ssh cockpit
  ports: 80/tcp 443/tcp 9095/tcp 6379/tcp 8125/tcp 8125/udp
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

Expected output for trusted zone:

trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: eth1
  sources: 10.42.0.0/16 10.43.0.0/16
  services: ssh mdns
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

Note: Additional interfaces may appear in a zone (e.g., eth0 eth2) if firewalld auto-assigned them based on network configuration. This is expected and does not affect functionality.

Single-NIC Alternative

If you only have a single network interface, see the Shared Interface Setup guide instead. This guide is specifically for multi-NIC deployments with separate interfaces for cluster and external traffic.

Troubleshooting

Verify Zone Configuration

If pods cannot communicate with services, verify the trusted zone has the correct sources configured:

firewall-cmd --zone=trusted --list-all

Expected output:

trusted (active)
  target: ACCEPT
  icmp-block-inversion: no
  interfaces: eth1
  sources: 10.42.0.0/16 10.43.0.0/16
  services: ssh mdns
  ports: 
  protocols: 
  forward: yes
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

Ensure both 10.42.0.0/16 (pod network) and 10.43.0.0/16 (service network) are listed under sources. If missing, re-run:

firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16
firewall-cmd --reload
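The check-and-repair above can be wrapped in a small idempotent loop (a sketch; the function name is ours and firewalld must be running):

```shell
# Ensure both cluster CIDRs are attached to the trusted zone; add any missing.
ensure_trusted_sources() {
  command -v firewall-cmd >/dev/null 2>&1 || { echo "firewalld not installed"; return 1; }
  for cidr in 10.42.0.0/16 10.43.0.0/16; do
    firewall-cmd --zone=trusted --list-sources | grep -qF "$cidr" ||
      firewall-cmd --permanent --zone=trusted --add-source="$cidr"
  done
  firewall-cmd --reload
}

ensure_trusted_sources || echo "could not verify trusted zone sources"
```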