Setup SDN in Proxmox – 2
Software-Defined Networking (SDN) enables the creation of virtual zones and networks (VNets), including DHCP. This functionality simplifies advanced network configurations for home labbing; the benefit is a more responsive and adaptable network infrastructure. #pve #sdn
What is SDN? The Software-Defined Network (SDN) feature in Proxmox VE enables the creation of virtual zones and networks (VNets). This functionality simplifies advanced networking configurations and multi-tenancy setups.
Introduction
The Proxmox SDN implementation allows for separation and control of virtual guest networks, using software-controlled configurations.
Separation is managed through zones, virtual networks (VNets), and subnets.
- A zone is its own virtually separated network area.
- A VNet is a virtual network that belongs to a zone. It is configured in the Datacenter section and becomes available as a Linux bridge on each node.
- A subnet is an IP range inside a VNet.
Depending on the type of the zone, the network behaves quite differently.
Use cases for SDN range from an isolated private network on each individual node to complex overlay networks across multiple PVE clusters in different locations.
Current Status
The current support status for the various layers of the SDN implementation is as follows:
- Core SDN, which includes VNet management and its integration with the Proxmox VE stack, is fully supported.
- IPAM, including DHCP management for virtual guests, is in tech preview.
- Complex routing via FRRouting and controller integration are in tech preview.
Prerequisites
Systems installed with Proxmox VE 8.1 or later ship with the SDN components preinstalled, and the integration is available out of the box.
On older versions, or on systems upgraded to 8.1, you need to install the SDN package on each node in the cluster:
apt update && apt install libpve-network-perl
You also need to ensure that the following line is present at the end of the /etc/network/interfaces file on all nodes:
source /etc/network/interfaces.d/*
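To add the line only when it is missing, a one-liner like this sketch works (run as root on each node):
grep -qxF 'source /etc/network/interfaces.d/*' /etc/network/interfaces || echo 'source /etc/network/interfaces.d/*' >> /etc/network/interfaces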
DHCP IPAM
The DHCP integration in PVE uses the built-in IP Address Management stack and currently relies on dnsmasq for handing out DHCP leases. This is currently opt-in.
Proxmox VE requires the dnsmasq package for SDN features like DHCP management and network addressing. Install it on all nodes and disable the default instance, since Proxmox VE runs its own per-zone dnsmasq instances:
apt update && apt install dnsmasq && systemctl disable --now dnsmasq
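After a zone with DHCP enabled has been applied, you can check which dnsmasq instances are running on a node (a quick sanity check; the exact unit names may vary between PVE versions):
systemctl list-units 'dnsmasq*'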
FRRouting
The Proxmox VE SDN stack uses the FRRouting project for advanced setups. This is currently opt-in. To use the SDN routing integration, you need to install the frr-pythontools package on all nodes:
apt update && apt install frr-pythontools
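Assuming the frr package was pulled in as a dependency, you can confirm the routing daemon is available using its shell:
vtysh -c 'show version'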
PowerDNS
PowerDNS solutions are aimed at large-scale DNS service providers, including mobile and fixed-line broadband operators as well as hosting and cloud service providers. It can run in Docker.
If you wish to use the PowerDNS integration, you need to install it on a server first.
NetBox
NetBox combines the traditional disciplines of IP address management (IPAM) and datacenter infrastructure management (DCIM) with powerful APIs and extensions. It can run in Docker.
If you wish to use the NetBox integration, you need to install it on a server first.
IPAM
IP Address Management is a methodology implemented in computer software for planning and managing the assignment and use of IP addresses and closely related resources of a computer network.
Setting up a DDI
A DDI solution (DNS, DHCP and IPAM) is an essential tool for the enterprise. This is because as enterprises grow, they continue to add new IP addresses to their network at an ever-increasing pace. The DDI solution should provide the enterprise with the necessary tools to configure, automate, integrate and administer the IP addresses and related services. These tools may be provided in a variety of formats.
A fully integrated DDI solution typically comprises three basic elements:
- IPAM, IP address management
- DNS services
- DHCP services
Proxmox SDN integrates with IPAM (NetBox or phpIPAM), DNS (PowerDNS), and DHCP (dnsmasq).
Setting Up Proxmox SDN
This section walks through setting up software-defined networking (SDN) on a Proxmox cluster and connecting an existing Linux guest to it. In this overview, we will enable automatic DHCP on the network interface, so the machine can pull an IP address from the configured range.
A good starting point is the Simple zone. It creates an isolated VNet bridge on each cluster node. This bridge is not linked to a physical interface, therefore traffic stays local to the node the guests are running on. It can be used in NAT or routed setups, but that is outside the scope of this introduction to Proxmox SDN.
Simple Zone Setup Example
ℹ️ Connections between all VMs/CTs on the same node are possible
ℹ️ Connections to VMs/CTs on other nodes cannot be established
Go to Datacenter → SDN and start the configuration
- Zone: hit Add and select Simple
  - ID: give the zone an ID, e.g. sdn100
  - MTU: auto (check the documentation)
  - Nodes: All
  - IPAM: pve
  - Advanced: check ☑️ to open the advanced options
    - Leave everything blank except automatic DHCP: check ☑️
- Create a VNet: hit Create
  - ID: give the VNet a name, e.g. vnet100 (up to 8 characters)
  - Alias: assign an optional comment
  - Zone: select sdn100
  - Tag: leave blank. A tag can't be used with a Simple zone; it's a unique VLAN or VXLAN ID.
  - VLAN Aware: leave blank. This enables the VLAN-aware option on the interface, allowing VLAN configuration in the guest.
- Create a Subnet: still in the VNet tab, select the VNet and in the Subnet section hit Create
  - General
    - Subnet: a CIDR address, e.g. 10.10.100.0/24
    - Gateway: 10.10.100.1
    - SNAT: check ☑️ (see the note about SNAT below)
    - DNS Zone Prefix: lab.example.com
  - DHCP Ranges: hit Add
    - Start Address: 10.10.100.100 | End Address: 10.10.100.150
- Apply the SDN configuration: go to the SDN section and hit Apply to create the VNets locally on each node.
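For repeatable setups, the same zone, VNet, and subnet can also be created from the CLI with pvesh. A sketch using the values from this example (double-check the flags against your PVE version's API documentation):
pvesh create /cluster/sdn/zones --type simple --zone sdn100 --ipam pve --dhcp dnsmasq
pvesh create /cluster/sdn/vnets --vnet vnet100 --zone sdn100
pvesh create /cluster/sdn/vnets/vnet100/subnets --type subnet --subnet 10.10.100.0/24 --gateway 10.10.100.1 --snat 1 --dhcp-range start-address=10.10.100.100,end-address=10.10.100.150
pvesh set /cluster/sdn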
After attaching a VM/CT to the VNet, it will grab a DHCP address from the configured range, and we can ping the gateway established in the configuration. We now have an internal, software-defined network for VM/CT traffic, fully separated from the other physical networks.
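For a Debian-based guest attached to vnet100, a minimal DHCP configuration in /etc/network/interfaces looks like this (eth0 is an example interface name; adjust to your guest):
auto eth0
iface eth0 inet dhcp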
SNAT
To allow SNAT, edit the /etc/network/interfaces.d/sdn file and add allow-hotplug enp1s0 at the beginning (enp1s0 is just an example NIC name). SNAT should be checked on the subnet. The node itself joins this network with the gateway IP 10.10.100.1 and functions as the NAT gateway for guests within the subnet range.
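Proxmox implements SNAT for the subnet with an iptables POSTROUTING rule on the node. As a quick check (the exact rule format may differ between versions), list the NAT table and filter for the subnet:
iptables -t nat -S POSTROUTING | grep 10.10.100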
VLAN Zone Setup Example
When VMs on different nodes need to communicate through an isolated network, the VLAN zone provides network-level isolation using VLAN tags.
ℹ️ Node 1 VM ⇆ Node 2 VM communication
- Create a zone: vlanNET
- Create a VNet: vnetVLAN, Zone: vlanNET, Tag: 100
- Go to SDN and hit Apply to create the VNets locally on each node.
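The equivalent pvesh sketch (the --bridge value vmbr0 is an assumption for the local bridge carrying the VLANs; adjust to your setup):
pvesh create /cluster/sdn/zones --type vlan --zone vlanNET --bridge vmbr0
pvesh create /cluster/sdn/vnets --vnet vnetVLAN --zone vlanNET --tag 100
pvesh set /cluster/sdn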
How to use a VLAN-zone
Create a Debian-based VM (vm10000) on node pve-10, with a vNIC on vnetVLAN.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
address 10.0.100.100/24
Create a second virtual machine (vm10010) on node pve-12, with a vNIC on the same VNet vnetVLAN as vm10000.
Use the following network configuration for this VM:
auto eth0
iface eth0 inet static
address 10.0.100.101/24
You should now be able to ping between both VMs using that network.
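A quick check from vm10000, using the addresses configured above:
ping -c 3 10.0.100.101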
VXLAN Zone
The example assumes a cluster with the node IP addresses: 10.100.30.1, 10.100.30.2 and 10.100.30.3.
Create a VXLAN zone named netLAN30 and add all node IPs to the peer address list. Use the default MTU of 1450 or configure it accordingly.
VXLAN Zone Setup Example
When VMs on different nodes need to communicate through an isolated network, the VXLAN zone provides network-level isolation using VXLAN tunnels between the nodes.
- Create a zone: netLAN30
  - MTU: 1450 or auto (1450 is the default for VXLAN)
  - Peer Address List: 10.100.30.1, 10.100.30.2 and 10.100.30.3
- Create a VNet: vxLAN30, Zone: netLAN30, Tag: 100000
- Go to SDN and hit Apply to create the VNets locally on each node.
How to use a VXLAN-zone
Create two (or more) VMs on different nodes, using the same VXLAN VNet.
Create the first Debian VM (vm10000) on node pve-11, with a vNIC on vxLAN30.
VM10000 network configuration (note the lower MTU).
auto eth0
iface eth0 inet static
address 10.0.30.100/24
mtu 1450
Create a second Debian VM (vm10010) on node pve-12, with a vNIC on vxLAN30.
VM10010 network configuration (note the lower MTU).
auto eth0
iface eth0 inet static
address 10.0.30.101/24
mtu 1450
Now, you should be able to ping between vm10000 and vm10010.
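Because VXLAN adds about 50 bytes of encapsulation overhead, it is worth verifying that full-size packets fit through the tunnel. With an interface MTU of 1450, a 1450-byte IP packet corresponds to a 1422-byte ICMP payload (1450 minus 20 bytes IP header and 8 bytes ICMP header). From vm10000:
ping -M do -s 1422 -c 3 10.0.30.101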
All Zone Types
There are many types of zones. Some might need additional software.
- Simple: an isolated bridge that provides simple layer 3 routing (NAT).
- VLAN: enables the traditional method of dividing up a LAN into several VLANs. The VLAN zone uses an existing local Linux or OVS bridge connected to the Proxmox host's NIC.
- QinQ: stacked VLAN (IEEE 802.1ad).
- VXLAN: a layer 2 VXLAN network created using a UDP tunnel.
- EVPN (BGP EVPN): VXLAN that uses BGP to create layer 3 routing. In this configuration, you can replace a load balancer by creating exit nodes to force traffic through a primary exit node.