Home Lab Plans

1. Dedicated Proxmox cluster network (“corosync link”)

  • Fit second NICs to the other two nodes.
  • Use a separate physical network (separate switch or crossover cables) for Proxmox cluster traffic only (corosync, live migration).
  • Example:
    • Port 1 = Production LAN (VMs, Internet, users).
    • Port 2 = Private cluster network (just between nodes).

Benefits:

  • Cluster communication becomes faster and more reliable.
  • Avoids cluster split-brain if the production network glitches.
  • Improves live migration speed because data doesn’t fight for bandwidth with normal traffic.

This is highly recommended once a cluster gets beyond hobby size.


2. Storage replication traffic isolation

  • If you use ZFS replication or PBS backups between nodes, you can route replication traffic over the second (private) network too (a config sketch follows below).
  • Keeps heavy backup and snapshot traffic off your LAN.

Benefits:

  • Faster, smoother storage sync.
  • Protects normal user-facing services from backup-induced slowdowns.
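
A minimal sketch of the knob involved, assuming the 10.10.10.0/24 range used later in this plan: the migration setting in /etc/pve/datacenter.cfg (a cluster-wide file) pins live-migration traffic to the private network, and recent Proxmox versions route the built-in ZFS storage replication over the same network.

migration: secure,network=10.10.10.0/24

PBS backup traffic is steered separately, by pointing the PBS storage entry at the backup server's private address (covered later in this plan).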

3. LACP Bonding for node resilience

  • If you can’t build a dedicated private network (because of switch cost, etc.), you could bond the two NICs on each node into an LACP group (this needs a switch that supports LACP/802.3ad); see the sketch below.
  • Provides failover (if cable/switch port fails, node stays online).
  • Some throughput gain (depends on traffic patterns and switch).

Benefit: Proxmox will survive NIC failure without losing the node.
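
A minimal /etc/network/interfaces sketch of that setup, assuming the two ports are eth0 and eth1, the matching switch ports are configured as an LACP (802.3ad) group, and example LAN addressing; adjust names and addresses to your own network:

# Both NICs join the bond; the switch side must be set up as an LACP group
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3

# Management and VM traffic ride on a bridge over the bond
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

If the switch can't do LACP, bond-mode active-backup gives the failover benefit with any switch, just without the throughput gain.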


4. Out-of-band management

  • Use one NIC only for management (SSH, Web GUI, etc.).
  • Keep user VM traffic separate from Proxmox administrative tasks (see the bridge sketch below).

Benefit: If a VM network misbehaves, you still have reliable management access.
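
A rough sketch of that split (interface names and addresses are only examples): the node's admin IP lives on a bridge over one NIC, and guests attach to a second bridge over the other NIC.

# vmbr0 = management only: holds the node's admin IP, no VMs attached here
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11/24
    gateway 192.168.1.1
    bridge-ports eth0
    bridge-stp off
    bridge-fd 0

# vmbr1 = guest traffic only: point VM network devices at this bridge
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0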


5. Multi-VLAN setup over fewer cables

  • Fit second NICs but run both NICs into a small managed switch.
  • Define VLANs:
    • VLAN 10 = Cluster traffic
    • VLAN 20 = Storage traffic
    • VLAN 30 = User/VM traffic

You can then “slice and dice” traffic logically without needing three cables per node; see the interface sketch below.

Benefit: Flexibility without a physical cabling mess.
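
A minimal sketch using a VLAN-aware bridge, with the VLAN IDs from the list above and made-up addresses (assumes ifupdown2, the default on current Proxmox):

# eth1 carries the tagged trunk to the managed switch
auto vmbr1
iface vmbr1 inet manual
    bridge-ports eth1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 10 20 30

# VLAN 10 = cluster traffic: node address on the bridge's VLAN 10 interface
auto vmbr1.10
iface vmbr1.10 inet static
    address 10.10.10.1/24

# VLAN 20 = storage traffic
auto vmbr1.20
iface vmbr1.20 inet static
    address 10.10.20.1/24

VMs then attach to vmbr1 with VLAN tag 30 set on their virtual network device.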


Rough Hardware Needs

  • Two extra NICs (PCIe or USB 3.0 1GbE adapters are <$50 each).
  • One small managed switch (8-port is fine; ~$70 AUD).
  • Some patch cables (Cat5e/6).

Simple High-Value Plan

(minimal cost, big benefit)

  1. Add second NICs to the two single-port nodes.
  2. Wire the second NICs to a cheap switch (dedicated private switch OK).
  3. Use that second network only for:
    • Proxmox cluster communication (corosync)
    • (Optional) Backup replication traffic (PBS).

Main NICs stay connected to your regular LAN for normal VM access.


In short:

🔵 With second NICs you can massively harden and stabilize your Proxmox cluster with very little money.
I would strongly recommend setting up a dedicated cluster network if you go to the effort of fitting second ports.


📈 Cluster Network Layout: Two NICs per Node

Internet
   │
WAN modem/router
   │
Production LAN switch (cheap or existing switch)
   │
+------------------+------------------+------------------+
| Node 1 (MiniPC)  | Node 2           | Node 3           |
| (4 ports)        | (now 2 ports)    | (now 2 ports)    |
|                  |                  |                  |
| eth0 (LAN)       | eth0 (LAN)       | eth0 (LAN)       |
|  ↕               |  ↕               |  ↕               |
| Production LAN   | Production LAN   | Production LAN   |
+------------------+------------------+------------------+

Separately:
+--------------------------------------------------------+
| Private "Cluster network" (new small switch)           |
|    ↕                 ↕                 ↕               |
|  eth1 (cluster)    eth1 (cluster)    eth1 (cluster)    |
|  (Node 1)          (Node 2)          (Node 3)          |
+--------------------------------------------------------+

Two independent networks:

  • eth0 = LAN (VM traffic, Internet access, Proxmox GUI for users)
  • eth1 = Cluster-only traffic (corosync, PBS replication)

✅ Even if LAN glitches, cluster heartbeat survives.
✅ Backup traffic (PBS) can be pushed onto the private network later if you want.


🛠️ Quick Proxmox Setup

1. Create a new network interface on each node

  • In the Proxmox web GUI:
    • Datacenter → Node → System → Network
    • Add the new interface (eth1)
    • Give it a static IP address, e.g.:
      • Node 1: 10.10.10.1/24
      • Node 2: 10.10.10.2/24
      • Node 3: 10.10.10.3/24
    • No gateway needed (not routed).

Example /etc/network/interfaces snippet:

auto eth1
iface eth1 inet static
    address 10.10.10.1/24

(Repeat with .2, .3 addresses on other nodes.)
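
To apply and sanity-check the change (ifreload comes with ifupdown2, the default on current Proxmox; the target below is Node 2's example address):

ifreload -a            # apply the new interface config without a reboot
ping -c 3 10.10.10.2   # confirm the private network works node-to-node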


2. Reconfigure Corosync to use private IPs

Changing the cluster link addresses is done by editing the cluster-wide config file (the web UI shows the cluster under Datacenter → Cluster, but does not let you edit link addresses):

  • Edit /etc/pve/corosync.conf on one node.
  • Change each node's ring0_addr from its 192.168.x.x address to its 10.10.10.x address.
  • Increment the config_version number in the totem section so the change propagates to all nodes.

Example:

nodelist {
  node {
    name: node1
    nodeid: 1
    ring0_addr: 10.10.10.1
  }
  node {
    name: node2
    nodeid: 2
    ring0_addr: 10.10.10.2
  }
  node {
    name: node3
    nodeid: 3
    ring0_addr: 10.10.10.3
  }
}

Then restart the corosync service on all nodes (systemctl restart corosync), or simply reboot each node. Afterwards, confirm the cluster is healthy on the new addresses with the commands below.
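
Two quick checks that the cluster came back healthy on the new addresses (both tools ship with Proxmox):

pvecm status           # should list all three nodes and report "Quorate: Yes"
corosync-cfgtool -s    # shows the local link status and the 10.10.10.x address in use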


3. (Optional) Tell PBS to use private network

  • If you backup nodes to your Proxmox Backup Server (PBS):
    • In the Storage settings, use the 10.10.10.x IP of the PBS server (an example storage entry follows below).
  • Speeds up backup a lot on busy LANs.
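
For reference, a hypothetical /etc/pve/storage.cfg entry pointing at a PBS on the private network (the storage name, datastore, and user below are placeholders; add the fingerprint shown on your PBS dashboard):

pbs: pbs-private
        server 10.10.10.4
        datastore backups
        username backup@pbs
        content backup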

🛒 Shopping List (Minimal)

Item                                   Notes                                     Est. Price
2x 1GbE NICs                           PCIe; USB 3.0 is OK for cluster traffic   ~$40–$50 ea
1x 5-port or 8-port unmanaged switch   Cheap (TP-Link, Netgear)                  ~$30–$50
3x Cat5e/6 cables                      1–2m each                                 ~$5 total

Total: Under $150 AUD
(Maybe half that if you find good used parts.)


Summary of benefits

  • Faster and safer cluster sync and quorum stability.
  • No impact on user traffic if cluster replicates or hiccups.
  • Live migration between nodes is faster.
  • Storage replication (PBS, ZFS) traffic won’t clog up user network.
  • Management GUI for Proxmox stays reachable even if something breaks.

BONUS TIP

If you ever add more nodes or another PBS, you already have a clean private network ready to handle that — no rework needed.


🖥️ Proxmox Cluster Cabling Layout

 [Internet]
     │
 [Main Router]
     │
 [Main LAN Switch (existing)]
     │
     ├──> Node 1 eth0 (LAN)
     │
     ├──> Node 2 eth0 (LAN)
     │
     └──> Node 3 eth0 (LAN)


(Separately)

[Private Cluster Switch (new 5/8 port)]  
                │
                ├──> Node 1 eth1 (Private Cluster IP 10.10.10.1)
                │
                ├──> Node 2 eth1 (Private Cluster IP 10.10.10.2)
                │
                └──> Node 3 eth1 (Private Cluster IP 10.10.10.3)

📋 Connection Summary

Node     Port   Connects To              IP Address
Node 1   eth0   Main LAN switch          192.168.x.x
Node 1   eth1   Private Cluster Switch   10.10.10.1
Node 2   eth0   Main LAN switch          192.168.x.x
Node 2   eth1   Private Cluster Switch   10.10.10.2
Node 3   eth0   Main LAN switch          192.168.x.x
Node 3   eth1   Private Cluster Switch   10.10.10.3

🔌 Equipment needed

  • Main LAN switch — already existing (could be any standard switch/router combo).
  • Private cluster switch — cheap 5-port Gigabit switch is enough.
  • Patch cables — Cat5e or Cat6.
  • Two new NICs for the single-port nodes.

🛡️ Security Note

  • The private switch does not need DHCP or Internet — purely static addresses.
  • Could even leave it physically disconnected from your LAN if you want an air-gapped cluster network.
  • OPNsense firewall rules (optional) could further protect if you VLAN later.

📈 Simple Visual Summary

                         Main LAN
                    +-----------------+
Internet --- Router |   LAN Switch    |--- VM access, Proxmox GUI, Internet
                    +-----------------+

                         Cluster Private Net
                    +-----------------+
                    | Private Switch  |--- Corosync, Backup, Migrations
                    +-----------------+

Next Steps after cabling

  1. Set static IPs on eth1 on all nodes (10.10.10.x range).
  2. Tell Corosync to use eth1 addresses.
  3. Optionally adjust PBS and replication to use 10.10.10.x.
  4. Enjoy a faster, more resilient cluster!

🛡️ Backup Plan if Private Switch Fails

If your private cluster switch fails (unlikely but possible), you can quickly re-cable with direct Ethernet cables between the nodes.
This maintains cluster quorum and live migration even without a switch!


📈 Direct Cable Topology (No Switch)

Instead of a switch, just cable the nodes directly:

 Node 1 eth1  ───▶ Node 2 eth1
 Node 1 eth2  ───▶ Node 3 eth1

  • One cable from Node 1 (eth1) to Node 2.
  • One cable from Node 1 (eth2) to Node 3 (Node 1 supplies both links, since it is the box with spare ports).

Node 1 acts as the “hub”: bridge its two cluster ports together so Node 2 and Node 3 can still reach each other through it (corosync needs every node to see every other node).
Cluster heartbeat survives even if the private switch dies!

(Technically, this is a simple “star” topology with Node 1 in the middle.)


🛠️ How to Prepare

  • Buy two short Gigabit Ethernet cables now and keep them as dedicated spares.
  • They cost only a few dollars. A special crossover cable is not required: gigabit NICs negotiate MDI/MDI-X automatically, so standard patch cables work for direct node-to-node links (crossover only matters for very old 10/100 ports).
  • Label them and store them with your mini-PC.

If the private switch fails:

  1. Disconnect all eth1 cables from the private switch.
  2. Connect:
    • Node 1 eth1 → Node 2 eth1
    • Node 1 eth2 → Node 3 eth1
  3. On Node 1, bridge eth1 and eth2 (and move its 10.10.10.1 address onto the bridge) so all three nodes can still reach each other; a sketch follows below. No IP changes are needed on Node 2 or Node 3, and the cluster keeps working.
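
A minimal sketch of that bridge on Node 1 (port names are examples; the address is the one Node 1 already uses on the cluster network):

# Node 1 only: joins the two direct-cable ports so Node 2 and Node 3 can reach each other
auto vmbr2
iface vmbr2 inet static
    address 10.10.10.1/24
    bridge-ports eth1 eth2
    bridge-stp off
    bridge-fd 0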

🔥 Bonus Option: Prewire it Now (Dual NICs)

  • Connect Node 1 to Node 2 and Node 1 to Node 3 with direct cables now, in addition to the private switch (this needs one more spare port on each of the two-port nodes).
  • On each node, bond the switch-facing port and the direct-cable port into a Linux bond (kernel bonding/ifenslave, or a Proxmox “Linux Bond”) in active-backup mode; a sketch follows after the summary table below.
  • If the switch fails, the direct cables take over automatically without you even touching anything.

(More complicated, but very cool if you like high availability.)
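
A minimal sketch of the active-backup piece on one of the two-port nodes, assuming it has gained a third port (eth2 here) for the direct cable; Node 1 would need a different arrangement (its two direct-cable ports have to be bridged as in the earlier sketch), so it is left out here:

# eth1 goes to the private switch (preferred path), eth2 is the direct cable to Node 1;
# if the switch link drops, the bond fails over to the direct cable automatically
auto bond1
iface bond1 inet static
    address 10.10.10.2/24
    bond-slaves eth1 eth2
    bond-mode active-backup
    bond-primary eth1
    bond-miimon 100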


Summary

Option                        Pros                               Cons
Keep spare direct cables      Very fast recovery if needed       Manual action needed
Prewire bonding               Instant failover, no manual work   More complex to configure

🛒 Tiny shopping list addition

Item                                 Notes            Est. Price
2x short Ethernet cables (Cat5e/6)   0.5–1.0m ideal   ~$10–$20 total
