A Nutanix environment should use datacenter switches designed to transmit large volumes of server and storage traffic at low latency. Standard access switches may offer 10 Gbps ports like datacenter switches do, but they are not typically built to carry large amounts of bidirectional storage replication traffic. Campus switches are likewise not designed to handle concurrent storage replication traffic.

Datacenter switches have the following characteristics:

  • Line rate: Ensures that all ports can simultaneously achieve advertised throughput.
  • Low latency: Minimizes port-to-port latency, measured in microseconds or nanoseconds.
  • Large per-port buffers: Handle speed mismatch from uplinks without dropping frames.
  • Nonblocking, with low or no oversubscription: Reduces chance of drops during peak traffic periods.
  • 10 Gbps or faster links for Nutanix CVM traffic: Only use 1 Gbps links either for additional user VM traffic or when 10 Gbps connections are not available, such as in a ROBO deployment. Limit Nutanix clusters using 1 Gbps links to eight nodes.
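The sizing rules above (10 Gbps or faster links for CVM traffic, at most eight nodes on 1 Gbps links, and low oversubscription) can be sketched as a quick planning check. This is an illustrative sketch only; the function names and structure are not part of any Nutanix tooling:

```python
def validate_cluster_network(node_count: int, link_gbps: float) -> list[str]:
    """Check a planned cluster against the link-speed guidance above.

    Returns a list of warnings; an empty list means the plan follows
    the recommendations (10 Gbps+ for CVM traffic, and no more than
    eight nodes when only 1 Gbps links are available).
    """
    warnings = []
    if link_gbps < 10:
        warnings.append("CVM traffic should use 10 Gbps or faster links.")
        if node_count > 8:
            warnings.append("Limit clusters on 1 Gbps links to eight nodes.")
    return warnings


def oversubscription_ratio(edge_ports: int, edge_gbps: float,
                           uplink_ports: int, uplink_gbps: float) -> float:
    """Ratio of total edge bandwidth to total uplink bandwidth.

    A ratio near 1:1 (nonblocking) reduces the chance of drops
    during peak traffic periods.
    """
    return (edge_ports * edge_gbps) / (uplink_ports * uplink_gbps)


# Example: 48 x 10 Gbps edge ports fed by 6 x 40 Gbps uplinks
print(oversubscription_ratio(48, 10, 6, 40))          # 480 / 240 = 2.0 (2:1)
print(validate_cluster_network(node_count=12, link_gbps=1))
```

A 2:1 ratio like the one above is common for access switches; the lower the ratio, the less likely frames are dropped when all edge ports burst toward the uplinks at once.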

The following list is not exhaustive, but it gives some examples of model lines that meet the above requirements for high-performance or large clusters. Comparable models are also suitable choices.

  • Arista 7050X3, 7170, 7280: Larger buffer models
  • Cisco Nexus 9000, 7000, and 5000
  • DellEMC S5200-ON
  • HPE FM3810, FM3132Q
  • Juniper QFX-5100
  • Lenovo NE1032T, NE2580O
  • Mellanox SN2410, SN2100, and SN2010
Figure: Cisco TOR Switches

The following are examples of switches that do not meet the high-performance DC switch requirements but are acceptable for ROBO clusters and clusters with fewer than eight nodes or low performance requirements:

  • Arista 7050 and 7150S: Smaller buffer models
  • Cisco Nexus 3000: Smaller buffer model
  • Cisco Catalyst 9300: Campus access switch
  • Cisco Catalyst 3850: Stackable multigigabit switch
  • HPE FM2072

The following are examples of switches that are never acceptable for any Nutanix deployment:

  • 10 Gbps expansion cards in a 1 Gbps physical access switch: Not recommended. These 10 Gbps expansion cards provide uplink bandwidth for the switch, not server connectivity.

Each Nutanix node also has an out-of-band connection for IPMI, iLO, iDRAC, XCC, or similar management. Because out-of-band connections do not have the latency or throughput requirements of VM networking or storage connections, they can use any access layer switch.

Tip: Nutanix recommends a dedicated out-of-band management network, separate from the primary network, for high availability. A 1 Gbps connection is sufficient for the BMC.
