HOW TO ADD OR REMOVE NICS FROM OVS BRIDGES ON NUTANIX AHV
Issue:
Whenever a new AHV cluster is deployed, all physical interfaces become part of the default bridge br0. As a best practice, the interface assignment should be changed so that each Open vSwitch bridge carries only the connected uplinks.
Explanation:
Common scenarios for separating the uplinks are as follows:
- Keep the two 10 GbE NICs in the default bridge br0 and create a separate bridge br1 with the two 1 GbE NICs
- Keep two 10 GbE NICs in the default bridge br0 and create a separate bridge br1 with two other 10 GbE NICs
- Add unused NICs to, or remove them from, an Open vSwitch bridge
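The first scenario above can be sketched as the following command sequence, run from any CVM. The bridge name br1 and the speed shortcuts are illustrative; confirm your actual interface names with manage_ovs show_interfaces before changing anything, and see the step-by-step section below for the required maintenance-mode preparation.

```shell
# Sketch: keep the two 10G NICs in the default bridge br0 and move the
# two 1G NICs to a new bridge br1. The speed shortcuts (10g, 1g) select
# all NICs of that speed; explicit names such as eth0,eth1 also work.

# Keep only the 10G uplinks in br0.
manage_ovs --bridge_name br0 --interfaces 10g update_uplinks

# Create the new bridge br1.
manage_ovs --bridge_name br1 create_single_bridge

# Put the 1G uplinks in br1.
manage_ovs --bridge_name br1 --interfaces 1g update_uplinks
```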

Solution:
Understand the default or existing network configuration
- Connect to any CVM via SSH (for example, with PuTTY).
- Run the following commands in the terminal of any CVM.
nutanix@prod-cvm1$ manage_ovs show_uplinks ----- To see the uplinks
Bridge: br0
  Bond: br0-up
    bond_mode: active-backup
    interfaces: eth4 eth5
    lacp: off
    lacp-fallback: true
    lacp_speed: off

nutanix@prod-cvm1$ manage_ovs show_interfaces ----- To see the interfaces
name   mode  link  speed
eth0   1000  True   1000
eth1   1000  True   1000
eth4  10000  True  10000
eth5  10000  True  10000
Step-by-step configuration
It is recommended to put hosts in maintenance mode for any network-related activity, as it could otherwise disturb the system.
Perform the following steps to prepare the host:
- Follow the cluster health verification post to verify cluster health.
- Do not proceed if Data Resiliency is not “OK”.
- Put the node and the CVM into maintenance mode using the acli command utility.
1. Enter this command to see the status of the hosts:
nutanix@prod-cvm1$ acli host.list
2. Check whether the host can enter maintenance mode:
nutanix@prod-cvm1$ acli host.enter_maintenance_mode_check <host ip> --- Enter the host IP address
3. Put the host into maintenance mode – this will trigger live migration of the guest VMs:
nutanix@prod-cvm1$ acli host.enter_maintenance_mode <host ip>
4. Using NCLI enable maintenance mode for the CVM on the target host.
For step-by-step instructions, follow this article: CVMS AND HOSTS IN MAINTENANCE
nutanix@prod-cvm1$ ncli host edit id=<host ID> enable-maintenance-mode=true
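The four preparation steps above can be summarized as one transcript; the host IP and host ID are placeholders to be replaced with the values reported by acli host.list.

```shell
# Sketch of the maintenance-mode entry sequence; <host ip> and <host ID>
# are placeholders from acli host.list.
acli host.list                                            # find the host IP / ID
acli host.enter_maintenance_mode_check <host ip>          # pre-check only
acli host.enter_maintenance_mode <host ip>                # live-migrates guest VMs
ncli host edit id=<host ID> enable-maintenance-mode=true  # CVM maintenance mode
```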
Add new bridge
Run the following command on any CVM to create a new bridge:
nutanix@prod-cvm1$ manage_ovs --bridge_name <bridge name> create_single_bridge
Add or remove NICs on a bridge
Update the bridge to contain all NICs of the same speed:
nutanix@prod-cvm1$ manage_ovs --bridge_name <bridge name> --interfaces <interfaces> update_uplinks
The --interfaces option accepts the speed shortcuts all, 40g, 10g, or 1g, or an explicit comma-separated list of NIC names such as eth0,eth1.
For example, if br0 contains eth0, eth1, eth2, and eth3 and you want to remove eth2 and eth3 from it, run the following command:
nutanix@prod-cvm1$ manage_ovs --bridge_name br0 --interfaces eth0,eth1 update_uplinks
If br1 contains the interfaces eth2 and eth3 and you want to replace them with eth4 and eth5, run the following command:
nutanix@prod-cvm1$ manage_ovs --bridge_name br1 --interfaces eth4,eth5 update_uplinks
Delete existing bridge
Run the following command on any CVM to list the available virtual networks created with Prism or acli:
nutanix@prod-cvm1$ acli net.list
Run the following command on any CVM to list virtual networks and associated bridges:
nutanix@prod-cvm1$ acli net.get <network name> ----- e.g., DMZ or Prod
Example
nutanix@prod-cvm1$ acli net.get DMZ
DMZ {
  identifier: 0
  logical_timestamp: 6
  name: "DMZ"
  type: "kBridged"
  uuid: "32445-4556-453-3b435c-844666741e446e"
  vswitch_name: "br1"   <---- bridge name
}
Run the following command on any CVM to delete the bridge:
nutanix@prod-cvm1$ manage_ovs --bridge_name <bridge> delete_single_bridge
1. Exit the CVM from maintenance mode using NCLI:
nutanix@prod-cvm1$ ncli host edit id=<host ID> enable-maintenance-mode=false
2. Exit the host from maintenance mode – this will restore VM locality:
nutanix@prod-cvm1$ acli host.exit_maintenance_mode <host ip>
3. Enter the following command to check that the host shows schedulable as true:
nutanix@prod-cvm1$ acli host.list
Also verify on Prism that Data Resiliency is showing “OK”.
Hi – when I recently deployed AHV, it only added the two active interfaces to br0. Is there any reason I should change that configuration if I only intend to use two interfaces?
This is the new AHV default configuration. Keep it as it is if you intend to use only two interfaces.