Cisco Nexus 1000v Recommendations

1. We recommend deploying the VSMs as an HA pair, using a DRS anti-affinity rule to keep them on different hosts (or disabling DRS for the VSM virtual machines), and placing the VSMs on different datastores when Storage DRS is in use.

2. We also recommend deploying the Nexus 1000v VSMs outside of the cluster that they are managing. This can be done with the Nexus 1010 appliance.

3. If the Nexus 1000v will be hosted on the same cluster that it is managing, we recommend placing the VSMs on a standard vSwitch.

4. If the Nexus 1000v must be hosted on a cluster that it is managing and there are only two 10GbE ports available on the hosts, the VSMs can reside on top of the VEMs that they are managing. Once the VLANs are created, the VLANs used for VSM connectivity (typically the control, management, and packet VLANs) must be marked as system VLANs in the Nexus 1000v configuration using the following command:

system vlan <VLAN IDs, separated by commas>


For example:
system vlan 100,101,102,103
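
Note that system vlan is not a global command; it is applied within port-profile configuration for the relevant port profiles. As a minimal sketch (the profile name system-uplink matches the example at the end of this section, and the switch prompt is illustrative):

n1000v(config)# port-profile type ethernet system-uplink
n1000v(config-port-prof)# system vlan 100,101,102,103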


Any VMkernel interfaces and the vCenter VLAN should also be marked as system VLANs. This tells the Nexus 1000v (specifically, the VEM) to always forward traffic on those VLANs rather than wait for communication with the VSM.
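
As a minimal sketch of what this looks like (the profile name dvs_VMkernel_vMotion and VLAN 11 are illustrative assumptions, not values from the configuration shown later), a VMkernel port-profile marked with a system VLAN might be:

port-profile type vethernet dvs_VMkernel_vMotion
  vmware port-group
  switchport mode access
  switchport access vlan 11
  no shutdown
  system vlan 11  <== VMkernel VLAN marked as a system VLAN
  state enabled

The same VLAN must also be included in the system vlan list of the ethernet uplink port-profile that carries it, which is the "two places" behavior illustrated in the example below.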

The process for moving the VSMs to a VEM that they are managing is:

1. Start with a standard vSwitch on the host.

2. Deploy the VSM virtual machines on the standard vSwitch.

3. Deploy the Nexus 1000v configuration on the VSMs and install the VEMs on the ESX/ESXi hosts.

4. Move one physical NIC to act as a system uplink on the Nexus 1000v while still keeping at least one physical NIC as an uplink on the standard vSwitch.

5. Once network connectivity on the Nexus 1000v is verified (see the verification sketch after this list), migrate the VSM virtual machines' networking to the Nexus 1000v.

6. Move the remaining physical NICs to act as system uplinks on the Nexus 1000v.
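
For step 5, a simple way to verify connectivity (a sketch; the switch name n1000v is illustrative) is to confirm from the VSM that the host has registered as a VEM module before migrating the VSM virtual machines' networking:

n1000v# show module

On a healthy Nexus 1000v, the VSMs appear as modules 1 and 2 and each ESX/ESXi host running a VEM appears as module 3 or higher; the host should be listed with an ok status before proceeding.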

Example:

Let's say my Management traffic uses VLAN 10, and my VMs use VLAN 20 for their data traffic.

Defining the system VLAN in "two places" (on the ethernet uplink port-profile and on the vEthernet port-profile) allows you to treat only your Management traffic as system traffic while still enforcing programming/security for your VM Data traffic. Following a reboot, Management traffic flows immediately, but VM Data traffic does not flow until the VEM has pulled its programming from the VSM.

port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 10,20,3001-3002
  channel-group auto mode active
  no shutdown
  system vlan 10,3001-3002 <== System VLAN 10 Defined
  state enabled
 
port-profile type vethernet dvs_Management
  vmware port-group
  switchport mode access
  switchport access vlan 10
  no shutdown
  system vlan 10  <== Defined as System VLAN
  state enabled


port-profile type vethernet dvs_VM_Data_VLAN20
   vmware port-group
   switchport mode access
   switchport access vlan 20 <== No System VLAN
   no shutdown
   state enabled