Nexus 1000V common commands and documentation links... kept for my own reference...

Basic Nexus 1000V commands (advanced features not covered) and documentation links... kept for my own reference...



hostname [hostname]  ### set the hostname



svs-domain

no control vlan

no packet vlan

svs mode L3 interface mgmt0  ### set mgmt0 as the Layer 3 transport interface; the two commands above remove the Layer 2 control and packet VLANs, which are not used in Layer 3 mode. Original documentation excerpt below.

###

In this step, you set the transport mode of the VSM Layer 3.

When setting up the Layer 3 control mode you have two options:

  Layer 3 packet transport through the VSM mgmt0 interface

  Layer 3 packet transport through the VSM control0 interface

Set up the Layer 3 packet transport to use the VSM mgmt0 interface.

###
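For completeness, the alternative mentioned in the excerpt above (Layer 3 transport through control0 rather than mgmt0) would look like the following sketch; whether a control0 interface exists depends on how the VSM was deployed:

svs-domain

 no control vlan

 no packet vlan

 svs mode L3 interface control0  ### Layer 3 transport over control0 instead of mgmt0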




vrf context management

 ip name-server  10.4.48.10   ### configure DNS resolution; original documentation excerpt below

###

Configure the ip name-server command with the IP address of the DNS server for the network. At the command line of a Cisco NX-OS device, it is helpful to be able to type a domain name instead of the IP address.

###
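A quick way to confirm the DNS setting (a sketch; output format varies by NX-OS release, and [hostname] is a placeholder):

N1kvVSM# show hosts  ### lists the configured name servers and default domain

N1kvVSM# ping [hostname] vrf management  ### resolves the name through the configured DNS server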




ntp server 10.4.48.17  ### set the NTP server

clock timezone  PST -8 0  ### set the time zone; zones west of UTC use a negative offset (-X)

clock summer-time PDT 2 Sunday march 02:00 1 Sunday nov 02:00 60  ### daylight saving time rule; the final field is the offset in minutes
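To verify the NTP and clock settings afterwards (a sketch; output fields vary slightly by release):

N1kvVSM# show clock  ### should reflect the configured time zone and DST rule

N1kvVSM# show ntp peers  ### lists the configured NTP server(s)

N1kvVSM# show ntp peer-status  ### '*' marks the peer the VSM is synchronized to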



snmp-server community  cisco  group network-operator  ### community cisco mapped to the network-operator group (read-only)

snmp-server community  cisco123  group network-admin  ### community cisco123 mapped to the network-admin group (read-write, the same rights as the admin user)




svs connection  [name]     ### connect the VSM to vCenter; the VSM is the supervisor engine of the 1000V, and the VEM is the line card running on each ESXi host

 protocol vmware-vim

 remote ip address  [vCenter Server IP address]  port 80

 vmware dvs datacenter-name  [Datacenter name in vCenter Server]

 connect


N1kvVSM# show svs connections     ### display the vCenter connection status

connection vcenter:

   ip address: 10.4.48.11

   remote port: 80

   protocol: vmware-vim https

   certificate: default

   datacenter name: 10k

   admin:  

   max-ports: 8192

   DVS uuid: ca 56 22 50 f1 c5 fc 25-75 6f 4f d6 ad 66 5b 88

   config status: Enabled                  ### key field to check

   operational status: Connected           ### key field to check

   sync status: Complete

   version: VMware vCenter Server 5.0.0 build-804277

   vc-uuid: E589E160-BD5A-488A-89D7-E8B5BF732C0C

N1kvVSM# system redundancy role primary   ### set this VSM as the primary


copy running-config startup-config   ### save the configuration; this can be entered from any mode


N1kvVSM# show system redundancy status     ### show the redundancy status; the standby VSM has not been configured yet, so only the primary appears below even though the mode is HA


Redundancy role

---------------

     administrative:   primary

        operational:   primary

Redundancy mode

---------------

     administrative:    HA

        operational:    HA





DC-N1kv-VSM# configure terminal

DC-N1kv-VSM(config)# vlan 148                  ### create the VLAN

DC-N1kv-VSM(config-vlan)# name Servers_1       ### name the VLAN

DC-N1kv-VSM(config-vlan)# vlan 149-157

DC-N1kv-VSM(config-vlan)# vlan 160

DC-N1kv-VSM(config-vlan)# name 1kv-Control

DC-N1kv-VSM(config-vlan)# vlan 161

DC-N1kv-VSM(config-vlan)# name vMotion

DC-N1kv-VSM(config-vlan)# vlan 162

DC-N1kv-VSM(config-vlan)# name iSCSI

DC-N1kv-VSM(config-vlan)# vlan 163

DC-N1kv-VSM(config-vlan)# name DC-Management




port-profile type ethernet System-Uplink         ### create the uplink port-profile, named System-Uplink

 vmware port-group

 switchport mode trunk

 switchport trunk allowed vlan 148-157,160-163

 channel-group auto mode on mac-pinning

 no shutdown

 system vlan 160,162-163

 description B-Series Uplink all traffic

 state enabled




port-profile type ethernet  ESXi-Mgmnt-Uplink   ### create the port-profile for management console traffic

 vmware port-group

 switchport mode access

 switchport access vlan 163

 channel-group auto mode on mac-pinning

 no shutdown

 system vlan 163

 description C-Series {Uplink for ESXi Management}

 state enabled



port-profile type vethernet Servers_Vl148     ### port group for VM server traffic, VLAN 148

 vmware port-group

 switchport mode access

 switchport access vlan 148

 no shutdown

 state enabled



port-profile type vethernet vMotion          ### port group for vMotion, VLAN 161

 vmware port-group

 switchport mode access

 switchport access vlan 161

 no shutdown

 state enabled



port-profile type vethernet iSCSI            ### port group for iSCSI, VLAN 162

 vmware port-group

 switchport mode access

 switchport access vlan 162

 no shutdown

 system vlan 162

 state enabled



port-profile type vethernet n1kv-L3        ### used for Layer 3 communication between the VSM and VEM

 capability l3control

 vmware port-group

 switchport mode access

 switchport access vlan 163

 no shutdown

 system vlan 163

 state enabled



n1000v(config-port-prof)#  show module vem mapping  

n1000v#  show port-profile virtual usage



ESXi host commands:

- vemcmd show card

- vemcmd show port

~ # vemcmd show trunk

~ # vemcmd show port vlans

vem reload   ### the host's ports must be removed from the 1000V port groups before this can be used

vem restart
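To check the VEM agent itself before and after a reload or restart, `vem status` from the ESXi shell is useful (a sketch):

~ # vem status -v  ### shows whether the VEM module is loaded and which vmnics/DVS it is attached to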



Nexus 1000V caveats:

Cisco recommends that you migrate the following from the VMware vSwitch to the Cisco Nexus

1000V:

– uplinks

– virtual switch interfaces

– vmkernel NICs (including the management ports)

– VSM VM



When installing the Cisco Nexus 1000V in a VMware cluster with DRS enabled, all ESX hosts must be migrated to the Cisco Nexus 1000V DVS. If only some hosts are migrated, it is possible that VMs could be installed or moved to hosts in which the vSwitch is missing VLANs, physical adapters, or both.




No Spanning Tree Protocol

The Nexus 1000V does not run STP, because STP would deactivate all but one uplink to an upstream switch, preventing full utilization of uplink bandwidth. Instead, each VEM is designed to prevent loops in the network topology.



VMware Fault Tolerance is not supported for the VSM VM. It is supported for other VMs connected

to Cisco Nexus 1000V.




Using a VSM VM snapshot is not recommended. VSM VM snapshots do not contain unsaved

configuration changes.



The following must be in place if you migrate the host and adapters:

– The host must have one or more physical NICs on each vSwitch in use.

– The vSwitch must not have any active VMs. To prevent a disruption in connectivity during migration, any VMs that share a vSwitch with port groups used by the VSM must be powered off.

– The host must use a VUM-enabled vCenter server.

– You must also configure the VSM connection to the vCenter server datacenter where the host resides.





Virtual Ethernet Module (VEM)

The VEMs are the physical ESX/ESXi servers, which become the equivalent of line cards in a modular Ethernet switch. The VEM is capable of locally switching between VM virtual network interface cards (vNICs) within the VEM. The VSM runs the control plane protocols and configures the state of each VEM, but it never takes part in the actual forwarding of packets. For the ESX/ESXi server to become a VEM and be managed by the Cisco Nexus 1000V VSM, it is critical that the VEM is able to communicate with the VSM. There are Layer 2 and Layer 3 methods for setting up this communication.

Over Layer 2: The control interface from the VSM communicates to the VEM through a VLAN designated as the control VLAN. This control VLAN needs to exist through all the network switches along the path between the VSM and the VEM.

Over Layer 3 (recommended): Communication between the VSM and the VEM is done through Layer 3, using the management interface of the VSM and a VMkernel interface of the VEM. Layer 3 connectivity mode is the recommended mode.

The Layer 3 mode encapsulates the control and packet frames through User Datagram Protocol (UDP). This process requires configuration of a VMware vmkernel interface on each VMware ESX host, ideally the service console of the VMware ESX server. Using the ESX/ESXi management interface alleviates the need to consume another vmkernel interface for Layer 3 communication and another IP address. Configure the VMware VMkernel interface and attach it to a port profile with the l3control option.
Nexus1000V(config)# port-profile type vethernet L3vmkernel
Nexus1000V(config-port-profile)# switchport mode access
Nexus1000V(config-port-profile)# switchport access vlan <X>
Nexus1000V(config-port-profile)# vmware port-group
Nexus1000V(config-port-profile)# no shutdown
Nexus1000V(config-port-profile)# capability l3control
Nexus1000V(config-port-profile)# system vlan <X>
Nexus1000V(config-port-profile)# state enabled

Note: <X> is the VLAN number that will be used by the VMkernel interface.

The l3control configuration sets up the VEM to use this interface to send Layer 3 packets, so even if the Cisco Nexus 1000V Series is a Layer 2 switch, it can send IP packets.
Layer 3 (L3) mode is the recommended option, in part because it simplifies troubleshooting of communication problems between the VSM and VEM. If the VMware ESXi (VEM) vmkernel interface cannot ping the management interface of the VSM, the problem reduces to ordinary Layer 3 routing troubleshooting. With Layer 2 (L2) mode, all switches between the VEM and VSM must carry the control VLAN; even after the physical network switches are configured, the server administrator still needs to verify on the VEM that the appropriate VLANs and the MAC addresses of the VSM are seen. This additional process can make troubleshooting VSM-to-VEM communication difficult, so the recommended approach is to enable Layer 3 mode.
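A minimal Layer 3 connectivity check from the ESXi shell along these lines (the address shown is a placeholder for the VSM mgmt0 IP):

~ # vmkping 10.4.63.60  ### sourced from a vmkernel interface; success means the VEM can reach the VSM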

Figure 5 illustrates the use of the same fabric interconnect for Layer 3 VSM-to-VEM communication. In this configuration, both Server 1 and Server 3 use vmnic0 as the primary interface carrying the management interface of the VSM and the VMkernel management interface of the VEM. The VSM-to-VEM communication needs to flow only as far as the fabric interconnect.

Figure 6 illustrates using different fabric interconnects for the VSM-to-VEM communication. Server 1 (vmnic0) has the primary interface carrying the VSM management interface, and Server 3 (vmnic1) has the primary interface carrying the VMkernel interface of the VEM on a different fabric interconnect. The VSM-to-VEM communication needs to flow up to the Cisco Nexus 5000 Series Switch.


Port-Profiles

Port-profiles are network configuration containers that allow the networking team to define network attributes for particular types of VM traffic. Used in this way, a port-profile is a networking concept within the Cisco Nexus 1000V that is mapped to a vCenter port-group, so when a server administrator attaches a port-group to a particular VM, the Cisco Nexus 1000V port-profiles appear in the drop-down list.


J05-UCSB-N1KV# show running-config port-profile system-uplink


port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 1, 51-53, 80, 172
channel-group auto mode on mac-pinning
no shutdown
system vlan 80,172
description "Uplink Profile for standard UCS Blade Servers"
state enabled

The Ethernet-type port-profile is critical in that it must allow the management-interface VLAN used for communication between the VSM and VEM. Another critical configuration for this Ethernet port-profile is to define the management-interface VLAN as a "system VLAN."




System VLANs

System VLANs are VLANs used for critical communication between the VSM and VEMs and for the bring-up of the Cisco Nexus 1000V system. These critical VLANs are the control VLAN (if using Layer 2 mode), the packet VLAN, the management VLAN (if using Layer 3 mode), and the VLANs used by VMware's VMkernel (that is, the NAS and iSCSI storage VMkernel interfaces and the management interface). The VMkernel for vMotion is not included, since it is not required to bring up the Cisco Nexus 1000V system. Once the system is online, the rest of the port-profiles and VLANs are brought up. We recommend that you use these system VLANs for the particular interfaces, as described earlier.
The following is a sample configuration of a port-profile of type vEthernet with a system VLAN:
J05-UCSB-N1KV(config-port-prof)# show running-config port-profile ESXi-Management
!Command: show running-config port-profile ESXi-Management
!Time: Mon Feb 13 23:48:06 2012
version 4.2(1)SV1(5.1)
port-profile type vethernet ESXi-Management
capability l3control
vmware port-group
switchport mode access
switchport access vlan 172
no shutdown
system vlan 172
state enabled

