Nexus 1000V quick-reference commands; advanced features and documentation links are not covered... kept here for my own reference...
hostname [hostname] ###set the hostname
svs-domain
no control vlan
no packet vlan
svs mode L3 interface mgmt0 ###set mgmt0 as the Layer 3 transport interface; the two "no" commands above clear the Layer 2 control/packet VLANs, which are unused in L3 mode (original doc note below)
###
In this step, you set the transport mode of the VSM to Layer 3.
When setting up the Layer 3 control mode you have two options:
Layer 3 packet transport through the VSM mgmt0 interface
Layer 3 packet transport through the VSM control0 interface
Set up the Layer 3 packet transport to use the VSM mgmt0 interface.
###
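For reference, the control0 alternative mentioned in the note would look like this (a sketch only; control0 also needs its own IP interface, which is not shown here):
svs-domain
no control vlan
no packet vlan
svs mode L3 interface control0 ###use the dedicated control0 interface instead of mgmt0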
vrf context management
ip name-server 10.4.48.10 ###configure DNS name resolution (original doc note below)
###
Configure the ip name-server command with the IP address of
the DNS server for the network. At the command line of a Cisco NX-OS
device, it is helpful to be able to type a domain name instead of the IP
address.
###
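The benefit shows up at the CLI; a hypothetical example (the host name is assumed, not from these notes):
ping vcenter.example.local vrf management ###resolved through 10.4.48.10 before the ping is sent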
ntp server 10.4.48.17 ###set the NTP server
clock timezone PST 8 0 ###set the time zone; zones west of UTC use a negative offset (-X)
clock summer-time PDT 2 Sunday march 02:00 1 Sunday nov 02:00 60
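For an actual US Pacific deployment the offset would be negative, per the -X note above; a sketch:
clock timezone PST -8 0 ###US Pacific is UTC-8, hence the negative offset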
snmp-server community cisco group network-operator ###community cisco, group network-operator (read-only)
snmp-server community cisco123 group network-admin ###community cisco123, group network-admin (read-write)
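The communities and their group bindings can be verified with a standard NX-OS show command:
show snmp community ###lists each community string and the group it maps to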
svs connection [name] ###connect the VSM to vCenter; the VSM is the 1000V supervisor engine, the VEM is the line card that runs on each ESXi host...
protocol vmware-vim
remote ip address [vCenter Server IP address] port 80
vmware dvs datacenter-name [Datacenter name in vCenter Server]
connect
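Filled in with the values that appear in the show output below (connection name vcenter, vCenter at 10.4.48.11, datacenter 10k), the block would read:
svs connection vcenter
protocol vmware-vim
remote ip address 10.4.48.11 port 80
vmware dvs datacenter-name 10k
connect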
N1kvVSM# show svs connections ###display the connection status
connection vcenter:
ip address: 10.4.48.11
remote port: 80
protocol: vmware-vim https
certificate: default
datacenter name: 10k
admin:
max-ports: 8192
DVS uuid: ca 56 22 50 f1 c5 fc 25-75 6f 4f d6 ad 66 5b 88
config status: Enabled ###key field to check
operational status: Connected ###key field to check
sync status: Complete
version: VMware vCenter Server 5.0.0 build-804277
vc-uuid: E589E160-BD5A-488A-89D7-E8B5BF732C0C
N1kvVSM# system redundancy role primary ###set this VSM as the primary switch
copy running-config startup-config ###save the config; this can be entered from any mode
N1kvVSM# show system redundancy status ###show the dual-VSM (HA) status; in the output below the standby has not been configured yet
Redundancy role
---------------
administrative: primary
operational: primary
Redundancy mode
---------------
administrative: HA
operational: HA
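The matching command on the second VSM VM would be the following (a sketch; these notes never show the standby side being configured):
system redundancy role secondary ###run on the standby VSM to complete the HA pair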
DC-N1kv-VSM# configure terminal
DC-N1kv-VSM(config)# vlan 148 ###create a VLAN
DC-N1kv-VSM(config-vlan)# name Servers_1 ###name the VLAN
DC-N1kv-VSM(config-vlan)# vlan 149-157
DC-N1kv-VSM(config-vlan)# vlan 160
DC-N1kv-VSM(config-vlan)# name 1kv-Control
DC-N1kv-VSM(config-vlan)# vlan 161
DC-N1kv-VSM(config-vlan)# name vMotion
DC-N1kv-VSM(config-vlan)# vlan 162
DC-N1kv-VSM(config-vlan)# name iSCSI
DC-N1kv-VSM(config-vlan)# vlan 163
DC-N1kv-VSM(config-vlan)# name DC-Management
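A quick way to verify the VLANs and names just created (standard NX-OS show command):
DC-N1kv-VSM# show vlan brief ###lists VLAN IDs, names, and port status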
port-profile type ethernet System-Uplink ###create the uplink port-profile, named System-Uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 148-157,160-163
channel-group auto mode on mac-pinning
no shutdown
system vlan 160,162-163
description B-Series Uplink all traffic
state enabled
port-profile type ethernet ESXi-Mgmnt-Uplink ###create the port-profile for management console traffic
vmware port-group
switchport mode access
switchport access vlan 163
channel-group auto mode on mac-pinning
no shutdown
system vlan 163
description C-Series {Uplink for ESXi Management}
state enabled
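Either uplink profile can be verified after it is enabled (standard Nexus 1000V show command):
show port-profile name System-Uplink ###shows profile settings and the interfaces inheriting it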
port-profile type vethernet Servers_Vl148 ###port group that carries VM server traffic, VLAN 148
vmware port-group
switchport mode access
switchport access vlan 148
no shutdown
state enabled
port-profile type vethernet vMotion ###port group for vMotion traffic, VLAN 161
vmware port-group
switchport mode access
switchport access vlan 161
no shutdown
state enabled
port-profile type vethernet iSCSI ###port group for iSCSI traffic, VLAN 162
vmware port-group
switchport mode access
switchport access vlan 162
no shutdown
system vlan 162
state enabled
port-profile type vethernet n1kv-L3 ###for Layer 3 communication between the VSM and the VEM
capability l3control
vmware port-group
switchport mode access
switchport access vlan 163
no shutdown
system vlan 163
state enabled
n1000v(config-port-prof)# show module vem mapping
n1000v# show port-profile virtual usage
ESXi host commands:
~ # vemcmd show card
~ # vemcmd show port
~ # vemcmd show trunk
~ # vemcmd show port vlans
vem reload ###the host must be removed from the 1000V port groups before this can be used
vem restart
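One more host-side check worth knowing (standard VEM CLI on the ESXi host):
~ # vem status -v ###confirms the VEM module is loaded and lists its uplink ports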
Nexus 1000V caveats:
Cisco recommends that you migrate the following from the VMware vSwitch to the Cisco Nexus
1000V:
– uplinks
– virtual switch interfaces
– vmkernel NICs (including the management ports)
– VSM VM
When installing the Cisco Nexus 1000V in a VMware cluster with DRS enabled, all ESX hosts must be migrated to the Cisco Nexus 1000V DVS. If only some hosts are migrated, VMs could be installed on or moved to hosts whose vSwitch is missing VLANs, physical adapters, or both.
No Spanning Tree Protocol
The Nexus 1000V does not run STP, because STP would deactivate all but one uplink to an upstream switch and prevent full utilization of uplink bandwidth. Instead, each VEM is designed to prevent loops in the network topology on its own.
VMware Fault Tolerance is not supported for the VSM VM. It is supported for other VMs connected
to Cisco Nexus 1000V.
Using a VSM VM snapshot is not recommended. VSM VM snapshots do not contain unsaved
configuration changes.
The following must be in place if you migrate the host and adapters:
– The host must have one or more physical NICs on each vSwitch in use.
– The vSwitch must not have any active VMs. To prevent a disruption in connectivity during migration, any VMs that share a vSwitch with port groups used by the VSM must be powered off.
– The host must use a VUM-enabled vCenter server.
– You must also configure the VSM connection to the vCenter server datacenter where the host resides.
Virtual Ethernet Module (VEM)
Over Layer 2: The control interface from the VSM communicates to the VEM through a VLAN designated as the control VLAN. This control VLAN needs to exist through all the network switches along the path between the VSM and the VEM.
Over Layer 3 (recommended): Communication between the VSM and the VEM is done through Layer 3, using the management interface of the VSM and a VMkernel interface of the VEM. Layer 3 connectivity mode is the recommended mode.
Note: <X> is the VLAN number that will be used by the VMkernel interface.
Figure 5 illustrates the use of the same fabric interconnect for Layer 3 VSM-to-VEM communication. In this configuration, both Server 1 and Server 3 use vmnic0 as the primary interface carrying the management interface of the VSM and the VMkernel management interface of the VEM. The VSM-to-VEM traffic needs to flow only as far as the fabric interconnect.
Figure 6 illustrates the use of different fabric interconnects for the VSM-to-VEM communication. Server 1 (vmnic0) has the primary interface carrying the VSM management interface, and Server 3 (vmnic1) has the primary interface carrying the VMkernel interface of the VEM, on a different fabric interconnect. The VSM-to-VEM traffic needs to flow up to the Cisco Nexus 5000 Series Switch.
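Tying this back to the <X> note above, the VMkernel interface of the VEM in Layer 3 mode attaches to a vethernet profile like the n1kv-L3 example earlier; a sketch with a hypothetical profile name and <X> as the VMkernel VLAN:
port-profile type vethernet L3-vmkernel
capability l3control
vmware port-group
switchport mode access
switchport access vlan <X>
no shutdown
system vlan <X>
state enabled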
Port-Profiles
J05-UCSB-N1KV# show running-config port-profile system-uplink
The Ethernet-type port-profile is critical in that it must allow the management interface VLAN used for the communication between the VSM and the VEM. Another critical part of this Ethernet port-profile is defining that management interface VLAN as a "system VLAN."
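Concretely, the output of the show command above must carry the management VLAN both in the trunk allowed list and as a system VLAN; a sketch using VLAN 163 (the DC-Management VLAN from the earlier configuration):
port-profile type ethernet system-uplink
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 163 ###management VLAN, plus any data VLANs
no shutdown
system vlan 163 ###management VLAN defined as a system VLAN
state enabled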
System VLANs