Symptom:
Both vCenter Server and Microsoft Internet Information Services (IIS) use port 80 as the default port for direct HTTP connections. Because of this conflict, vCenter Server cannot restart after vSphere Authentication Proxy is installed.
Analysis:
If IIS is not present when you install vSphere Authentication Proxy, the installer prompts you to install it. Because IIS uses port 80, which is also the default port for vCenter Server direct HTTP connections, vCenter Server cannot restart after the vSphere Authentication Proxy installation completes. See Required Ports for vCenter Server.
Solution:
To resolve the port 80 conflict between IIS and vCenter Server, take one of the following actions.
If IIS was installed before vCenter Server: change the port used by vCenter Server for direct HTTP connections from 80 to another value.
If vCenter Server was installed before IIS: before restarting vCenter Server, change the binding port of the IIS Default Web Site from 80 to another value (see the appcmd sketch below).
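For the second case, the IIS binding can be changed from an elevated command prompt with appcmd (IIS 7 or later); the lines below are only a sketch that assumes the default site name "Default Web Site" and uses 8080 as an arbitrary replacement port:
:: Move the Default Web Site HTTP binding off port 80
%windir%\system32\inetsrv\appcmd.exe set site /site.name:"Default Web Site" /bindings:http/*:8080:
:: Restart IIS so the new binding takes effect
iisreset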
・ HP ProLiant BL260c G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant BL460C - No vSphere ESXi 5.1 Support
・ HP ProLiant BL460c G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant BL465c G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant BL480C - No vSphere ESXi 5.1 Support
・ HP ProLiant BL495c G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant DL360 G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant DL365 G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant DL380 G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant DL385 G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant DL385 G5p - No vSphere ESXi 5.1 Support
・ HP ProLiant DL585 G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant DL785 G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant ML350 G5 - No vSphere ESXi 5.1 Support
・ HP ProLiant ML370 G5 - No vSphere ESXi 5.1 Support
・ IBM System x3850 M2 - No vSphere ESXi 5.1 Support
・ IBM System x3950 M2 - No vSphere ESXi 5.1 Support
・ IBM System x3950 M2 2 Node - No vSphere ESXi 5.1 Support
・ IBM System x3950 M2 4 Node - No vSphere ESXi 5.1 Support
・ Winchester Systems Inc FlashServer HA-4100 - No vSphere ESXi 5.1 Support
・ Cisco UCS C200 M1 Rack Server - No vSphere ESXi 5.1 Support
・ Cisco UCS C210 M1 Rack Server - No vSphere ESXi 5.1 Support
・ Cisco UCS C250 M1 Rack Server - No vSphere ESXi 5.1 Support
%RUN - the percentage of total time the world has been scheduled and running.
Q: What is the difference between %USED and %RUN?
A: %USED = %RUN + %SYS - %OVRLP, that is, %USED additionally accounts for system time and subtracts overlap time, while %RUN does not.
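As a quick illustration with made-up numbers: if esxtop reports %RUN = 80, %SYS = 10 and %OVRLP = 5 for a world, then %USED = 80 + 10 - 5 = 85, slightly higher than %RUN alone.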
Q: What does it mean when a VM's %RUN value is high?
A: It means the VM is consuming a large amount of CPU time. This does not necessarily mean the VM is short of resources; to determine whether the VM actually lacks CPU, you also need to check %RDY, because %RDY is the metric used to judge CPU contention.
%RDY - the percentage of time the world spent waiting to be scheduled to run. A world accumulates %RDY while it waits for the CPU scheduler to place it on a PCPU; its full name is CPU Ready Time. It is normally well below 100%, simply because physical CPU resources are finite.
Q: As an administrator, how do you know that a shortage of CPU resources is causing contention?
A: %RDY is the key yardstick: in theory, any %RDY indicates that a shortage of CPU resources is causing contention. However, this is not absolute. If the administrator has set a CPU Limit on a VM's vCPUs, the CPU the VM can be scheduled onto is capped at that Limit, so the VM accumulates %RDY even when plenty of PCPU resources are available. To tell the two cases apart, use another counter, %MLMTD, discussed below. Note that %RDY includes %MLMTD. When CPU contention is suspected, look at %RDY - %MLMTD: if the difference is high, say above 20%, you can conclude that a shortage of CPU resources is causing contention; if it is small, say 5%, the physical CPUs are not necessarily short of capacity and there is no real CPU contention.
What is the approximate threshold? Around 20% is a reasonable rule of thumb. If the VM itself is performing acceptably, a somewhat higher %RDY - %MLMTD does not matter much; otherwise you want this value to be lower in order to guarantee the VM's CPU performance.
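A concrete (made-up) reading: if esxtop shows %RDY = 28 and %MLMTD = 3 for a VM, then %RDY - %MLMTD = 25%, well above the ~20% threshold, so the host genuinely lacks CPU; if instead %MLMTD were 25, most of the ready time would come from a configured CPU Limit rather than from contention.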
Q: So what do a world's state times add up to, and how can the non-running portion be kept small?
A: Every world, whether it is currently scheduled, not scheduled, or not in the Ready state, is always in some state. Taking a PCPU's resources as 100%, the breakdown is approximately:
100% = %RUN + %RDY + %CSTP + %WAIT
The formula above shows that a PCPU's time is divided among these components, so to keep the time a world spends not running as small as possible, the counters other than %RUN need to stay low. %RUN and %RDY were covered above; %CSTP and %WAIT will be covered in the next article.
Q: What does it mean when a VM's %RDY value is high?
A: From the description of %RDY above, this usually indicates CPU contention. You still need to check %MLMTD before drawing a final conclusion: if %MLMTD is also high, an administrator has set a CPU Limit on the VM; otherwise a genuine shortage of PCPU resources is causing the contention. The rule of thumb is simple: if %RDY - %MLMTD is greater than 20%, a shortage of CPU resources is causing CPU contention.
%MLMTD - the percentage of time the world was ready to run but was not scheduled because a CPU Limit set on the VM capped its CPU usage; it is itself a component of %RDY.
Q: What does a high %MLMTD value usually mean?
A: It means the VM cannot run as much as it needs to because a CPU Limit has been set. To improve the VM's performance, either remove the Limit or raise the CPU Limit value.
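To examine these counters over time rather than interactively, esxtop can be run in batch mode from the ESXi shell; a sketch, with the sampling interval, iteration count and output path chosen arbitrarily:
# esxtop -b -d 5 -n 12 > /vmfs/volumes/datastore1/esxtop-capture.csv
The resulting CSV contains the %USED, %RUN, %RDY and %MLMTD columns discussed above and can be opened in a spreadsheet for offline analysis.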
Symptom:
When you reboot the vCenter Server machine after installing vCenter Server, the VMware VirtualCenter Management Webservices service does not start.
Analysis:
This problem can occur when vCenter Server and its database are installed on the same machine.
Solution:
Select Settings > Control Panel > Administrative Tools > Services > VMware VirtualCenter Management Webservices and start the service. The machine might take several minutes to start the service.
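The service can also be started and adjusted from an elevated command prompt; a sketch, assuming the service's short name is vctomcat (verify it in the Services console) and Windows Server 2008 or later for delayed start:
:: Start the service by its display name
net start "VMware VirtualCenter Management Webservices"
:: Optionally set delayed automatic start so it comes up after the local database
sc config vctomcat start= delayed-auto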
Symptom:
After you install ESXi on a host in UEFI mode, the host might fail to reboot. The failure is accompanied by an error message similar to: Unexpected network error. No boot device available.
Analysis:
The host system does not recognize the disk on which ESXi is installed as the boot disk.
Solution:
1. While the error message is displayed on screen, press F11 to display the boot options.
2. Select an option similar to Add boot option. The wording of the option may vary depending on your system.
3. On the disk on which ESXi is installed, select the file \EFI\BOOT\BOOTx64.EFI.
4. Change the boot order so that the host boots from the option you added.
Symptom:
1. Virtual machine performance is poor.
2. Disk latency is very high while backups are running.
3. Latency on the virtual disks (VMDKs) is high.
Analysis:
Problems like these are almost always caused by the virtual machines not getting enough I/O operations per second (IOPS), for example fewer than 30 IOPS per VM.
Solution:
Before solving the problem, two questions need to be answered.
Question 1: What is IOPS?
IOPS stands for input/output operations per second. It is a basic and very important measure of a disk (virtual or physical) or a storage array. Different disks and arrays deliver different IOPS, and the available IOPS directly affects system performance. IOPS is currently one of the biggest bottlenecks in VMware virtualized environments. Data-intensive workloads such as databases and streaming media generate a lot of I/O, so before deploying them in a virtualized environment you must size their load, that is, their IOPS, against the real performance of the underlying storage to verify it meets the business requirement.
Question 2: How do you calculate IOPS?
Current storage devices have well-known baseline IOPS figures; typical per-disk values for mainstream interfaces are as follows:
Disk rotational speed (rpm): baseline IOPS
7,200 rpm: 100
10,000 rpm: 150
15,000 rpm: 230
In the server space, the disk array (RAID) is the most common basic building block; practically every server uses one. A server array is made up of multiple disks, and depending on the performance of the RAID controller and on the disks' rotational speed and interface, the IOPS of the individual disks roughly add up. Taking 7,200 rpm disks as an example: ten 7,200 rpm disks in a RAID 0 array should, in theory, deliver at least 100 x 10 = 1000 IOPS.
How do you calculate the IOPS available to each virtual machine?
To work out each VM's IOPS, first determine the disk type and its per-disk IOPS. Every disk in the RAID array adds to the total usable IOPS of the datastore, and the IOPS available to a single VM on that datastore can be roughly estimated by dividing the datastore's total IOPS by the number of VMs on it.
A real-world example:
Suppose there are six 10,000 rpm disks; together they provide roughly 150 x 6 = 900 usable IOPS. If 50 virtual machines run on that LUN, each VM gets about 900 / 50 = 18 IOPS, which by the standard above means relatively poor VM performance. To meet a baseline of 30 IOPS per VM, the calculation becomes 900 / 30 = 30; that is, the number of VMs on the same volume should be kept to 30 or fewer.
Note: backup jobs consume additional IOPS and place extra load on the volume. If that is the case, the extra resource consumption during backups needs to be addressed separately.
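The estimate above is easy to script for quick what-if checks; a minimal sketch using plain shell arithmetic, with the disk count, per-disk IOPS and VM count filled in from the example:
# Rough per-VM IOPS = (number of disks x IOPS per disk) / number of VMs
DISKS=6; IOPS_PER_DISK=150; VMS=50
echo $(( DISKS * IOPS_PER_DISK / VMS ))    # prints 18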
Download a packaged driver in VIB format
What we need is the driver for the Realtek 8139 NIC, so locate the package net-8139-1.0.0.x86_64.vib (an official build is preferable if one exists). Copy the driver package to the vSphere host: if SSH is enabled you can use SFTP, or upload it to a datastore from the vSphere Client, or use a USB drive or an optical drive. With USB, the mounted volume shows up under /vmfs/volumes; using an optical drive is reportedly more cumbersome and requires the following steps:
# vmkload_mod iso9660
# /sbin/vsish -e set /vmkModules/iso9660/mount mpx.vmhba32:C0:T0:L0
# ls /vmfs/volumes/CDROM
Enter the vSphere (ESXi) shell
There are two ways: over SSH, or at the vSphere host's console by pressing Alt+F1 and logging in with the root password.
Check whether the device is recognized
# lspci
........
00:03:00.0 Network controller: Realtek Realtek 8168 Gigabit Ethernet [vmnic0]
00:04:01.0 Network controller: Realtek RTL-8139/8139C/8139C+
Run the following commands to enter maintenance mode and allow installation of third-party packages
# esxcli system maintenanceMode set -e true -t 0
# esxcli software acceptance set --level=CommunitySupported
Install the VIB driver package
# esxcli software vib install -v /vmfs/volumes/datastore1/net-8139-1.0.0.x86_64.vib
Adjust the file path to match your environment.
Exit maintenance mode
# esxcli system maintenanceMode set -e false -t 0
Reboot the host (reboot).
Check whether the driver is in effect
# esxcfg-nics -l
Name    PCI            Driver   Link  Speed     Duplex  MAC Address        MTU   Description
vmnic0  0000:03:00.00  r8168    Up    1000Mbps  Full    10:78:d2:XX:XX:XX  1500  Realtek Realtek 8168 Gigabit Ethernet
vmnic1  0000:04:01.00  8139too  Up    100Mbps   Full    00:e0:4c:XX:XX:XX  1500  Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+
# lspci
.......
00:03:00.0 Network controller: Realtek Realtek 8168 Gigabit Ethernet [vmnic0]
00:04:01.0 Network controller: Realtek RTL-8139/8139C/8139C+ [vmnic1]
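To confirm that the package itself is registered, the installed VIB list can also be checked (assuming the package name used above):
# esxcli software vib list | grep 8139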
Symptom:
The ESXi host hangs: it still responds to ping, but you can no longer log in with the client, none of the virtual machines on the host are usable, and no cluster failover takes place. After logging in at the ESXi host's console, the host cannot be shut down normally and has to be powered off forcibly.
The VMkernel logs contain messages like the following:
When using Interrupt Remapping on some servers, you may experience these symptoms on ESXi 5.x and ESXi/ESX 4.1 hosts:
・ ESXi hosts are non-responsive
・ Virtual machines are non-responsive
・ HBAs stop responding
・ Other PCI devices stop responding
・ You may receive Degraded path for an Unknown Device alerts in vCenter Server
・ You may see an illegal vector error in the VMkernel or messages logs shortly before an HBA stops responding to the driver. The error is similar to:
vmkernel: 6:01:34:46.970 cpu0:4120)ALERT: APIC: 1823: APICID 0x00000000 - ESR = 0x40
・ For systems with QLogic HBA cards, the VMkernel or messages logs show that a card has stopped responding to commands:
vmkernel: 6:01:42:36.189 cpu15:4274)<6>qla2xxx 0000:1a:00.0: qla2x00_abort_isp: **** FAILED ****
vmkernel: 6:01:47:36.383 cpu14:4274)<4>qla2xxx 0000:1a:00.0: Failed mailbox send register test
・ The VMkernel or messages logs show the QLogic HBA card is offline:
vmkernel: 6:01:47:36.383 cpu14:4274)<4>qla2xxx 0000:1a:00.0: ISP error recovery failed - board disabled
・ For systems with Emulex HBA cards, the VMkernel or messages logs show a card has stopped responding to commands:
vmkernel: 6:22:52:00.983 cpu0:4684)<3>lpfc820 0000:15:00.0: 0:(0):2530 Mailbox command x23 cannot issue Data: xd00 x2
vmkernel: 6:22:52:32.408 cpu0:4684)<3>lpfc820 0000:15:00.0: 0:0310 Mailbox command x5 timeout Data: x0 x700 x0x4100a2811820
vmkernel: 6:22:52:32.408 cpu0:4684)<3>lpfc820 0000:15:00.0: 0:0345 Resetting board due to mailbox timeout
vmkernel: 6:22:53:02.416 cpu2:4684)<3>lpfc820 0000:15:00.0: 0:2813 Mgmt IO is Blocked d00 - mbox cmd 5 still active
vmkernel: 6:22:53:02.416 cpu2:4684)<3>lpfc820 0000:15:00.0: 0:(0):2530 Mailbox command x23 cannot issue Data: xd00 x2
vmkernel: 6:22:53:33.833 cpu0:4684)<3>lpfc820 0000:15:00.0: 0:0310 Mailbox command x5 timeout Data: x0 x700
・ For systems with LSI1064E (LSI1064, LSI1064E) or LSI1068E series SCSI controllers, if the ESXi host is connected to internal disks, the /var/log/vmkernel.log file shows errors similar to:
ScsiDeviceIO: 2316: Cmd(0x41240074e3c0) 0x1a, CmdSN 0x12ee to dev "mpx.vmhba0:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.
ScsiDeviceIO: 2316: Cmd(0x41240074e3c0) 0x4d, CmdSN 0x12f1 to dev "mpx.vmhba1:C0:T8:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x35 0x1.
・ For systems with Megaraid 8480 SAS SCSI controllers, the VMkernel or messages logs show the controller has stopped responding to commands:
vmkernel: 12:14:17:35.206 cpu15:4247)megasas: ABORT sn 94489613 cmd=0x2a retries=0 tmo=0
vmkernel: 12:14:17:35.206 cpu15:4247)<5>0 :: megasas: RESET sn 94489613 cmd=2a retries=0
vmkernel: 12:14:17:35.206 cpu4:4435)WARNING: LinScsi: SCSILinuxQueueCommand: queuecommand failed with status = 0x1055 Host Busy vmhba0:2:0:0 (driver name: LSI Logic SAS based Mega RAID driver)
・ As the messages log file rolls over quickly on an ESXi host, press Alt + F11 on the ESXi physical console. This error message appears in red:
ALERT: APIC: 1823: APICID 0x00000000 - ESR = 0x40
Note: This message is cleared after a reboot.
Analysis:
ESXi/ESX 4.1 and later introduced interrupt remapping code that is enabled by default. This code is incompatible with some servers. The technology was introduced by vendors for more efficient IRQ routing and should improve performance.
Note: If this issue occurs in the PCI device from which the ESXi/ESX host boots (either locally or using SCSI/RAID), or when the host boots from SAN using an iSCSI/FC HBA, the APIC errors are not logged. To troubleshoot the issue in this case, enable and configure remote syslog logging. For more information, see Configuring syslog on ESXi 5.0 (2003322). Alternatively, you can test this by disabling IRQ remapping.
Solution:
Note: This issue only applies if you see this specific alert in the vmkernel/messages log files:
ALERT: APIC: 1823: APICID 0x00000000 - ESR = 0x40.
If you do not see this message, you are not experiencing this issue.
Several server vendors have released fixes in the form of Server BIOS updates. Contact your server vendor to see if they have a fix available. For IBM models, including but not limited to the IBM BladeCenter HS22 series and System x3400/x3500 and x3600 series systems, see the IBM Knowledge Base article MIGR-5086606 for a firmware update and additional information.
If a firmware fix is not available, work around this issue by disabling interrupt mapping on your ESXi/ESX 4.1 or ESXi 5.0 host and reboot the host to apply the settings.
Note: Disabling Interrupt Remapping also disables the VMDirectPath I/O Pass-through feature.
ESXi/ESX 4.1
To disable interrupt remapping on ESXi/ESX 4.1, perform one of these options:
・ Run this command from a console or SSH session to disable interrupt mapping:
# esxcfg-advcfg -k TRUE iovDisableIR
・ To back up the current configuration, run this command twice:
# auto-backup.sh
Note: It must be run twice to save the change.
Reboot the ESXi/ESX host:
# reboot
To check if interrupt mapping is set after the reboot, run the command:
# esxcfg-advcfg -j iovDisableIR
iovDisableIR=TRUE
・ In the vSphere Client:
Click Configuration > (Software) Advanced Settings > VMkernel. Click VMkernel.Boot.iovDisableIR, then click OK.
Reboot the ESXi/ESX host.
ESXi 5.x
ESXi 5.x does not provide this parameter as a GUI client configurable option. It can only be changed using the esxcli command or via the PowerCLI.
・ To set the interrupt mapping using the esxcli command:
List the current setting by running the command:
# esxcli system settings kernel list -o iovDisableIR
The output is similar to:
Name          Type  Description                             Configured  Runtime  Default
iovDisableIR  Bool  Disable Interrupt Routing in the IOMMU  FALSE       FALSE    FALSE
Disable interrupt mapping on the host using this command:
# esxcli system settings kernel set --setting=iovDisableIR -v TRUE
Reboot the host after running the command.
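After the reboot, the same list command should report the setting as configured; the expected output, following the format shown above, looks roughly like this:
# esxcli system settings kernel list -o iovDisableIR
Name          Type  Description                             Configured  Runtime  Default
iovDisableIR  Bool  Disable Interrupt Routing in the IOMMU  TRUE        TRUE     FALSE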
For details, see: http://kb.vmware.com/kb/1030265
For details, see KB 1008524:
VMware ACE (v2.x): Collecting diagnostic information for VMware ACE 2.x (1000588)
VMware Capacity Planner: Collecting diagnostic information for VMware Capacity Planner (1008424)
VMware Consolidated Backup (v1.5): Collecting diagnostic information for VMware Virtual Consolidated Backup 1.5 (1006784)
VMware Converter: Collecting diagnostic information for VMware Converter (1010633)
VMware Data Protection: Collecting diagnostic information for VMware Data Protection (2033910)
VMware Data Recovery: Collecting diagnostic information for VMware Data Recovery (1012282)
VMware ESXi/ESX: Collecting diagnostic information for VMware ESXi/ESX using the vSphere Client (653)
VMware Fusion: Collecting diagnostic information for VMware Fusion (1003894)
VMware Horizon Mirage: Collecting diagnostic information for VMware Horizon Mirage (2045794)
VMware Horizon Workspace: Collecting logs from VMware Horizon Workspace vApp (2053549)
VMware Infrastructure SDK: Collecting diagnostic information for VMware Infrastructure SDK (1001457)
VMware Lab Manager v2.x: Collecting diagnostic information for VMware Lab Manager 2.x (4637378)
VMware Lab Manager v3.x: Collecting diagnostic information for VMware Lab Manager 3.x (1006777)
VMware Lab Manager v4.x: Collecting diagnostic information for Lab Manager 4.x (1012324)
VMware Server: Collecting diagnostic information for VMware Server (1008254)
VMware Service Manager: Collecting diagnostic information for VMware Service Manager (2012820)
VMware Stage Manager (v1.0.x): Collecting diagnostic information for VMware Stage Manager 1.0.x (1005865)
VMware Storage Appliance: Collecting diagnostic information for a vSphere Storage Appliance cluster (2003549)
VMware ThinApp (v4.0x): Collecting diagnostic information for VMware ThinApp (1006152)
VMware Tools: Collecting diagnostic information for VMware Tools (1010744)
VMware Update Manager: Collecting diagnostic information for VMware Update Manager (1003693)
VMware vCenter AppSpeed: Collecting diagnostic information for VMware vCenter AppSpeed (1012876)
VMware vCenter CapacityIQ: Collecting diagnostic information for VMware vCenter CapacityIQ (1022927)
VMware vCenter Chargeback: Collecting diagnostic information for vCenter Chargeback (1020274)
VMware vCenter Configuration Manager (v4.x, 5.x): Collecting diagnostic information for VMware vCenter Configuration Manager (2001258)
VMware vCenter Infrastructure Navigator: Collecting diagnostic information for VMware vCenter Infrastructure Navigator (2040467)
VMware vCenter Orchestrator (v4.0): Collecting diagnostic data for VMware Orchestrator APIs (1010959)
VMware vCenter Operations vApp (v5.0): Collecting diagnostic information for VMware vCenter Operations 5.0 vApp (2013647)
VMware vCenter Operations Enterprise: Collecting diagnostic information for VMware vCenter Operations Enterprise (2006599)
VMware vCenter Operations Standard: Collecting diagnostic information for VMware vCenter Operations Standard (1036655)
VMware vCenter Server (v4.x, 5.x): Collecting diagnostic information for VMware vCenter Server (1011641)
VMware vCenter Server Heartbeat: Retrieving the VMware vCenter Server Heartbeat Logs and other useful information for support purposes (1008124)
VMware vCenter Site Recovery Manager, vSphere Replication: Collecting diagnostic information for Site Recovery Manager (1009253)
VMware vCloud Automation Center: Retrieving logs from VMware vCloud Automation Center (2036956)
VMware vCloud Connector (v1.0, 2.5): Collecting diagnostic information for vCloud Connector 1.0.x [v. 1.0 only] (1036378)
VMware vCloud Director: Collecting diagnostic information for VMware vCloud Director (1026312)
VMware vCloud Usage Meter: Collecting diagnostic information for VMware vCloud Usage Meter (2040496)
VMware View Manager: Collecting diagnostic information for VMware View (1017939)
VMware VirtualCenter (v2.0, 2.5): Collecting diagnostic information for VMware VirtualCenter 2.x (1003688)
VMware Virtual Desktop Manager (v2.x): Collecting diagnostic information for Virtual Desktop Manager (VDM) (1003901)
VMware Virtual Infrastructure Client (v3.x): %UserProfile%\Local Settings\Application Data\VMware\vpx\viclient*.log
VMware vFabric Application Director Appliance: Collect Logs from the vFabric Application Director Appliance 5.x (2057138)
VMware vFabric Data Director (v2.7): Collecting diagnostic Information for VMware vFabric Data Director 2.7 (2057153)
VMware vShield Manager (v4.x): Collecting diagnostic information for VMware vShield Manager (1029717)
VMware vShield: Obtaining logs and information needed for VMware vShield products (2012760)
VMware vSphere 5.0 Client: %userprofile%\AppData\Local\VMware\vpx\viclient*.log
VMware vSphere Data Protection: Collecting diagnostic information for VMware Data Protection (2033910)
VMware Workstation (v6.0, Windows and Linux): See the Preface of the Workstation 6 User Manual
VMware Workstation (v7.0, Windows and Linux): See the Running the Support Script section of the Workstation 7 User Manual
VMware Workstation (v8.0, Windows and Linux): File menu Help > Support > Collect Support Data
VMware Workstation (v9.0, Windows and Linux): File menu Help > Support > Collect Support Data
VMware Workstation (v9.0, Windows and Linux) WSX: WSX server logs:
Symptom:
The E1000 virtual NIC disappears from a Windows Server 2003 virtual machine for no apparent reason.
Analysis:
On Windows Server 2003, when an E1000 virtual NIC carries heavy traffic, the virtual NIC may be lost or the ESXi host may crash with a purple screen. This can occur on releases prior to ESXi 5.0 Update 3.
Solution:
Change the virtual NIC type to VMXNET 2 (Enhanced) or VMXNET 3. If the E1000 NIC must be kept, you can disable RSS (Receive Side Scaling) instead. In the guest, open the Registry Editor (regedit), navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters, locate the EnableRSS value and set it to 0. If the DWORD value does not exist, create it manually.
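The same registry change can be made from an elevated command prompt inside the guest; a sketch (a reboot of the guest is normally needed for the change to take effect):
:: Disable RSS for the TCP/IP stack; creates EnableRSS if it does not exist
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v EnableRSS /t REG_DWORD /d 0 /f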