Channel: VMware Communities : Discussion List - ESXi

Sharing a datastore as an NFS volume


Imagine an ESXi host with local disks used as VMFS datastores.

Is there any way to allow other ESXi hosts to access the local VMFS datastores as NFS shares?

A consultant says it should be possible, but I was unable to locate the related documentation.

Can anybody please help?

Regards

marius
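
For background, a hedged sketch rather than an official answer: ESXi does not ship with an NFS server, so a local VMFS datastore cannot be exported directly; the usual approach is to run an NFS server inside a VM stored on that datastore and export a virtual disk from there. Other hosts would then mount that export in the normal way. The IP address and share path below are placeholder values:

```shell
# Run in the ESXi shell of a consuming host. 192.168.1.10 and
# /export/vmstore are placeholders for the NFS server VM's export.
esxcli storage nfs add --host 192.168.1.10 --share /export/vmstore --volume-name shared-ds

# Confirm the datastore mounted
esxcli storage nfs list
```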


ESXi 6.5 Free License?

ESXi / Dell Precision T7610 HW monitoring


I'm looking for a way to view hardware stats with ESXi 6.7 on a Dell Precision T7610.

 

I want to be able to use Open Manage Tools / Essentials, but I'm having no luck. I installed the VIB from https://vinfrastructure.it/2018/08/dell-emc-omsa-vsphere-6-7-vib/ , but it doesn't appear to enable anything for me.

 

Or do I need to install Open Manage on a guest OS and somehow pass the T7610's hardware through to that VM? It's also possible that the T7610 isn't supported with OMSA at all.

 

Any advice is greatly appreciated.
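
If it helps anyone hitting the same wall, two quick checks (assuming the VIB from the link above was the one installed) are whether the OMSA provider VIB is actually present and whether the CIM service that surfaces hardware data is running:

```shell
# In the ESXi shell: confirm the OMSA/Dell VIB is installed
esxcli software vib list | grep -i -e dell -e omsa

# Check the CIM broker (sfcbd) that exposes hardware sensors
/etc/init.d/sfcbd-watchdog status
```

Note that OMSA primarily targets PowerEdge servers, so a Precision workstation may indeed not be covered at all.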

Question on RAID controller


Hi All,

 

I have Supermicro hardware with a fresh install of ESXi 6.0.0 Update 2.

 

When I check the RAID controller, I'm seeing two different driver versions. Why is it showing two?

 

Also, I could not find the command to display the RAID firmware version. What is the command?

 

 

Any help is much appreciated.

 

Thanks,

Manivel R
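
A hedged sketch of where to look: the driver version (as opposed to the firmware) can be read from the loaded module, while the firmware version usually has to come from the controller vendor's own CLI (e.g. storcli for LSI/Broadcom controllers, installed as a separate VIB):

```shell
# List storage adapters and the driver each one uses
esxcli storage core adapter list

# Show the version of a specific loaded driver module
# (replace lsi_mr3 with the driver name reported above)
vmkload_mod -s lsi_mr3 | grep -i version
```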

ESXi 6 USB controller PCIe passthrough


I have a Windows 10 VM and have passed my USB controller through to it. A mouse and keyboard connected to it work great.

However, when I connect a USB DVD writer to it, it cannot burn any DVDs; and when I connect a USB hard drive, the drive lights up but is not detected properly; the partitions do not show up.

 

Any idea what is wrong?

CentOS 7.4 - random reboot when VM under load from containers


After creating and running 100 containers across 10 CentOS VMs for 8 hours, some of the VMs reboot themselves, leaving the docker service in a stopped state.

This did not happen when the VMs were installed with Ubuntu 16.04.

 

Below is the bare metal info and VM info:

- 1 bare metal with 80 CPU, 512G memory, 1TB disk

- 10 VMs on the bare metal

- each VM has 8 CPU, 32G memory, 100G thin provisioned disk

 

In /var/log/dmesg there's no error or shutdown log.

In /var/log/messages there are error, shutdown, and kernel logs.

 

Oct  6 05:57:41 my-test-vm dockerd: time="2018-10-06T05:57:41-07:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"

Oct  6 05:57:41 my-test-vm dockerd: time="2018-10-06T05:57:41-07:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"

Oct  6 05:57:41 my-test-vm dockerd: time="2018-10-06T05:57:41-07:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"

Oct  6 05:57:41 my-test-vm dockerd: time="2018-10-06T05:57:41-07:00" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"

Oct  6 05:57:41 my-test-vm dockerd: time="2018-10-06T05:57:41-07:00" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1"

I observed several reboots. Output of the last reboot command:

reboot   system boot  3.10.0-693.el7.x Sat Oct  6 18:08 - 19:09  (01:01)

reboot   system boot  3.10.0-693.el7.x Sat Oct  6 02:33 - 19:09  (16:35)

reboot   system boot  3.10.0-693.el7.x Sat Oct  6 02:33 - 19:09  (16:36)

reboot   system boot  3.10.0-693.el7.x Sat Oct  6 01:46 - 02:18  (00:31)

 

I googled around, and most suggested solutions were to check the hardware; but oddly, Ubuntu runs fine on the same hardware.

 

Let me know if you have experienced the same and how you resolved it. Thanks!
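
Since /var/log/messages goes quiet at the moment of the reboot, one way to confirm whether the guest kernel is actually panicking (as opposed to the VM being reset from outside) is to enable kdump inside the CentOS VMs so a vmcore is captured on the next crash. A minimal sketch for CentOS 7:

```shell
# Verify a crash kernel is reserved (crashkernel=... should appear)
cat /proc/cmdline

# Enable and start the kdump service
systemctl enable --now kdump

# After the next crash, look for a vmcore under /var/crash/
ls /var/crash/
```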

MAC error after upgrading from vSphere 5.0 to 5.1


Trying to power on certain VMs (imported from Workstation) after this upgrade gives the following error:

 

An error was received from the ESX host while powering on VM Example VM.
Failed to start the virtual machine.
Module DevicePowerOn power on failed.
Could not set up "macAddress" for ethernet0.
Invalid MAC address specified.
00:0C:29:nn:nn:nn is not an allowed static Ethernet address. It conflicts with VMware reserved MACs.

 

(I've replaced the specific address with NNs.)

 

How do I override this?  It is not possible to change the MAC address of the VM.
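
One workaround that is widely reported for this exact error (verify it against current VMware guidance before relying on it in production) is to edit the VM's .vmx file while the VM is powered off, either letting ESXi regenerate the address or suppressing the reserved-range check:

```
# Option 1: let ESXi generate a new MAC
ethernet0.addressType = "generated"
# (and delete the existing ethernet0.address line)

# Option 2: keep the static MAC and skip the check
ethernet0.checkMACAddress = "false"
```

Note that option 1 changes the MAC address the guest sees, which may matter for license-bound software inside the VM.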

Tech Support Mode


What does it mean that the Tech Support Mode (TSM) timeout is not enabled for host xxxxx? What should I do in my environment? Is it critical?
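
For context, this is a standard vSphere health/compliance warning: ESXi Shell and SSH ("Tech Support Mode") sessions stay available indefinitely unless a timeout is set, which hardening guides flag. It is not critical, and it can be addressed by setting the timeout advanced options, for example:

```shell
# In the ESXi shell: set the shell availability and idle-session
# timeouts to 15 minutes (900 seconds)
esxcli system settings advanced set -o /UserVars/ESXiShellTimeOut -i 900
esxcli system settings advanced set -o /UserVars/ESXiShellInteractiveTimeOut -i 900
```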


ESXi 6.5 stuck at vmklinux_9


I have an ESXi 6.5 server that is not booting; it's stuck at vmklinux_9. Any idea? ALT-F12 does not bring up anything.

 

(screenshot attached: Capture.PNG)

ESXi 6.7 and BCM57810


Hi. I have an ESXi 6.7 host and need to add a BCM57810 card to it, but with the card installed I get a purple diagnostic screen (PSOD) with:

MEM_ALLOC bora/vmkernel/main/memmap.c:3950

cr0=0x8001003f cr2=0x0 cr3=0x100168000 cr4=0x38

 

I updated the card's firmware to the latest I could find: 14.07.06.

I installed version 1.0.69.1 of the qfle3 driver.

Every time I try to boot with the card inserted, it fails. Without the card, the host works without problems. What can I do?

Do we need to be concerned about these logs?


Hi guys,

 

On ESXi 6.7.0 build 9484548, we see the logs below in vmkernel.log and vmkwarning.log, and we are seeing strange datastore latency spikes (over 10P sec) on some VMs.

Is there any relationship between the latency and these logs?

In vmkernel.log:

 

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:27.559Z cpu49:2097308)qfle3f:vmhba71:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:27.559Z cpu49:2097308)qfle3f:vmhba71:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:27.559Z cpu1:2098144)qfle3f:vmhba65:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:27.559Z cpu1:2098144)qfle3f:vmhba65:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:27.559Z cpu33:2123569)qfle3f:vmhba69:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:27.559Z cpu33:2123569)qfle3f:vmhba69:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:52.914Z cpu17:2103237)qfle3f:vmhba67:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:52.914Z cpu17:2103237)qfle3f:vmhba67:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:52.914Z cpu49:2104327)qfle3f:vmhba71:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:52.914Z cpu49:2104327)qfle3f:vmhba71:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:52.914Z cpu1:2100996)qfle3f:vmhba65:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:52.914Z cpu1:2100996)qfle3f:vmhba65:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:52.914Z cpu33:2102936)qfle3f:vmhba69:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:52.914Z cpu33:2102936)qfle3f:vmhba69:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:57.560Z cpu17:2123573)qfle3f:vmhba67:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:57.560Z cpu49:2097308)qfle3f:vmhba71:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:57.560Z cpu17:2123573)qfle3f:vmhba67:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:57.560Z cpu49:2097308)qfle3f:vmhba71:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:57.560Z cpu1:2098145)qfle3f:vmhba65:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:57.560Z cpu1:2098145)qfle3f:vmhba65:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

sizeof(struct fcoe_header) = 14, frame_len =32

2018-10-08T04:32:57.560Z cpu33:2124489)qfle3f:vmhba69:qfle3f_processL2FrameCompletion:704: ELS 0xf

2018-10-08T04:32:57.560Z cpu33:2124489)qfle3f:vmhba69:qfle3f_processL2FrameCompletion:739: sizeof(vmk_EthHdr) = 14, sizeof(vmk_VLANHdr) = 4,

 

 

And in vmkwarning.log:

 

2018-10-06T21:47:06.121Z cpu6:2098395)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d310008315000000000000000026" state in doubt; requested fast path state update...

2018-10-07T06:16:30.708Z cpu18:2098397)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d31000831500000000000000001e" state in doubt; requested fast path state update...

2018-10-08T01:51:37.407Z cpu63:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic0] NetQ could not add RX filter, no filters for queue 0

2018-10-08T01:51:37.407Z cpu63:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic2] NetQ could not add RX filter, no filters for queue 0

2018-10-08T01:51:37.407Z cpu63:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic3] NetQ could not add RX filter, no filters for queue 0

2018-10-08T01:52:02.410Z cpu59:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T01:56:12.413Z cpu73:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:01:37.407Z cpu60:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic0] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:01:37.407Z cpu60:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic2] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:01:37.407Z cpu60:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic3] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:09:22.410Z cpu74:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:10:57.410Z cpu64:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:14:57.410Z cpu49:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:24:27.410Z cpu56:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:25:27.410Z cpu58:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:26:57.346Z cpu17:2098397)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d310008315000000000000000022" state in doubt; requested fast path state update...

2018-10-08T02:26:57.346Z cpu14:2098401)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d310008315000000000000000008" state in doubt; requested fast path state update...

2018-10-08T02:35:57.412Z cpu67:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:39:33.363Z cpu17:2098397)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d310008315000000000000000008" state in doubt; requested fast path state update...

2018-10-08T02:40:23.364Z cpu19:2098401)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d310008315000000000000000008" state in doubt; requested fast path state update...

2018-10-08T02:40:27.410Z cpu55:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:41:36.366Z cpu0:2098401)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d310008315000000000000000008" state in doubt; requested fast path state update...

2018-10-08T02:44:12.410Z cpu50:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:47:02.410Z cpu46:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:52:12.409Z cpu47:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:54:52.410Z cpu65:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:55:02.409Z cpu66:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T02:59:17.411Z cpu44:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:13:42.409Z cpu82:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:15:12.411Z cpu62:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:21:12.410Z cpu47:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:23:27.411Z cpu66:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:26:37.409Z cpu77:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:33:02.409Z cpu46:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:42:37.412Z cpu47:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:43:22.410Z cpu49:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:43:57.410Z cpu78:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:45:27.409Z cpu64:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:47:02.409Z cpu74:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:51:02.409Z cpu55:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:51:40.138Z cpu1:2098395)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "naa.6000d310008315000000000000000011" state in doubt; requested fast path state update...

2018-10-08T03:52:52.409Z cpu67:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:55:12.409Z cpu76:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:57:52.409Z cpu86:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T03:59:22.409Z cpu44:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T04:07:12.409Z cpu46:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

2018-10-08T04:12:47.409Z cpu80:2097460)WARNING: qfle3: qfle3_apply_queue_mac_filter:1985: [vmnic1] NetQ could not add RX filter, no filters for queue 0

How to find the RAM serial number in ESXi 6.0 with PuTTY


Hello,

How can I find the memory module serial numbers on my server? I can't reboot it, as production is running.

ESXi 6.0.

 

Thank you.
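
A hedged pointer: the ESXi shell includes smbiosDump, which prints the host's SMBIOS tables, including a "Memory Device" entry per DIMM with its serial number, and it requires no reboot. Exact field names vary by BIOS, so adjust the grep as needed:

```shell
# In the ESXi shell over SSH (PuTTY): dump SMBIOS memory device info
smbiosDump | grep -A 20 -i 'Memory Device' | grep -i -e serial -e size -e locator
```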

In ESXi 6.5, how do I cancel a file copy task?


In ESXi 6.5, I logged in through the web UI and started copying virtual machine files. The files are very large and the copy is taking a long time. How can I cancel it? The copy progress indicator is grayed out and the copy cannot be cancelled!

Unexplained ESXi CRASH


Hi Folks,

 

We have an ESXi host running a single VM, piloting a VM for a phone system. A few days ago the whole system crashed; both the host and the VM were unavailable and had to be powered off to restore any connectivity. We've gone through all the VMware logs and had a look in the HP iLO, but cannot find any obvious cause.

 

The only log with any indication of something being wrong is syslog.log, but it contains a huge number of entries like the ones below. Are these actual errors being reported?

 

Thanks in advance

 

Alex

 

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   In Failed Array'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Rebuild/Remap in progress'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Rebuild/Remap Aborted (was not completed normally)'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Correctable ECC/Other Correctable Memory Error'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Post Memory Resize'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:14:*:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:14:*:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# System Firmware Progress'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Uncorrectable ECC/Other Uncorrectable Memory Error'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Parity'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Memory Scrub Failed (stuck bit)'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Memory Device Disabled '

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:12:4:1::15'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   System Firmware Error (Post Error)'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   System Firmware Hang'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   System Firmware Progress'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:15:2:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:15:2:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Correctable ECC/Other Correctable Memory Error Logging Limit Reached'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Presence Detected'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Event Logging Disabled'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Correctable Memory Error Logging Disabled'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Event 'Type' Logging Disabled'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Log Area Reset/Cleared'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:12:6:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:16:2:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:12:6:1::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:16:2:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Controller access degraded or unavailable'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   All Event Logging Disabled'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   SEL Full'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   SEL Almost Full'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Management controller off-line'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Watchdog 1'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Management controller unavailable'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Sensor Failure'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  FRU Failure'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Config error'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Battery'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Spare'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Battery low (predicitive failure)'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Battery failed'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:12:8:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Battery presence detected'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:41:2:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:41:2:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Redundancy degraded from non-redundant'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# session audit'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Session activated'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Discrete'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:42:0:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   D0 power state'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:42:0:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Session deactivated'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:0:0::2'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:42:1:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:0:1::2'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:42:1:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   D1 power state'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:1:0::18'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Version Change'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:1:1::18'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:43:*:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   D2 power state'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:43:*:1::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:2:0::18'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:2:1::18'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# FRU state'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   D3 power state'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Not installed'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:3:0::10'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:0:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='12:*:3:1::10'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:0:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Inactive'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:1:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Sensor specific events follow - event reading type: 0x6f == 111'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:1:1::15'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   activation requested'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:2:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Temperature'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:2:1::8'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:1:*:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   activation in progress'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:1:*:1::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:3:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:3:1::8'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Voltage'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   active'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:2:*:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:4:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:2:*:1::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:4:1::2'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   deactivation requested'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Current'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:5:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:3:*:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:5:1::9'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:3:*:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   deactivation in progress'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:6:0::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Fan'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:6:1::9'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:4:*:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:12:8:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Automatically Throttled'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:17:*:0::'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:17:*:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# System Event'

2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:4:*:1::'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Physical Security'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Platform Security'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   lost communication'

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''

2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Processor'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  IERR'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Thermal Trip'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  FRB1/BIST Failure'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  FRB2/Hang in POST Failure'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Undetermined System Hardware Failure'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  FRB3/Processor Startup/Initialization Failure'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Configuration Error'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  SM BIOS 'Uncorrectable CPU-complex Error''
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Processor Presence'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:7:7:0::'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Critical Interrupt'
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:7:7:1::'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Front Panel NMI/Diagnostic Interrupt'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#  Processor Disabled'
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:7:0::'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Critical overtemp'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Bus Timeout'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:44:7:1::13'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Drive Slot (Bay)'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   I/O Channel Check NMI'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Software NMI'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   PCI PERR'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   PCI SERR'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   EISA Fail Safe Timeout'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Bus Correctable Error'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Drive Presence'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Bus Uncorrectable Error'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Fatal NMI (port 61h, bit 7)'
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:13:0:0::'
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:13:0:1::'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Drive Fault'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Predictive Failure'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Bus fatal error'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Hot Spare'
2018-10-01T08:58:11Z localcli: omc-ipmi: Read 754 lines from /etc/sfcb/omc/sensor_health, total entries 251
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='#   Bus degraded'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line='# Button'
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:20:*:0::'
2018-10-01T08:58:11Z localcli: Missing healthState value to report, line='111:20:*:1::'
2018-10-01T08:58:11Z localcli: Missing expected value to check for, line=''
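The "Missing healthState value" complaints above refer to colon-separated entries in /etc/sfcb/omc/sensor_health whose trailing field is empty (compare '111:7:7:0::' with '111:44:7:1::13'). A minimal sketch of that check; the meaning of the leading fields is an assumption for illustration only, and the function name is mine:

```python
# Flag sensor_health-style entries whose trailing healthState field is empty,
# mirroring the "Missing healthState value to report" log lines above.
def missing_health_state(entry: str) -> bool:
    """Return True if the colon-separated entry has an empty last field."""
    return entry.split(":")[-1] == ""

entries = ["111:7:7:0::", "111:44:7:1::13", "111:20:*:0::"]
flagged = [e for e in entries if missing_health_state(e)]
print(flagged)  # the two entries with no trailing health state
```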

iSCSI - ESXi


We have a requirement to connect iSCSI storage to several ESXi servers; the storage should be shared across all of them. My question is: which of the following is the best practice for creating iSCSI targets?

1) Create a single iSCSI target and add all the ESXi servers to it.

2) Create a separate iSCSI target for each ESXi server.


VLAN environment, no network for VMs


I'm trying to replace my existing Linux server/hypervisor/NAS with ESXi 6.5. I have implemented VLANs in my network to segregate IoT traffic, management, and the trusted network. The router is a bare-metal OPNsense machine that I would also like to virtualize. The switch is an L3 D-Link, and the ports for the server and router are configured as trunks (per D-Link, admitting all tagged and untagged traffic); the server connects using two ports in a LAG. This is the configuration for both LAG ports:

[image: port.jpg]

 

So I installed ESXi, added the second onboard NIC for teaming, and changed the VLAN to 4095, which I understand allows all tags. Then in the GUI I added the port groups for my VLANs to the vSwitch and tagged them. So I have this:

[image: vswitch.jpg]

*There is a VLAN 666 because I plan to virtualize the firewall/router.

 

So I'm creating VMs in their respective port groups, but I can't connect them to the network. In testnet there is a Linux virtual machine for testing purposes.

I had set the management network's VLAN to 1, since I wanted it on the management VLAN, and it got an IP on the correct network. On 4095 it gets an IP from the trusted LAN, which tells me that the router's DHCP and the switch are working correctly. (I've put it back to 1.)
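For reference, the VLAN ID on an ESXi port group selects one of three tagging behaviors: 0 means no tagging by the vSwitch (external switch tagging), 1-4094 means the vSwitch tags frames with that ID (virtual switch tagging), and 4095 passes all tags through to the guest (virtual guest tagging). A small sketch of that mapping; the function and label names are mine, not VMware's:

```python
# Map an ESXi port-group VLAN ID to its tagging mode (EST/VST/VGT).
def vlan_mode(vlan_id: int) -> str:
    if vlan_id == 0:
        return "none"    # no tagging by the vSwitch; external switch tags (EST)
    if 1 <= vlan_id <= 4094:
        return "tagged"  # vSwitch tags frames with this VLAN ID (VST)
    if vlan_id == 4095:
        return "guest"   # all tags passed through to the guest OS (VGT)
    raise ValueError("invalid VLAN ID")

print(vlan_mode(4095))  # guest
```

Note that 4095 only makes sense for a VM (such as a virtualized router) that does its own 802.1Q tagging; ordinary guests belong in a 1-4094 port group.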

Why don't the machines get network connectivity?

 

EDIT: I've tried adding another port group with VLAN 0 and attaching another vNIC to the test VM, and it then gets an IP from DHCP. (It gets an IP, but still no network connectivity.)

I've taken a look at the DHCP server and seen that it has plenty of offline and expired leases for the test VM and others. The MAC vendor description shows "VMware, Inc.", so I don't know what may be happening.
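The "VMware, Inc." label simply means the DHCP server recognized the OUI (first three octets) of the virtual NIC's MAC, so the leases are indeed coming from the VMs. A quick sketch of that check against VMware's registered OUI prefixes (the function name is mine):

```python
# Check whether a MAC address uses one of VMware's registered OUI prefixes,
# which is why DHCP lease lists label these clients "VMware, Inc.".
VMWARE_OUIS = {"00:50:56", "00:0c:29", "00:05:69", "00:1c:14"}

def is_vmware_mac(mac: str) -> bool:
    return mac.lower().replace("-", ":")[:8] in VMWARE_OUIS

print(is_vmware_mac("00:0C:29:AB:CD:EF"))  # True
```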

ESXi/ESX host disconnects from vCenter Server after powering up a VM: "An error occurred while communicating with remote host"


I encountered a strange error: when I try to power on a VM (or change a VM's configuration) through vCenter, this error message appears: "An error occurred while communicating with remote host".

The ESXi host disconnects almost immediately (5-10 sec) after clicking the Power On button. All VMs keep working fine; only the ESXi host disconnects from vCenter.

 

 

I tried changing config.vpxd.heartbeat.notRespondingTimeout to 120.

VMware Knowledge Base

But the problem still remains.

 

 

The same happened in vSphere 5.5 and 6.5.

I hit this error every time I power on a VM.

The vpxd.log is empty.

 

 

 

 

Perennial Devices - RDM


Hi Folks,

 

I am trying to understand the term "perennially reserved" devices and am not finding the right information.

It would be great if someone could help me understand what perennial reservation is and how it affects an ESXi host.

 

Regards,

Shiva

Very slow access to datastores on HP MicroServer Gen8. Can't edit System Resource Reservation with vSphere Client.


I performed a clean install of ESXi 6 with provided HP image (ESXi-6.0.0-2494585-HP-600.9.1.39-Mar2015).
Access to and from the datastores over the network through vSphere Client (6.0.0-2502222) is very slow, ~10 Mbps (it was several hundred under ESXi 5.5).
I suspect that ESXi 6 limits the CPU resources used by the host system to 230 MHz.
I can't edit the "System Resource Reservation" parameters (contrary to ESXi 5.5, where I can edit them and switch between Simple and Advanced view).

Failed disk on local datastore


Hello all,

 

I have (had) a local datastore comprising 3 disks. One of the disks has failed and fortunately I only lost 1 VM. The issue is that I now have the following:

 

[root@test-vmw:/dev/disks] vmkfstools -P -h /vmfs/volumes/datastore1

VMFS-5.61 (Raw Major Version: 14) file system spanning 3 partitions.

File system label (if any): datastore1

Mode: public

Capacity 2.7 TB, 1.1 TB available, file block size 1 MB, max supported file size 62.9 TB

Disk Block Size: 512/512/0

UUID: 5710114e-deec2c28-5dda-001018f08ddc

Partitions spanned (on "lvm"):

        t10.ATA_____Hitachi_HUA722010CLA330_______________________JPW9P0N03KWDKD:1

       (device t10.ATA_____ST31000528AS________________________________________6VPG53JQ:1 might be offline)

        t10.ATA_____WDC_WD10EZEX2D22RKKA0_________________________WD2DWMC1S5972516:1

        (One or more partitions spanned by this volume may be offline)

Is Native Snapshot Capable: YES

[root@test-vmw:/dev/disks]
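As a side note, the offline extent can be picked out of the `vmkfstools -P` output mechanically; a rough sketch, where the matching is based purely on the "might be offline" wording shown above:

```python
# Extract devices that vmkfstools -P reports as possibly offline.
# Matching relies only on the "(device ... might be offline)" wording.
import re

def offline_devices(vmkfstools_output: str) -> list:
    return re.findall(r"\(device (\S+) might be offline\)", vmkfstools_output)

sample = '(device t10.ATA_____ST31000528AS________________________________________6VPG53JQ:1 might be offline)'
print(offline_devices(sample))
```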

 

Now, I am trying to add a new disk to this datastore and I am getting an error:

 

2018-10-10T09:47:48.872Z cpu2:2099193 opID=2f3a8b94)LVM: 10679: Error adding space (0) on device t10.ATA_____Hitachi_HDP725050GLA360_______________________GEA534RJ14BEDA:1 to volume 57101140-d0590300-8662-001018f08ddc: VMFS volume missing physical exten$

 

I guess I need to fix the offline issue before I can add any new disks?

 

Can someone guide me through the steps to fix the above issue?

 

Thanks in advance
