Channel: VMware Communities : Discussion List - ESXi

VMDK / API Snapshot woes


Where do I begin.

 

I feel like I am always a newbie with VMware, despite working with it for a few years. We are running a vSAN environment on 6.5 and performing backups with Veeam. About two weeks ago, some of the VMs in the backup job started throwing errors. Due to events outside of my control, I only started looking at this today. Veeam support said the error was because the VMX file was corrupt; their recommended solution was to shut down the machine, remove it from inventory, create a new machine using the existing disks, and bring it back up. We performed this on a non-critical machine, and it worked great. Did it on a semi-critical machine, and it worked great again. Did it on our Exchange server and... it wasn't great.

 

The server came back up, but after a few hours of operation a large number of people reported missing about two weeks of email. We had the machine up for about five hours poking around at logs before I shut it down to focus on the VMware side of things. After a ton of digging on the guest as well as in the host environment, I figured out the root cause: despite there being no snapshots in Snapshot Manager, the system had been running off a snapshot left behind by the failed backup. When rebuilding the VM I made the mistake of attaching the original VMDK files rather than the -000001.vmdk snapshot files. My own mistake of making assumptions, thinking those files were somehow orphaned since Snapshot Manager listed no snapshots. The previous, successful machines either didn't have a snapshot file, or historical data didn't matter on that guest.

 

After talking with VMware support, they basically said that since the original VMDKs were booted, the damage is done and I should consider the data lost. They did say I could try removing the drives from the guest and re-adding the snapshot versions, but they had little faith it would work and warned of a high chance of corrupting both the base VMDKs and the snapshot VMDKs. Since the last shutdown I've kept the server powered off and have been looking for any option to get this machine back up with its current data, and I've run into a brick wall every time. Being cautious because of the corruption warnings, I've copied all files except the snapshot files from the original datastore location to a different location to reduce the risk of further corruption. The snapshot files, however, simply will not budge. Web client copy, SSH copy, vmkfstools -i: nothing will get those files anywhere else at their original size (though I can download what looks to be the header with WinSCP).
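For reference, the vmkfstools attempts above boil down to something like the sketch below; the datastore path and target name are placeholders for my real locations, and the clone only reads the source chain:

# Read-only check that the snapshot chain itself is still consistent
vmkfstools -e "/vmfs/volumes/vsanDatastore/CAKEXK01/CAKEXK01_3-000001.vmdk"

# Cloning from the *delta* descriptor walks the whole chain and writes a
# consolidated, standalone copy, leaving the original files untouched
vmkfstools -i "/vmfs/volumes/vsanDatastore/CAKEXK01/CAKEXK01_3-000001.vmdk" \
              "/vmfs/volumes/otherDatastore/recovery/CAKEXK01_3-recovered.vmdk"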

 

I'm desperately trying to safeguard the snapshot data before doing something that might corrupt the whole guest, and to get this thing back into an up-to-date, running condition. Since this is an Exchange server, the files are quite large; just copying out the files took three hours. I'm now attempting a clone, as I've read that a clone may merge the snapshot files automatically, with the hope that it won't impact the original files. If the clone doesn't work, the last straw would be booting off the snapshots, knowing I may lose everything. Finally I've landed here, having seen some users get results from the truly amazing experts on this forum. The final kick in the rear is that management is getting ready to accept the data loss just to get the server back on and email flowing, so their patience is thin. Casting a bottle into the sea here, hoping it comes back with some much-needed help in time. Attaching relevant info that I've seen requested in other posts:

 

Directory ls -lh of original files:

 

-rw-r--r--    1 root     root          92 Oct 24  2018 CAKEXK01-8d4db6ef.hlog

-rw-------    1 root     root       32.6K Nov 15 08:02 CAKEXK01-Snapshot557.vmsn

-rw-r--r--    1 root     root          13 May  8  2019 CAKEXK01-aux.xml

-rw-------    1 root     root        8.5K Nov 14 08:12 CAKEXK01.nvram

-rw-------    1 root     root          45 Nov 14 08:12 CAKEXK01.vmsd

-rwx------    1 root     root        4.6K Dec  6 21:22 CAKEXK01.vmx

-rw-------    1 root     root        3.3K May 17  2018 CAKEXK01.vmxf

-rw-------    1 root     root        5.0M Dec  6 21:22 CAKEXK01_3-000001-ctk.vmdk

-rw-------    1 root     root         408 Nov 15 08:02 CAKEXK01_3-000001.vmdk

-rw-------    1 root     root         600 Dec  7 04:12 CAKEXK01_3.vmdk

-rw-------    1 root     root        5.9M Dec  6 21:22 CAKEXK01_4-000001-ctk.vmdk

-rw-------    1 root     root         409 Nov 15 08:02 CAKEXK01_4-000001.vmdk

-rw-------    1 root     root         576 Dec  7 04:12 CAKEXK01_4.vmdk

-rw-------    1 root     root        2.0M Dec  6 21:22 CAKEXK01_5-000001-ctk.vmdk

-rw-------    1 root     root         407 Nov 15 08:09 CAKEXK01_5-000001.vmdk

-rw-------    1 root     root         598 Dec  7 04:12 CAKEXK01_5.vmdk

drwxr-xr-x    1 root     root         280 Dec  7 06:38 bak

-rw-------    1 root     root      299.5K May 17  2018 vmware-3.log

-rw-------    1 root     root       15.2M Sep 21  2018 vmware-4.log

-rw-------    1 root     root        3.0M Oct 18  2018 vmware-5.log

-rw-------    1 root     root      393.2K Oct 22  2018 vmware-6.log

-rw-------    1 root     root      467.3K Oct 24  2018 vmware-7.log

-rw-------    1 root     root      244.0K Oct 24  2018 vmware-8.log

-rw-------    1 root     root       45.4M Dec  6 21:22 vmware.log

 

Directory ls -lh of newly created machine that is pointing to the above vmdk's:

 

-rw-r--r--    1 root     root         295 Dec  6 21:35 CAKEXK01-35be335f.hlog

-rw-------    1 root     root        8.5K Dec  7 05:25 CAKEXK01.nvram

-rw-r--r--    1 root     root           0 Dec  6 21:35 CAKEXK01.vmsd

-rwxr-xr-x    1 root     root        3.8K Dec  7 05:25 CAKEXK01.vmx

-rw-------    1 root     root        3.1K Dec  6 21:45 CAKEXK01.vmxf

-rw-r--r--    1 root     root        1.0M Dec  7 03:08 vmware-1.log

-rw-r--r--    1 root     root      322.3K Dec  7 05:25 vmware.log
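For anyone comparing notes, a safe way to confirm which disks a VMX references and whether each delta still names its parent (the descriptors are small text files, so reading them changes nothing; paths are placeholders):

# Which VMDKs does the new VMX actually point at?
grep -i "fileName" /vmfs/volumes/vsanDatastore/CAKEXK01/CAKEXK01.vmx

# Each -000001.vmdk descriptor records its parent; parentCID here must match
# the CID inside the corresponding base descriptor for the chain to be valid
grep -E "CID|parentFileNameHint" /vmfs/volumes/vsanDatastore/CAKEXK01/CAKEXK01_3-000001.vmdk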


[msg.hbacommon.corruptredo] The redo log of 000001.vmdk is corrupted. If the problem persists, discard the redo log.


I get the following error message when I try to start an old guest machine on an ESXi 5.1 host:

[msg.hbacommon.corruptredo] The redo log of xxx-000001.vmdk is corrupted. If the problem persists, discard the redo log

Below is the file list. I tried to consolidate and remove all snapshots, but it failed and the snapshots disappeared/were dimmed in the vSphere client, so I restored the VM to its previous status before my consolidation attempt.

What should I do now, knowing that removing the snapshots (Delete All) and consolidation did not solve the problem before?
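Before discarding the redo log, it may be worth a read-only check of the chain from the host shell; a sketch, with xxx and the datastore path as placeholders:

# Consistency check of the snapshot chain (read-only)
vmkfstools -e "/vmfs/volumes/datastore1/xxx/xxx-000001.vmdk"

# The parentCID in the delta descriptor must match the CID in the base descriptor
grep -E "CID|parentFileNameHint" "/vmfs/volumes/datastore1/xxx/xxx-000001.vmdk"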

ESXi installations with one key


Hello.

I've been using ESXi 6.5.0 for about a year and a half, and recently I installed the same license on a new server running ESXi 6.7.0, which is working perfectly.

When I check the licensing information on either of these servers I get:

 

"Expiration date: never

Features: Up to 8-way virtual SMP"

 

Two questions:

Can I still use the same license if I get a new server? We are a very small university department on a /dev/null budget, so buying a license is out of the question.

Does "up to 8-way virtual SMP" mean that each ESXi installation can give a VM up to 8 virtual CPUs, or, say, if I have 2 installations, is some data sent to VMware so that each server only allows 4 virtual CPUs?

Thanks and regards.

Dave

My ESXi servers try to connect via ONC-RPC to an old file server (already removed)


Hi

On my firewall I can see ONC-RPC sessions ending with a timeout.

I can't find anywhere that the old server is still in use.

Any ideas?
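For reference, a few spots where a removed file server can still be referenced on a host; a sketch from the ESXi shell (the hostname below is a placeholder):

# Any NFS datastores still pointing at the old server?
esxcli storage nfs list

# Remote syslog target still set to it?
esxcli system syslog config get

# Any other mentions in the host configuration files?
grep -ri "old-fileserver" /etc/vmware/ 2>/dev/null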

Monitor hardware: system sensors missing on ESXi 6.7


Hello,

 

We have 3 Dell R440 servers running ESXi 6.7; two of them display normal system sensor values. One, though, only shows CPU temperature, and the value displayed is -128 C. I have tried many things I found on Google, restarting services and deleting logs, but still no luck. Does anyone have any idea what might cause this?

 

server1.PNG

server2.PNG
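For reference, "restarting services" in this context usually means the CIM/WBEM providers that feed the sensor view; a sketch, assuming ESXi 6.7 defaults:

# Restart the CIM service that supplies hardware sensor readings
/etc/init.d/sfcbd-watchdog restart

# Confirm WBEM is enabled on the host
esxcli system wbem get

# Heavier step: restart all management agents (does not affect running VMs)
services.sh restart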

VMware ESXi 6.0.0 error


Hello

I have tried to install VMware ESXi 6.0.0 on an HPE ProLiant DL120 Gen9 and it gives me this error: "error : file:///Ks.cfg:line 3: install --firstdisk specified, but no suitable disk was found"

What went wrong? This is my first time doing this installation.
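For comparison, line 3 of a kickstart file normally looks like the sketch below; the error means the disk selector matched nothing, which can happen when the storage controller isn't visible to the installer (missing driver, or the controller in a mode ESXi doesn't see). The selector shown is just an example:

# Ks.cfg, line 3: select the first local disk and overwrite any VMFS already on it
install --firstdisk=local --overwritevmfs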

ESXi - esxtop shows high %RDY value for the "system" service


We have one ESXi host in vCenter (ver. 6.7) with roughly 35 virtual machines (Windows 10 / VDI / Horizon View / PCoIP). On this host we also have 5 additional VMs (WinSrv2016, vCenter Server Appliance, vSAN VM...).

All Windows 10 VMs have 2 vCPUs (2 cores per socket / 1 socket), and CPU Ready time in vCenter is OK (the average value is "52" for both vCPUs together). This host also has a vGPU (Nvidia Tesla M10).

When I check performance with esxtop (SSH), the %RDY value for all Windows 10 VMs is below 1.00. What bothers me is the %RDY value for "system", which is around 2000.00 all of the time. A screenshot is attached. Is this normal, or what should we check in order to fix it?

ESXTOP-system.jpg
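For what it's worth, the %RDY esxtop shows on a group line is the sum over every world in that group, so the "system" group (which contains many mostly-idle kernel worlds) can show a large number without indicating contention. A sketch of how to look at the per-world values:

# In the CPU view, press 'e' and enter the GID of the "system" group to expand
# it into individual worlds; each world's own %RDY is what matters
esxtop

# Or capture a couple of batch samples for offline review
esxtop -b -d 5 -n 2 > /tmp/esxtop-sample.csv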

Does ESXi support VROC


I have an Intel server with 4 NVMe drives in a RAID 10 using VROC. When I boot into VMware ESXi and try to create a datastore, the drives are all listed separately rather than as my VMD volume. Are there drivers I need to load, or is this not possible because of the way VROC is designed?


How to share a SCSI device from one host to another


Hello,

I have 2 hosts, ESXi01 and ESXi02, running VMware ESXi 5.5.0. There is an IBM TS2250 tape drive connected to ESXi02, and a virtual machine on ESXi01 needs access to it. When I try to add it in the VM settings it's greyed out and unavailable, but it is available to VMs on ESXi02, where it is connected.

 

The funny thing is that it worked before, but after a power failure I had to start the VM quickly without any physical access to the server room, so I temporarily removed this SCSI device (which was powered off) to be able to start the VM, and now I can't add it back. I'd be grateful for any tips.

ESXi 6.0 host - Some VMs go offline intermittently


Recently, like some of you, I've been going through the process of upgrading/migrating my services from Server 2008 to Server 2012 and then 2016. I also recently upgraded our ESXi host to version 6.0, and will upgrade again to 6.5 (the newest version my server is compatible with) in the coming weeks when I can schedule a maintenance window.

 

Some of my VMs have now been going offline intermittently. I have to disable and re-enable the NIC on the VM to bring it back online. I've been checking through the Windows logs but cannot find much. I've heard recommendations to upgrade VMware Tools, which I've done for these servers.

What can I do to troubleshoot this?

 

Are there logs on the host that will give me data on this?
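For reference, the host-side logs that normally record link flaps and virtual port connect/disconnect events; a sketch (VMNAME and the vmnic name are placeholders):

# Physical uplink and link-state events
grep -i "link state" /var/log/vmkernel.log

# Network observations (link up/down on vmnics)
grep -i vmnic /var/log/vobd.log

# Per-VM events around the time a guest dropped off the network
grep -i "VMNAME" /var/log/hostd.log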

 

Thank you all.

ESXi 6.0 - Some VMs have black screen under "Console"


Just as the title says. This happens to some, but not all of my servers.

 

I tried increasing the video memory to 32 MB on one of the affected servers, and now none of them will open a console. I'll have to reboot the host during a downtime window but won't be able to do that for a while (single production host).

 

Any advice would be appreciated.

Event records showing lost datastore connectivity


Hi guys,

I have the following problem with VMware ESXi 6.0: in the event viewer I see these messages:

 

Lost access to volume 5bd84f9e-323a5e60-00b0-000af7e87034 (datastore1) due to connectivity issues. Recovery attempt is in progress and the outcome will be reported shortly.
information
12/09/2019 04:04:18 p.m.
datastore1


Successfully restored access to volume 5bd84f9e-323a5e60-00b0-000af7e87034 (datastore1) following connectivity issues.

information
12/09/2019 04:04:29 p.m.
vmex

 

This message also appears for datastore2 on the same server.

 

The server has the following specs:
Dell PowerEdge T440
VMware ESXi 6.0.0 build 5050593
Datastore1 is a RAID 5
Datastore2 is a RAID 1


Have any of you had the same problem, and why does ESXi show these messages?
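For reference, when correlating these events the storage side of vmkernel.log and the device state are the usual places to look; a sketch from the host shell:

# Storage messages around the timestamps shown in the event view
grep -iE "lost access|deteriorated|valid sense data" /var/log/vmkernel.log

# State and stats of the local RAID volumes backing datastore1/datastore2
esxcli storage core device list
esxcli storage core device stats get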

 

I'd appreciate any help or information to understand why these messages appear in the ESXi events.

 

Bryan A.

ESXi 6.7 - PCIe passthrough device not working in MSI interrupt mode


Hi all,

I have a PCIe device on ESXi 6.7 configured for passthrough to the guest operating system (Ubuntu 16.04 LTS). I referred to the following knowledge base articles:

https://kb.vmware.com/s/article/2142307

https://kb.vmware.com/s/article/1010789

The PCIe device driver in the guest OS must use MSI interrupt mode with a 32-message request.

Using the command "lspci -xxx", I get different MSI register values on the ESXi host and in the guest operating system.

The "81 01" value for MSI means only 1 message is allowed, so the driver does not work.

Why are the register values different on the ESXi host and in the guest OS?

 

host ESXi  lspci -d

00: 57 1e 01 00 06 04 10 00 00 00 00 12 10 00 00 00

10: 00 00 30 f7 00 00 00 00 0c 00 e4 ff 2f 00 00 00

20: 0c 00 e0 ff 2f 00 00 00 00 00 00 00 57 1e 01 00

30: 00 00 00 00 40 00 00 00 00 00 00 00 0a 01 00 00

40: 01 50 c3 df 08 00 00 00 00 00 00 00 00 00 00 00

50: 05 70 8b 01 98 02 e0 fe 00 00 00 00 00 00 00 00

60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

70: 10 00 02 00 c2 8f 2c 11 30 20 10 00 01 6d 43 00

80: 40 00 81 10 00 00 00 00 00 00 00 00 00 00 00 00

90: 00 00 00 00 10 00 00 80 00 00 00 00 02 00 00 80

a0: 03 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00

b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

 

guest lspci -xxx

00: 57 1e 01 00 07 04 10 00 00 00 00 12 10 40 00 00

10: 00 00 30 fd 00 00 00 00 0c 00 af e7 00 00 00 00

20: 0c 00 a8 e7 00 00 00 00 00 00 00 00 57 1e 01 00

30: 00 00 00 00 40 00 00 00 00 00 00 00 07 01 00 00

40: 01 50 c3 df 08 00 00 00 00 00 00 00 00 00 00 00

50: 05 70 81 01 00 00 e0 fe 00 00 00 00 e5 40 00 00

60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

70: 10 00 02 00 00 00 00 00 00 00 00 00 02 06 00 00

80: 00 00 02 02 00 00 00 00 00 00 00 00 00 00 00 00

90: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
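To make the "81 01" observation concrete: the two bytes at offsets 0x52-0x53 are the 16-bit MSI Message Control register (little-endian), so the host reads 0x018b and the guest 0x0181, and bits [3:1] are the Multiple Message Capable field. A small sketch that decodes both values:

# Decode MSI Message Control: bit 0 = MSI enable, bits [3:1] = log2(vectors supported)
for mc in 0x018b 0x0181; do
  printf 'MsgCtl=0x%04x  MSI-enable=%d  capable-of=%d vector(s)\n' \
    $(( mc )) $(( mc & 1 )) $(( 1 << ((mc >> 1) & 0x7) ))
done
# The host value decodes to 32 vectors, the guest value to 1 -- which matches
# the driver failing when it requests 32 MSI messages.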

Create a backup of the whole server


Hello,

How can I create a backup of an ESXi host together with its VMs?

 

Thank you.
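For the host configuration itself there is a built-in mechanism; the VMs need a separate backup product or an OVF/clone export. A sketch from the ESXi shell:

# Flush the running state to disk, then generate a downloadable config bundle
# (the second command prints the URL of configBundle.tgz)
vim-cmd hostsvc/firmware/sync_config
vim-cmd hostsvc/firmware/backup_config

# Later, on the same version/build, the bundle can be restored with:
#   vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz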

E105: PANIC: PhysMem: creating too many Global lookups


Hi all,

 

 

After running for a long time without any problems, some VMs have started to hang.

They don't react to anything. CPU usage is 0 Hz, and if you try to take over the VMware console of the VM, the connection is interrupted.

So I started digging into the logs of the virtual machine and found a lot of log entries like this:

 

Log for VMware ESX version=6.7.0 build=build-14320388

 

2019-12-07T14:14:01.622Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.627Z| vcpu-0| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.630Z| vcpu-0| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.634Z| vcpu-4| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.637Z| vcpu-0| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.639Z| vcpu-0| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.643Z| vcpu-3| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.647Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.650Z| vcpu-0| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.653Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.657Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.660Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.662Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.668Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.673Z| vcpu-2| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

2019-12-07T14:14:01.679Z| vcpu-4| W115: Memory regions  (0xfc000000, 0xfcfff000) and  (0xfc810000, 0xfc81f000) overlap (0x54f0024000 0x5520026000).vcs = 0xfff, vpcuId = 0xffffffff

 

 

And after this:

 

2019-12-07T14:14:01.679Z| vcpu-4| E105: PANIC: PhysMem: creating too many Global lookups.

2019-12-07T14:14:08.634Z| vcpu-4| W115: A core file is available in "/vmfs/volumes/5cdd51ee-fd4310f2-58c4-24b6fd652bce/0-pg-virtgpu008/vmx-zdump.000"

2019-12-07T14:14:08.634Z| mks| W115: Panic in progress... ungrabbing

2019-12-07T14:14:08.634Z| mks| I125: MKS: Release starting (Panic)

2019-12-07T14:14:08.634Z| mks| I125: MKS: Release finished (Panic)

2019-12-07T14:14:08.643Z| vcpu-4| I125: Writing monitor file `vmmcores.gz`

2019-12-07T14:14:08.722Z| vcpu-4| W115: Dumping core for vcpu-0

2019-12-07T14:14:08.722Z| vcpu-4| I125: VMK Stack for vcpu 0 is at 0x451ae7c93000

2019-12-07T14:14:08.722Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:09.115Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:09.116Z| vcpu-4| W115: Dumping core for vcpu-1

2019-12-07T14:14:09.116Z| vcpu-4| I125: VMK Stack for vcpu 1 is at 0x451af3b13000

2019-12-07T14:14:09.116Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:09.510Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:09.510Z| vcpu-4| W115: Dumping core for vcpu-2

2019-12-07T14:14:09.510Z| vcpu-4| I125: VMK Stack for vcpu 2 is at 0x451aebb13000

2019-12-07T14:14:09.510Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:09.904Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:09.905Z| vcpu-4| W115: Dumping core for vcpu-3

2019-12-07T14:14:09.905Z| vcpu-4| I125: VMK Stack for vcpu 3 is at 0x451aeb713000

2019-12-07T14:14:09.905Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:10.300Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:10.300Z| vcpu-4| W115: Dumping core for vcpu-4

2019-12-07T14:14:10.300Z| vcpu-4| I125: VMK Stack for vcpu 4 is at 0x451affd13000

2019-12-07T14:14:10.300Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:10.692Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:10.693Z| vcpu-4| W115: Dumping core for vcpu-5

2019-12-07T14:14:10.693Z| vcpu-4| I125: VMK Stack for vcpu 5 is at 0x451af0313000

2019-12-07T14:14:10.693Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:11.085Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:11.085Z| vcpu-4| W115: Dumping core for vcpu-6

2019-12-07T14:14:11.085Z| vcpu-4| I125: VMK Stack for vcpu 6 is at 0x451afcb13000

2019-12-07T14:14:11.085Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:11.474Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:11.474Z| vcpu-4| W115: Dumping core for vcpu-7

2019-12-07T14:14:11.474Z| vcpu-4| I125: VMK Stack for vcpu 7 is at 0x451af2813000

2019-12-07T14:14:11.474Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:11.941Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:11.941Z| vcpu-4| W115: Dumping core for vcpu-8

2019-12-07T14:14:11.941Z| vcpu-4| I125: VMK Stack for vcpu 8 is at 0x451af4913000

2019-12-07T14:14:11.941Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:12.334Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:12.334Z| vcpu-4| W115: Dumping core for vcpu-9

2019-12-07T14:14:12.335Z| vcpu-4| I125: VMK Stack for vcpu 9 is at 0x451ae7293000

2019-12-07T14:14:12.335Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:12.728Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:12.728Z| vcpu-4| W115: Dumping core for vcpu-10

2019-12-07T14:14:12.728Z| vcpu-4| I125: VMK Stack for vcpu a is at 0x451af4f93000

2019-12-07T14:14:12.728Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:13.121Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:13.122Z| vcpu-4| W115: Dumping core for vcpu-11

2019-12-07T14:14:13.122Z| vcpu-4| I125: VMK Stack for vcpu b is at 0x451ae8913000

2019-12-07T14:14:13.122Z| vcpu-4| I125: Beginning monitor coredump

2019-12-07T14:14:13.514Z| vcpu-4| I125: End monitor coredump

2019-12-07T14:14:34.966Z| vcpu-4| I125: Printing loaded objects

 

So the VMs have crashed, and it looks memory-related.

I have more VMs behaving like this.

Does anyone have any idea?

 

Thanks!!
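In case it helps with triage, a quick way to see which other VMs have hit the same monitor panic and to gather the dumps support will ask for; a sketch:

# Which vmware.log files contain the same panic?
grep -l "PhysMem: creating too many Global lookups" /vmfs/volumes/*/*/vmware.log

# The crash artifacts the log points at (vmx-zdump.* and vmmcores.gz next to the VM)
ls -lh /vmfs/volumes/5cdd51ee-fd4310f2-58c4-24b6fd652bce/0-pg-virtgpu008/vmx-zdump.*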


ESXi 5.5 Host Abruptly Restarting


I've got an ESXi 5.5 host that is abruptly restarting. It seems to run fine for about six months, but then it restarts without notice. Has anyone else seen this issue?
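For reference, two quick checks that usually distinguish a PSOD/crash from a power event; a sketch using the default ESXi paths:

# vmksummary.log records every boot (and hourly heartbeats), so reboots show up here
grep -i boot /var/log/vmksummary.log

# A PSOD normally leaves a dump behind
ls -lh /var/core/ 2>/dev/null
esxcli system coredump partition get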

Sudden VM console keyboard input failure on the 6.7.0 Update 3 hypervisor


Not sure what went wrong, but all of a sudden I can't use my keyboard to control any VMs via their web console.

In the latest Chrome, normal keys (a-z, 0-9) get no response from the VM, whereas the up and down keys scroll through the console's history, even when the VM is showing a setup screen that would normally intercept those inputs.

In the non-Chromium MS Edge, keyboard input works as it should.

 

Looking at the developer console (in both Edge and Chrome), I see many "Unknown class found TooManyWrites" errors triggered from main.js.

 

Any idea what could have gone wrong? This setup was working before.

One host loses connectivity in a small (production) VMware 5 Essentials Plus Cluster - Help?


I am reaching out to the community to see if anyone can help me resolve a problem in a small production VMware vSphere 5 Essentials Plus HA cluster. Apparently our support has lapsed (news to me), and for this version, while it has quietly worked great for years, I can't get any formal support. I'm doing my best here; I am not a VMware guru and am out of my element.

 

Today, for some unknown reason, one of the three identical hosts in the HA cluster lost network connectivity (partially, read on) and dropped out of the vSphere Client's control. vSphere says it cannot connect to the host, and vMotion no longer works, but the three virtual machines on this host appear to be running fine and chugging along doing production work.

 

I can SSH into the shell on each of these three identical hosts and ping other hosts on the management LAN, except from the one in question. I am completely puzzled as to how I can SSH over the management network into the "unconnected" host but not ping out from its shell. I can do this on the other two just fine.

 

Any suggestions or help would be greatly appreciated. I am reluctant to do anything aggressive on the host, as production on the VMs is working OK for now.
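For reference, a few non-disruptive checks from the affected host's ESXi shell before doing anything aggressive; a sketch (the gateway address is a placeholder):

# Uplink and management interface state
esxcli network nic list
esxcli network ip interface ipv4 get

# Reachability from the vmkernel (management) interface
vmkping 192.168.1.1

# If only the management agents are wedged, restarting them does not touch running VMs
/etc/init.d/hostd restart
/etc/init.d/vpxa restart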

DS


Good afternoon. I have a question: can I configure the same network card of an ESX host on 2 different DS?

 

Regards,

Enrique

Cannot deploy vmmark 3.1 OVA to vCenter 6.7 / ESXi 6.5


I downloaded VMmark 3.1 from VMware.

I've tried deploying the OVA using vCenter (Flash client, in Chrome) and kept getting this error:

The "Deploy OVF template" operation failed for the entity with the following error message. Unable to deploy template.

 

I also tried connecting directly to the ESXi host (6.5), and the error on the final page of the wizard complained about a missing disk.

 

Considering the OVA comes directly from VMware, I assume the 'missing disk' is a red herring.
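If the web wizard keeps failing, deploying with ovftool from a workstation is a useful cross-check, since it reports the underlying OVF error rather than the generic message; a sketch with placeholder host, datastore, and path:

ovftool --datastore=datastore1 --diskMode=thin \
    /path/to/VMmark3.1.ova 'vi://root@esxi-host.example.com/'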

 

Thanks
