Channel: VMware Communities : Discussion List - ESXi

Compiling drivers for the Aquantia AQC 10GB NICs


Hi, this week I started an attempt to compile the Linux drivers for the AQC line of 10 Gb cards for ESXi. This would make ESXi more functional on hardware such as the Mac Mini.

 

I downloaded the 32-bit CentOS 5.6 image and the 6.7U1 toolchain, and it is looking promising. I still get a lot of errors, but I expect they can be solved.

There's one thing, however, that I am failing to understand. The AQC drivers reference `struct page` and the `count` member of this structure.

However, in the mm.h file distributed with VMware's source code, we learn that struct page is actually empty:

 

```
#if defined(__VMKLNX__)
/**
 *  struct page - page handle structure
 *
 *  ESX Deviation Notes:
 *  As we don't support page handle, this should be an opaque structure. In vmklinux
 *  a page handle represents the actual page number.
 *  Such an handle should not be deferenced nor used in any form of pointer
 *  arithmetic to obtain the page descriptor to any adjacent page. The pointer
 *  should be treated as an opague handle and should only be used as argument to
 *  other functions.
 */
/* _VMKLNX_CODECHECK_: page */
struct page {
};
```

Is there a way to work around this, or is this a dead end?
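One possible workaround sketch, under the assumption that the driver's page bookkeeping can be made self-contained (the names below are hypothetical, not from the real AQC source): keep the reference count in driver-private state and never dereference the handle.

```
#include <linux/mm.h>

/*
 * Sketch only: under __VMKLNX__, struct page is an opaque handle, so
 * code such as page->count cannot compile. One option is to carry a
 * driver-private reference count next to the handle and never touch
 * the handle's members. aq_rx_page and its helpers are hypothetical.
 */
struct aq_rx_page {
    struct page *page;     /* opaque handle under __VMKLNX__ */
    unsigned int refcount; /* driver-private stand-in for page->count */
};

static inline void aq_rx_page_get(struct aq_rx_page *rxp)
{
#if defined(__VMKLNX__)
    rxp->refcount++;        /* do the bookkeeping ourselves */
#else
    get_page(rxp->page);    /* stock Linux keeps the count in struct page */
    rxp->refcount++;
#endif
}

static inline int aq_rx_page_is_shared(const struct aq_rx_page *rxp)
{
    return rxp->refcount > 1;   /* replaces page_count(page) != 1 checks */
}
```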


VM memory must be less than or equal to 64 GB... Why?


Hello,

 

I'm very new to the virtualization world.

 

Two days ago I installed VMware ESXi 6.7 to see whether the solution is sustainable for me. I'm using evaluation mode, and in my VM settings I can't assign more than 64 GB of memory, but I need at least 128 GB (my server has 512 GB of physical RAM).

 

I thought that ESXi evaluation mode doesn't have a limit on VM memory.

 

Thanks for your help.

Missing Datastore after Reboot (ESXi 6.7)


After restarting the ESXi host that housed my VCSA, the datastore containing the vCenter VM was missing from the web UI. The disk still shows up, but rescanning and refreshing does not mount the datastore.

 

The datastore is backed by a VMFS partition on the same disk as the ESXi installation. I was not able to find any snapshots of the VMFS. Running:

 

ls -lisa /vmfs/devices/disks/

I can see both the disk and a mapping for the partition.

 

partedUtil getptbl /dev/disks/naa.5000c5003b6c54af

Shows no issues with the partition table.
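For reference, the usual next checks look something like this (standard esxcli commands; whether the volume is being treated as an unresolved VMFS snapshot is only an assumption, and "datastore1" is an example label):

```
# List every filesystem ESXi can see, mounted or not
esxcli storage filesystem list

# Check whether the volume shows up as an unresolved snapshot
esxcli storage vmfs snapshot list

# If it appears there, try mounting it by label or UUID
esxcli storage vmfs snapshot mount -l "datastore1"
```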

 

The only clue I can find is in vmkernel.log:

2020-09-21T17:12:18.876Z cpu3:2097198)NMP: nmp_ThrottleLogForDevice:3802: Cmd 0x28 (0x459b7fbd93c0, 2476806) to dev "naa.5000c5003b6c54af" on path "vmhba1:C0:T1:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x4 0x32 0x0. Act:NONE

I'm somewhat of a VMware newbie; any suggestions for where I should go from here?

Root Account Lockout


I keep having my root account locked out, and it is not unlocking after the 900-second timeout. I thought it was an issue with my backup software, Veeam Backup and Replication, but I use that for a lot of different clients that I support and am not having this issue anywhere else. According to auth.log, the issue seems to be caused by one server, an AD server, but from random ports. I have attached the auth.log.

 

I am able to unlock the root account, and everything works for a short time; then the issue is back.
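A sketch of the settings that govern this behaviour (standard ESXi advanced options; the values below are examples, and raising the threshold only masks whatever is hammering the account):

```
# Show the current lockout threshold and unlock delay
esxcli system settings advanced list -o /Security/AccountLockFailures
esxcli system settings advanced list -o /Security/AccountUnlockTime

# Example: allow more failures before locking, keep the 900 s unlock window
esxcli system settings advanced set -o /Security/AccountLockFailures -i 30
esxcli system settings advanced set -o /Security/AccountUnlockTime -i 900
```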

ESXi 6.7 Can't reach WebUI


Hello,

We have a fresh (three-month-old) Dell PowerEdge R740xd server with a factory-installed ESXi 6.7.

In the beginning the server worked fine, but after a few weeks we could no longer access the Web UI. This error message appears:

 

 

503 Service Unavailable (Failed to connect to endpoint: [N7Vmacore4Http16LocalServiceSpecE:0x000000360c6ffd00] _serverNamespace = / action = Allow _port = 8309)

 

Veeam can no longer reach the server either; we can only access it via SSH. vCenter is not yet installed.

Dell support is unfortunately very slow, and their solutions have not been good so far.

 

So, please help us!

 

---

 

Update:

 

I have now found out that the hostd service was not running.

After I started it via SSH (/etc/init.d/hostd start), everything runs fine.

The question now is: why does the service stop?
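If it happens again, a quick triage sketch from the SSH session (stock init script plus the standard log locations; nothing here is specific to the R740xd):

```
# Is hostd running, and if not, when and why did it die?
/etc/init.d/hostd status
tail -n 50 /var/log/hostd.log

# Restart it and watch the log for a repeat crash
/etc/init.d/hostd restart
tail -f /var/log/hostd.log
```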

Can't install VIB


Hi all,

I'm trying to install a NIC driver on my ESXi host from the iLO remote console using the ESXi shell, as I don't have management connectivity to it.

I created an ISO with the VIB files, mounted it on the iLO virtual CD-ROM, and mounted it in ESXi with the command:

vsish -e set /vmkModules/iso9660/mount mpx.vmhba33:C0:T0:L0

 

The actual content of the ISO is:

 

[screenshot: iso.PNG]

Everything works fine until I try to install the VIB. When I do an ls -l, I cannot see any of the files from the ISO; I just see this:

 

[screenshot: vib.PNG]

 

I have tried the commands below without any luck:

 

esxcli software vib install -v net-nx-nic_6.0.643-1OEM.600.0.0.249.4585.vib

esxcli software vib install -v NET-NX-2

esxcli software vib install -v NX_NIC-3

 

 

I just keep getting this message, for example for NX_NIC-3:

 

[screenshot: error.PNG]

 

 

What is wrong here? Any ideas?
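One thing worth noting: `esxcli software vib install -v` requires an absolute path to the .vib file, so a bare filename like the ones above will fail. A sketch, assuming the ISO is mounted under /vmfs/volumes (the volume label below is a placeholder):

```
# Find where the mounted ISO landed
ls /vmfs/volumes/

# Install using the full absolute path
esxcli software vib install -v /vmfs/volumes/<iso-label>/net-nx-nic_6.0.643-1OEM.600.0.0.249.4585.vib
```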

ESXi 6.0 and Emulex LPe11000 (possibly other Emulex cards that aren't working oob)


I'm posting this so others might not have to spend hours on the line with support figuring this out.

My environment is older and has Emulex LPe11000 HBA cards. This card isn't discovered out of the box.

Obviously a problem, right? After spending hours on the phone with VMware support, they pointed me to Emulex.

The issue really is that some of VMware's support personnel don't know where they keep all the VIB files for their OS.

I was able to find the attached VIB here. Import and apply it, and your Emulex card will start working; then you can take your hosts up to ESXi 6.

http://vibsdepot.hp.com/hpq/mar.30.2015/
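For anyone applying it, the usual pattern is the following (filenames are placeholders for whatever you download from that depot; esxcli needs absolute paths):

```
# Copy the VIB to the host, then install it
esxcli software vib install -v /tmp/<emulex-driver>.vib

# Or, for an offline-bundle ZIP from the same depot:
esxcli software vib install -d /tmp/<offline-bundle>.zip
```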

Error loading /vsan.v00 - Fatal Error: 6 (Buffer too small)


My ESXi 6.5 host is halting after a reboot with this error. The server boots from a SAN LUN. Any ideas on what could be causing this particular issue all of a sudden?

 


VMware ESXi: accessing from another network


Hi, I've been trying to port-forward to my VMware ESXi 6.5 server so I can access it from other networks.

It doesn't work. I tried every port that a VMware document listed, but no result.

Does anyone know how to do this?
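The host client itself listens on TCP 443, and the remote console/authd path uses port 902, so a minimal forwarding sketch on a hypothetical Linux gateway would look like this (192.168.1.10 is a placeholder for the ESXi host; exposing these ports directly to the internet is generally discouraged in favour of a VPN):

```
# Forward the web UI (TCP 443) and remote console (TCP/UDP 902)
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 192.168.1.10:443
iptables -t nat -A PREROUTING -p tcp --dport 902 -j DNAT --to-destination 192.168.1.10:902
iptables -t nat -A PREROUTING -p udp --dport 902 -j DNAT --to-destination 192.168.1.10:902
```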

Threadripper 3970X GPU and USB Passthrough, ESXi 6.7 U3, NVIDIA RTX 2080, TRX40


Hi all -- I wanted to describe progress on an update to my former Threadripper system.

 

Starting point: 4 VMs on a Threadripper 1950X, each with GPU passthrough (1 x 2080, 3 x 2070). 64 GB RAM, ESXi 6.7 U3. The system was quite stable (see prior thread).

 

Target: Threadripper 3970X (double the cores), 128 GB RAM, on an ASRock Creator TRX40 motherboard.

 

I started by validating the new hardware under a temporary (non-virtualized) Windows build. Stuff worked.

 

BIOS settings used: defaults, except:

Changed some fan settings to make them quieter

Turned on XMP

Turned on SR-IOV

Left PBO off (the default, but I explicitly set it to disabled; PBO sucks up huge amounts of power for little performance benefit, to say nothing of validation!)

Used the current BIOS version, not the beta for the 3990X.

 

ESXi installation: I used my previous installation. This had passthru.map entries for AMD and NVIDIA as detailed in my last post. It also had the previously recommended EPYC configuration change (which I removed) and a preinstalled Aquantia NIC driver.

 

Moved 2 x M.2 SSDs from the old into the new system. The system booted nicely into ESXi. All hardware passthrough settings vanished, as expected. Of note, *neither* of the NICs on this board has a native driver. I use the Aquantia driver and live off the 10 Gb Aquantia NIC. I have no idea if there is a Realtek Dragon 2.5G driver out there.

 

Redid the hardware passthrough. All GPUs passed back through to their VMs, yay. Only two of them would boot, boo. Eventually, after much gnashing of teeth, I remade three VMs from scratch: they would all keep crashing immediately upon booting Windows.

 

This was interesting. ESXi would report that the GPUs had violated memory access and advise adding a passthru.map entry, which didn't fix the problem. Changing the host BIOS to remove CSM support and enable above-4G decoding, and enabling 64-bit MMIO in the VM, didn't fix it either. A new VM with a fresh Windows install worked.

 

There were several other interesting changes from the previous system:

 

Disabling MSI on the GPUs made them keep crashing, unlike previously, when it fixed stuttering

No cpu pinning or NUMA settings were used or needed

The mystical cpuid.hypervisor setting remains required to avoid error 43 (see the snippet after this list)
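For reference, this is the .vmx entry usually meant by "the cpuid.hypervisor setting" (a sketch of the standard hypervisor-hiding tweak, not specific to this board):

```
# Hide the hypervisor CPUID bit so the NVIDIA guest driver
# does not refuse the GPU with error 43
hypervisor.cpuid.v0 = "FALSE"
```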

 

With these caveats, I got 4 bootable VMs, each using the NVIDIA card's own USB-C connector for keyboard/mouse, 8 CPUs per VM. Which led to the next problem, which I haven't been able to solve:

 

The mice/keyboards would all intermittently freeze for moments to minutes, and sometimes not come back. Lots of testing inside Windows showed no cause. Interestingly, the problem was 1) worse with a high-end G502 mouse, and 2) much worse inside the Windows UI -- it never happened, for example, in demanding real-time full-screen apps. I was sure it was going to be some bizarre Windows problem. Rarely (every few hours) systems would crash completely (while idle!) with the same memory access violation. Also, rebooting one of the VMs would make other VMs momentarily stutter. None of this ever happened on the 1950X system, where these controllers were reliable.

 

I eventually worked around the problem with the motherboard's USB controllers. There are 5: 2 x Matisse, 2 x Starship, and 1 x ASMedia. The Matisse ones are lumped into the same IOMMU group and won't pass through (they are perpetually "reboot needed"). The ASMedia chip worked with no problems (USB-C port on the back of the motherboard). The Starship USB 3.0 controllers both worked IF you had a passthru.map entry moving them to the d3d0 reset method. Otherwise, booting a VM with one of these controllers failed AND crashed a different VM with a GPU memory access violation, and the controller then permanently disappeared until the system was powered down (not just rebooted). Wow, talk about bad crashes.
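A sketch of the passthru.map entry described above (the 148c device ID for the Starship USB 3.0 controller is an assumption; verify the real IDs with lspci on the host before copying):

```
# /etc/vmware/passthru.map
# vendor-id  device-id  reset-method  fptShareable
1022         148c       d3d0          false
```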

 

Using these 3 motherboard controllers on 3 VMs appears rock stable (I haven't tested the fourth yet). One of them has 64-bit MMIO enabled, which probably isn't needed.

 

Things I haven't gotten around to testing yet:

 

1. Does isolating the VM to one CCX fix anything?

2. If only one VM is running, does the USB-C NVIDIA controller become reliable?

3. Does turning off XMP or using the latest beta BIOS change anything?

 

Other advice -- I'm obviously waaay off the HCL here -- but don't even try DRAM-less SSDs. The datastore *vanishes* under high load. Bad. The same thing happened with my OEM Samsung until I updated the firmware, but that's another story, well documented elsewhere.

 

I'm really puzzled by the NVIDIA USB-C thing. It would also be nice if the Matisse controllers worked. Otherwise I'm mostly pleased -- many of the kludges needed on older ESXi versions and on the 1950X with its wacky NUMA configuration are no longer needed, and the new system is *much* faster.

 

 

Hope this helps someone else. If anyone can tell me what's going on (or at least that it's not just me), it would be much appreciated. I speculate it's a BIOS bug.

 

Thanks LT

NVMe health monitoring


Hi,

 

I'm using a Samsung 1725b NVMe drive on ESXi 7.0 and wonder what people are using to:

 

- monitor its health (TBW, errors, temperature)

- predict failures based on these (few) data points

 

For a normal SSD, I get a lot of information when using:

 

# esxcli storage core device smart get -d ID
Parameter                          Value  Threshold  Worst  Raw
Health Status                      OK     N/A        N/A    N/A
Media Wearout Indicator            99     5          99     172
Write Error Count                  100    10         100    0
Power-on Hours                     92     0          92     151
Power Cycle Count                  99     0          99     14
Reallocated Sector Count           100    10         100    0
Drive Temperature                  69     0          63     31
Write Sectors TOT Count            99     0          99     39
Read Sectors TOT Count             99     0          99     40
Initial Bad Block Count            100    10         100    0
Program Fail Count                 100    10         100    0
Erase Fail Count                   100    10         100    0
Uncorrectable Error Count          100    0          100    0
Pending Sector Reallocation Count  100    0          100    0

 

 

For the NVMe, I only get this:

 

Parameter                 Value  Threshold  Worst  Raw
Health Status             OK     N/A        N/A    N/A
Power-on Hours            1677   N/A        N/A    N/A
Power Cycle Count         3      N/A        N/A    N/A
Reallocated Sector Count  0      90         N/A    N/A
Drive Temperature         36     79         N/A    N/A
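One avenue worth trying: newer builds expose the controller's raw SMART/health log through the esxcli nvme namespace, which reports more fields than the generic view above (the adapter name below is an example; list the adapters first):

```
# List NVMe adapters, then dump the controller's SMART/health log page
esxcli nvme device list
esxcli nvme device log smart get -A vmhba2
```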

 

 

There have been some efforts to get smartctl up and running, but everything is unofficial.

https://www.virten.net/2016/05/determine-tbw-from-ssds-with-s-m-a-r-t-values-in-esxi-smartctl/

 

Thanks for any info.

 

     -Mark

ESXi Hardware RAID Issue


Hello,

 

I have a dedicated Asus/Supermicro server, and I am trying to configure Intel hardware RAID on ESXi 7.0: a RAID 0 across two 1 TB disks. After configuring the RAID in the Intel BIOS, I boot into the ESXi installer, and during installation it shows 2 separate 1 TB disks when it should show a single RAID volume. Any solution for that?

ESXi 6.5 new VM will NOT boot ISO


Hi,

 

I'm trying to build a new CentOS VM. I downloaded the ISO last night, moved it over to my server this morning, and built the VM. I chose the CD/DVD drive as an ISO and set it to connect at power on; however, every time I power on the VM it tries to boot from the LAN and never seems to boot the ISO. I've tried multiple times (re-selecting the ISO), no luck.

 

help?

 

Thanks

Can the error 43 I get when passing my graphics card to an ESXi VM be caused by the audio sub-device's reset errors?

Hello.

 

 

I'm trying to fix the error 43 that comes up when I pass through my RTX 2080 Ti graphics card, as well as my GTX 1060, to a Windows 10 VM with ESXi; it doesn't matter which version, I've tried a lot of them: 6.5, 6.7, 7. I investigated further and tried to reproduce the same error in different circumstances to understand the repeating patterns, and I think I found something interesting: a lot of similarities between a Xen VM where I passed through the GTX 1060 and an ESXi VM where I passed through the GTX 1060 as well as the RTX 2080 Ti. But let's go slowly. First, I created a Xen VM (using the EFI boot loader) and, as I said, passed through the GTX 1060 by adding these parameters to the Xen cfg file:

 

 

nano win10-gtx-headless.cfg

builder='hvm'

bios='ovmf'

bios_override = '/usr/lib/xen-4.11/boot/ovmf.bin'

memory = 4096

 

 

This is what it says when it starts:

 

 

root@ziomario-z390aoruspro:/etc/xen# ./create-win10-gtx-headless.sh

 

Parsing config from win10-gtx-headless.cfg

got a tsc mode string: "default"

libxl: error: libxl_pci.c:1162:libxl__device_pci_reset: Domain 0:write to /sys/bus/pci/devices/0000:02:00.0/reset returned -1: Inappropriate ioctl for device

libxl: error: libxl_pci.c:1167:libxl__device_pci_reset: Domain 0:The kernel doesn't support reset from sysfs for PCI device 0000:02:00.1

 

lspci:

 

02:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1)

02:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller

 

 

In addition to this, I got an NVIDIA driver and applied the kvm patcher to it; I re-signed the driver and disabled driver signature enforcement. The first time the VM starts, I don't see error 43, BUT the external monitor does not turn on.

 

 

[screenshot: 11-2020-09-24_11-21-03.png]

 

 

The driver is properly signed under Xen as well, and it does not show error 43. But when I reboot the VM, it gives error 43. Check below:

 

 

[screenshot: 2-2020-09-24_11-15-34.png]


 

OK, this is the same behaviour I see when I power on a VM with ESXi! Someone suggested that I use EFI instead of the old legacy MBR. I did, but the problem is still there. Good. Now, I did the same with my GeForce 2080 Ti and, as you can see below, there isn't any error 43:

 

 

[screenshot: 3-2020-09-24_11-15-59.png]


 

This is what happens when I boot the VM with the working 2080 Ti:

 

 

root@ziomario-z390aoruspro:/etc/xen# ./create-win10-rtx-body.sh

 

Parsing config from win10-rtx-body.cfg

got a tsc mode string: "default"

libxl: error: libxl_pci.c:1167:libxl__device_pci_reset: Domain 0:The kernel doesn't support reset from sysfs for PCI device 0000:01:00.1

libxl: error: libxl_pci.c:1167:libxl__device_pci_reset: Domain 0:The kernel doesn't support reset from sysfs for PCI device 0000:01:00.2

libxl: error: libxl_pci.c:1167:libxl__device_pci_reset: Domain 0:The kernel doesn't support reset from sysfs for PCI device 0000:01:00.3

 

lspci:

 

01:00.0 VGA compatible controller: NVIDIA Corporation TU102 [GeForce RTX 2080 Ti] (rev a1)

01:00.1 Audio device: NVIDIA Corporation TU102 High Definition Audio Controller (rev a1)

01:00.2 USB controller: NVIDIA Corporation TU102 USB 3.1 Host Controller (rev a1)

01:00.3 Serial Bus controller [0c80]: NVIDIA Corporation TU102 USB Type-C UCSI Controller (rev a1)

 

 

It works for the 2080 Ti (meaning I don't get any error 43 after the reboot and I can use an external screen); it does not work for the 1060, nor under ESXi.

 

Hypothesis: the reasons I can't pass through my 1060 or my 2080 Ti with ESXi are the same reasons the GTX 1060 does not work inside a Xen VM, because I see an identical behavioural pattern.

 

But it doesn't end there. I see another repeating pattern.

 

Check the upper-right corner of the screenshot where the VM runs with the 1060, and of the screenshot where it runs with the 2080 Ti. In the first you will see a red exclamation mark; in the second there isn't one. Well, I also see the red exclamation mark in the upper-right corner of a Windows 10 / ESXi VM. And this is another identical behaviour.

 

Final conclusion: since I see some relevant identical patterns between the two situations, I suspect that the technical reason why I get error 43 on a Windows 10 + ESXi VM could be hidden inside these error messages:

 

 

libxl: error: libxl_pci.c:1162:libxl__device_pci_reset: Domain 0:write to /sys/bus/pci/devices/0000:02:00.0/reset returned -1: Inappropriate ioctl for device

 

libxl: error: libxl_pci.c:1167:libxl__device_pci_reset: Domain 0:The kernel doesn't support reset from sysfs for PCI device 0000:02:00.1

 

 

I feel that the common thread between the two situations is the audio device integrated alongside the other sub-components of the graphics card. For some reason, ESXi does not support some feature of the audio sub-device included on the 1060 and the 2080 Ti, and, I think, on a lot of other NVIDIA consumer graphics cards.

 

Take into consideration that I always pass through all of the graphics card's sub-components. I tried not passing the audio device, but that didn't fix anything.
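Since the suspicion here is about reset handling, one commonly tried (and unverified) ESXi-side experiment is forcing a different reset method for all NVIDIA functions, audio sub-device included, via passthru.map; 10de is NVIDIA's vendor ID, and ffff matches any device ID:

```
# /etc/vmware/passthru.map -- experiment, not a confirmed fix
# vendor-id  device-id  reset-method  fptShareable
10de         ffff       d3d0          false
```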

 

 

Why do I see the red exclamation mark on the VM with the 1060 and on the ESXi VM, but not on the VM with the 2080 Ti? Below you can see some screenshots showing that a Windows 10 VM on ESXi behaves the same way:

 

[screenshot: 19-Screenshot_1.png]
[screenshot: 20-Screenshot_1.png]
[screenshot: 21-Screenshot_1.png]

NVIDIA Tesla M60 GPU does not show as passthrough capable


I tried to find a video card listed as compatible with ESXi 6.7, so we purchased the NVIDIA Tesla M60. When I go into the hardware tab, the host sees the video card, but passthrough is grayed out and marked not capable... Really hoping I'm just missing something, because we thought this was one of the most compatible cards we could get, and it still doesn't want to work.

 

The computer has a Xeon E7 v3/Xeon E5 v3/Core i7-class CPU. I have 120 items on the hardware list and am not sure what else I could post that would help.


ESXi 6 - PCI passthrough for Intel Skylake chipset onboard VGA, SATA


My project goal is to build a home server/desktop with ESXi, where I can run my Windows, Ubuntu Linux, and NAS VMs.

Build Specs

Intel i5-6500, Gigabyte H170 Mini-ITX motherboard, 16 GB DDR4-2133 memory

There is only one PCIe slot, which is used by an additional SATA card (so I cannot use an additional VGA card).

 

I have installed ESXi 6.0 U2 and tried creating a Windows 8.1 x64 VM and a Windows 10 VM.

VMkernel DEEP 6.0.0 #1 SMP Release build-3620759 Mar 3 2016 18:41:52 x86_64 x86_64 x86_64 ESXi

 

So far I have been successful at PCI passthrough of the onboard SATA controller and the wireless controller, which tells me VT-d is working without issues.


Intel Corporation Sunrise Point-H AHCI Controller

This needed the following line added to /etc/vmware/passthru.map:

# INTEL Sunrise Point-H AHCI Controller

8086  a102  d3d0     false
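For anyone copying this, the four columns are vendor ID, device ID, reset method, and fptShareable:

```
# column meaning:  vendor-id  device-id  reset-method  fptShareable
# this entry:      8086       a102       d3d0          false
# d3d0 = reset the device via a D3-to-D0 power-state transition
```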

 

But somehow, when I assign the onboard VGA (Intel HD 530) for passthrough, VMs get stuck after booting.

 

Every time, the VMs hang right after:

 

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge4:7: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge4:6: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge4:5: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge4:4: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge4:3: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge4:2: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge4:1: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: PCIBridge4: ISA/VGA decoding enabled (ctrl 0004)

2016-04-05T22:36:50.967Z| vcpu-0| I120: pciBridge7:7: ISA/VGA decoding enabled (ctrl 0004)

 

There are no errors shown, so I don't have much to go on here.

 

I have tried different combinations of the parameters below, but still nothing:

pciHole.start="2200"

pciPassthru1.opromEnabled=TRUE

pciPassthru1.msiEnabled = "FALSE"

smc.present = "TRUE"

 

I have even tried disabling the Intel HD 530 in Safe Mode and rebooting, then enabling it later from Device Manager while in Remote Desktop. But the VM just gets stuck right there and my Remote Desktop shows a black screen. [Ping still responds.]

I also tried disabling and uninstalling VMware SVGA 3D.

 

It seems like ESXi is not giving the VM full control of the Intel HD 530 graphics.


I know integrated graphics passthrough is not recommended by VMware, but ideally I would like to use the same machine's graphics output so that I don't have to access my VMs remotely when needed (I could get rid of my old machines).


This setup works fine with XenServer, and I am able to see my VM display from the host machine's HDMI output.

[But then again, XenServer cannot do USB or Blu-ray drive passthrough.]


I am attaching the VMware logs and my .vmx file here. Could anyone please see if you can find any errors or settings that I could work with to fix the issue?

 

Note: the VM works fine without graphics passthrough, or in Safe Mode.

 

Let me know if anything else is needed.

Appreciate your time.

 

Regards

Deep

Updating from ESXi 5.0 to 7.0?


I have an ESXi server that I use for development VMs, using the free license, because I don't need any fancy VM management software. I'm going to be upgrading the hardware soon to make a bit more room, and I need to update the ESXi version to take advantage of the increased memory (moving from 32 GB to 128 GB).

 

Storage on the system is in the form of three hardware RAID volumes. ESXi is installed on a RAID 1 array, and two separate RAID 10 arrays are used for VM storage.

 

What's the best course of action to do an upgrade?

 

Is it feasible to upgrade in place from 5.0 to 6.0 U2, then to 6.7 U2, then to 7.0? Do I need to somehow find these versions, or are they downloaded as part of the upgrade process? Do I need to somehow get a license key for them?

 

Can I safely install 7.0 directly without wiping out my existing VMs? I'm aware this would require re-configuring the host itself, which isn't a big deal.

 

I've already registered for the 7.0 key and have the ISO.

 

Thanks in advance for any help.

NVIDIA Tesla K80 - Pass Through


Hello.

I have a problem using GPU hardware (an NVIDIA Tesla K80) in an HP blade.

Since vGPU drivers are not available for this NVIDIA card, we tried the passthrough configuration described in this blog post: https://cto.vmware.com/gpgpu-computing-with-the-nvidia-k80-on-vmware-vsphere-6/. However, the full config is not working in our installation: when applied to a Windows 10 Pro VM, the VM no longer boots. When using just the pciPassthru.use64bitMMIO="TRUE" config line, I can detect the new hardware in the Win10 VM, but the NVIDIA driver installation ("356.54-tesla-desktop-win10-64bit-international-whql.exe") never completes.
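For comparison, the pair of .vmx entries this kind of setup usually revolves around looks like this (a sketch only: the 64 GB sizing is an example value, and large-BAR GPUs like the K80 generally also need the VM to boot with EFI firmware):

```
pciPassthru.use64bitMMIO = "TRUE"
pciPassthru.64bitMMIOSizeGB = "64"
firmware = "efi"
```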

I also tried to install the NVIDIA drivers in an ubuntu-16.04.1-server-amd64 VM, but the VM was destroyed in the process (it was unbootable after the installation).

 

I also had previous problems that may be related:

Our blade also has the "HP P440ar RAID controller" problem (https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2110557), so we opted to install the VMware-ESXi-6.0.0-2494585-HP-600.9.3.30.2-Jun2015.iso image, since we don't have much experience building custom images.

 

I also tried to file a VMware support ticket, but the SnS is assigned to the institution I work for. In my My VMware account I can't file the ticket because I don't have any product registrations.

I have all the SnS information (the Service Activation email), but I don't know how to use it with my account.

 

Any help?

Preventing a virtual hard disk failure from stopping the virtual machine


Hello,

 

I would like to know whether some virtual hard disks of a virtual machine (VMDK or RDM) can be marked so that they don't stop the virtual machine if they become unavailable.

 

I have a virtual machine with two additional virtual hard disks (in this case VMDKs, but it would be the same for RDMs) from two LUNs located on two different storage systems. The virtual machine performs a software RAID 1 with these two virtual hard disks. So, I would like my virtual machine not to stop if one of these storage systems fails and the LUN where the disk resides becomes unavailable.

 

Is this possible?

 

Thanks in advance,

 

Christian

ESXi 6.5 build 5969303 update: vSphere Client access issue


Hello

I have upgraded my hypervisor to the latest U1 release, as in the subject line. However, when I try to connect via the vSphere Client, I get the error message below:
