When connecting a VNC client to an ESXi 6 console, the connection keeps dropping as soon as I enter the password. Does anyone know the reason? I am using the same VNC-enabled settings that worked on ESXi 5.
VMware KB: Using a VNC Client to connect to virtual machines
Hello All,
I have configured several virtual machines with vnc:
remotedisplay.vnc.port="5901"
remotedisplay.vnc.enabled="true"
remotedisplay.vnc.password="secret"
This works great; however, when connecting to the guest via VNC from my workstation, I cannot copy from my machine to the guest or vice versa. Is this a known issue, or is there something I am missing?
Thank you very much,
Gregory Durham
Does ESXi 6.7 U1 perform better on 4Kn drives than on 512e drives, and if so, by how much?
If you are using 4Kn drives in ESXi, are the virtual machines running 4Kn for the virtual disks? If the virtual disks are 4Kn, does this mean Windows Server 2008 R2 is not supported as a guest OS?
If you are using 512e drives in ESXi, are the virtual machines running 512n for the virtual disks?
There is no RDM (raw device mapping) support for 4Kn drives at this time but I don't think this is an issue at all.
FAQ: Support statement for 512e and 4K Native drives for VMware vSphere and vSAN (2091600)
https://kb.vmware.com/s/article/2091600
"In other words, even with 512e sectors, it is still preferable for the applications and the OS to perform 4KB aligned I/O for predicable performance. This is a general problem and not particular to any specific OS."
I have an Intel S2600STBR board with two 10GbE ports (SFP+ or 10GBASE-T). I can see them under Hardware > PCI Devices. I installed the VIB from here: VMware Compatibility Guide - I/O Device Search
ESXi 7.0 | i40en version 1.10.9.0 | 4.11 |
With "esxcli network nic list" it shows nothing in the list. Also #esxcli system module set --enabled=false --module=i40en and #esxcli system module set --enabled=true --module=i40e is not helping, without function.
After that I have installed Windows 10 on the S2600STBR Board and there it works - but don´t help really.
Has anyone any idea that would help? Tx.
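For anyone hitting the same thing, a few commands can help narrow down whether the VIB and module are actually in place on the host. This is only a diagnostic sketch, assuming the i40en async driver is the right one for these ports:

```shell
# Is the driver package actually installed?
esxcli software vib list | grep -i i40en

# Is the module present and enabled?
esxcli system module list | grep -i i40
esxcli system module get --module=i40en

# Does the device show up at the PCI level, and which NICs does ESXi see?
lspci | grep -i net
esxcfg-nics -l
```

If the VIB is installed but `esxcfg-nics -l` still shows nothing, comparing the card's PCI vendor/device IDs from `lspci` against the driver's supported-ID list is usually the next step.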
Hello! It's been a minute since I last posted here with my own topic. I now have a dedicated ESXi server in the works, and am planning to start using it 24/7 by the end of this year. Here are the specs for the hardware:
CSE :: HPE ProLiant DL580 G7
CPU :: 4x Intel Xeon E7-8870's (10c/20t each; 40c/80t total)
RAM :: 64GB (16x4GB) DDR3-1333 PC3-10600R ECC
STR :: 1x 250GB HITACHI HTS542525K9SA00 HDD (ESXi, VMware Linux Appliance, System ISOs) +
1x Western Digital WD Blue 3D NAND 500GB SATA SSD (vSphere Flash Read Cache) +
4x HGST NetApp X422A-R5 600GB 10K SAS 2.5" HDDs +
1x 500GB Seagate Video ST500VT003 HDD (Remote Development VM)
1x LSI SAS 9201-16e HBA SAS card (4-HDD DAS) +
1x Mini-SAS SFF-8088 to SATA Forward Breakout x4 cable +
4x HITACHI HUA722020ALA330 HDDs +
1x Rosewill RASA-11001 (4x 3.5in HDD cage)
PCIe :: 1x HP 512843-001/591196-001 System I/O board +
1x HP 588137-B21; 591205-001/591204-001 PCIe Riser board
GPU :: 1x nVIDIA GeForce GTX 1060 6GB +
2x nVIDIA Tesla K10's
SFX :: 1x Creative Sound Blaster Audigy Rx
NIC :: 1x SolarFlare SFN5322F 10Gbps +
1x HP 491838-001 (NC375I) 1Gbps
FAN :: 4x Arctic F9PWM 92mm case fans **
PSU :: 4x 1200W Server PSU's (HP 441830-001/438203-001)
PRP :: 1x Dell MS819 Wired Mouse *
Parts marked with * have not been purchased/sourced yet. Those marked with ** are already in-house, but require further planning/modification before they can be added to the server.
Here is the current software configuration plan for the server:
VMware Linux Appliance (for managing/monitoring ESXi 6.5)
Temporary/Test VM (for when I want to host a temporary role/application, without impacting mission-critical infrastructure)
Windows Server Datacentre (ActiveDirectory, SoftEther+, Technitium^, Server JRE, YaCy, hMailServer+, AOMEI PXE Boot^, etc.)
MacOS Mojave (Google Drive/MEGA sync*, PleX Server**, Handbrake, SVP4 Pro, iTunes, DaVinci Resolve, etc.)
Artix Linux (NextCloud+, OpenSSH, strongSwan+, Technitium^, VirtualGL^, ejabberd+, FreePBX+SMS+, CUPS^, F@H, sNTP^, etc.)
Windows 10 LTSB (front-end tasks - OBS Studio, Moonlight**, DesignWorks, ConsoleGameEmulators, modpack dev., etc.)
Remote Development (my build environment - Netbeans, MinGW/64, cmake, GNUWin, CUDA SDK, Mosh, nginx+SQLite+Php, etc.) ^
* Temporary task that will be replaced by a permanent, self-hosted solution
** Can benefit from port forwarding, but will be primarily tunnel-bound
^ Tunnel-bound (VPN/SSH) role - not port forwarded/exposed to the Internet
+ Active Directory enabled - Single Sign On (SSO)
Here is the current resource allocation plan for the server:
VMs marked with an * cannot be run at the same time; only one of them can ever run at any given moment. MacOS and Linux would have gotten a Radeon/FirePro (e.g., RX Vega 64) for best compatibility and stability, but market forces have prevented this. Windows 10 gets the Creative Audigy Rx. The MacOS and Linux VMs get whatever audio the Tesla K10's provide (either that or a software solution). Windows 10, Remote Development, and the Temp/Testing VM will be put to sleep (or powered off) until they are needed (Wake-on-LAN), since they don't host any essential services.
There are three other mirrors for this project, in case you're interested in following individual conversations from the other sites (in addition to this thread).
P.S. Out of all the sites that I've ever used, this forum has one of the best WYSIWYG editors I've seen in a while. Kudos to the devs!
Hello
Which ESXi version supports Intel 2nd-gen Optane Persistent Memory ('Barlow Pass')?
KB 67645 only mentions that the Optane persistent memory 100 series is supported on vSphere 6.7 EP10 and later versions.
What about the 200 series? Is it also supported on vSphere 6.7 EP10 and later versions?
Please refer to the information of Optane persistent memory 200 series at the following link.
Hi.
I had a licence key for version 6 in my account; I'd had it a long time. In January, however, I moved to Unraid, but in late July I decided I far prefer ESXi.
The licence is now gone from my account page.
So I downloaded and got a licence for 7. But 7 seems to have a bug when I try to install it on my Dell T7500: on the drive-selection screen, nothing is shown. Refreshing has no effect. I tried the Dell-specific image, with no effect. It feels like a bug because it does not even try to scan for drives.
So I thought I'd install 6, and 6 installs just fine. But I have no licence; there does not appear to be a free licence for 6 anymore, and I cannot install 7.
Is this it for me then?
My laptop has Visual Studio 2019. We also have an ESXi host with many VMs. I want to set up the most efficient workflow: program on my laptop, compile, and then just hit a button that pushes the newly compiled programs to 2 or more of the VMs on the ESXi host for network testing.
The options I can think of.
Browser / Datastore / Upload, then downloading the files on the VMs... really not as efficient as I'd like, especially with multiple files to push.
Building a script/program to SFTP all of them to the datastore, then also pull them from the datastore to the VMs. A lot more work, and it requires me to enable SSH; the ESXi warning messages make me wonder how vulnerable the host would be with SSH on all of the time.
I guess I could develop and test everything on my local machine, but I was hoping to use the VMs.
Is there a better way I'm not thinking of?
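One middle-ground approach (an assumption, not something from the original post): copy the build output straight to each VM over SSH from the dev box, skipping the datastore entirely. This sketch assumes the guests run an SSH server (e.g. Windows' optional OpenSSH Server feature); the host names, user, and paths are placeholders:

```shell
#!/bin/sh
# Push freshly built binaries straight to the test VMs over scp,
# bypassing the ESXi datastore. All names below are hypothetical.
BUILD_DIR="bin/Release"   # build output folder on the dev machine
DEST="C:/deploy"          # drop folder on each guest
HOSTS="vm1 vm2"           # guest VM host names or IPs

build_deploy_cmds() {
    # Emit one scp command per target host.
    for host in $HOSTS; do
        printf 'scp -r %s/. user@%s:%s\n' "$BUILD_DIR" "$host" "$DEST"
    done
}

# Print the commands; pipe to sh to actually run them:
#   build_deploy_cmds | sh
build_deploy_cmds
```

Wired into a Visual Studio post-build event (or a one-line batch file), this gets close to the "hit a button" workflow without leaving SSH enabled on the ESXi host itself.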
I'm experimenting with comms between guests.
I've created a new virtual network, and a new virtual switch.
I have two Win 2012r2 guests. I have added a VMXNET3 adapter to both, and after installing VMware tools the OS's have loaded device drivers for them.
I have configured them to have fixed IPs of 192.168.0.1 and 192.168.0.2 respectively.
When I load up the Network page, I see this
so it looks like the VMXNET3 adapters should be talking to each other, but... they can't ping each other.
I have not set a DNS on either machine, but I assume this is not necessary as I'm using their IP numbers, not hostnames.
The physical adapter (vmnic1) isn't connected to anything but again I assume this isn't necessary as I'm not trying to connect to anything external.
What am I missing here?
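One common culprit worth checking (an assumption, not confirmed by the post): Windows Server blocks inbound ICMP echo requests by default, so two otherwise correctly configured guests will fail to ping each other even when the virtual switch is fine. Allowing ping on each guest from an elevated prompt:

```
:: Allow inbound ICMPv4 echo requests through Windows Firewall (run as admin)
netsh advfirewall firewall add rule name="Allow ICMPv4-In" protocol=icmpv4:8,any dir=in action=allow
```

Temporarily disabling the firewall on both guests is a quick way to confirm whether this is the cause before adding the rule permanently.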
We have a 10Gb card that is not working. It is installed in an HPE DL380 G5 running VMware ESXi 5.5. In VMware we can see the card configured, but there is no link on the switch. The fibre cable has light, and so do the two GBICs. What could be the problem?
Thank you.
I have been using CentOS 7.3, bundled with my application in the form of an .ova file that gets deployed via the VMware ESXi Host Client. The .ovf file has an OperatingSystemSection with ovf:id="107" and ovf:version="7".
<OperatingSystemSection ovf:id="107" ovf:version="7" vmw:osType="centos7_64Guest">
<Info>The kind of installed guest operating system</Info>
<Description>Centos 7 (64-bit)</Description>
</OperatingSystemSection>
After the deployment of the ova, the ESXi client detects the Guest OS to be Other Linux (64-bit). I expect it to show CentOS 7 (64-bit).
I tried modifying the .ovf file by removing vmw:osType="centos7_64Guest", but it didn't help either.
I also tried changing to ovf:id="80" with vmw:osType="centos7_64Guest"; it started showing RHEL 7 (64-bit). In a nutshell, I tried several permutations but couldn't get the right guest OS version to appear in the ESXi client.
I am not really sure if it's a bug or something, but I can't get it working at all. Any help would be appreciated. I tried with three ESXi versions (6.5, 6.7, and 7.0) without any luck.
Could someone please suggest what I am doing wrong?
Thanks,
Neeraj
Hi All,
We have 40 VMs running on two ESXi 5.5 hosts. We are planning to migrate these VMs to a Dell hyper-converged VxRail (35 TB), and we will build a new vCenter 6.5. I just wanted to know what points need to be considered before migrating from these hosts to ESXi 6.5, and how the migration can be done.
The ESXi hosts are in vCenter 5.5, and the hardware is HP DL380 Gen8.
Thanks
V
Experts,
I observed this issue on ESXi 6.7 while deploying an OVA (running an app on top of CentOS 7.3). The "Ready to complete" section doesn't seem to fetch the right details (see the screenshot below); it looks like the GUI is just showing placeholder names instead of the correct values.
This is not reproducible in every installation attempt, but hitting it randomly is very annoying.
In lucky installation attempts, the window does show the right values (see the attached Correct_completion.jpeg screenshot).
Please let me know what could have gone wrong. Should you need any more information, I would be glad to provide it.
Thanks in advance.
Hi all,
Just a quick post as a work in progress...
Now I know the first reply to this post will be something like, "Well it's unsupported hardware...". And I want to thank you in advance for all the help.
Ok, enough snarking for today.
I've noticed GPU passthrough issues with ESXI 6.7 when I had things working with 6.5+, and it seems others are seeing this as well.
I tried a couple of different hosts with different NVIDIA cards (1030, 1080 Ti) and got the same results: after the card initialized, it would shut down/crash. From that point forward, the dreaded error code 43 showed in the device properties. I've read many posts saying code 43 was an intentional disable in software when the driver detected it was running in a VM. So it got me thinking...
I always made sure that I had "hypervisor.cpuid.v0=FALSE" in my config before ever passing through the video card. But something changed in 6.7, or maybe Nvidia is looking for something else?
So I tried this:
Build a Windows10 VM (v1809)
don't install vmware tools
install Chocolatey (https://chocolatey.org) for easier install of TeamViewer and NVIDIA drivers
install teamviewer via chocolatey (so I can connect remotely) *also I didn't have usb hardware that I could passthrough
Disable svga(vmware vga) adapter "svga.present=FALSE" in vm advanced config
set "hypervisor.cpuid.v0=FALSE"
Add 2 PCI devices: 1080TI, Audio Device
boot while simultaneously crossing fingers.
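The VM-side settings from the steps above boil down to a few lines of advanced config. A sketch of the relevant .vmx entries, where the two pciPassthru entries stand in for the GPU and its audio function (the actual device mappings vary per host and are set through the UI):

```
svga.present = "FALSE"
hypervisor.cpuid.v0 = "FALSE"
pciPassthru0.present = "TRUE"
pciPassthru1.present = "TRUE"
```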
OK, so the start was not pretty: Windows detects the video card, you need to reboot after the initial driver install, and without VMware Tools installed it's a cringe-worthy moment.
But after the reboot, the driver took and the output looked stable.
After that I installed the latest NVIDIA drivers via Chocolatey (geforce-game-ready-driver), and all seems OK.
Quick testing with Cinebench and Unigine Superposition seems to work fine with a 1080p medium windowed score of 18390.
So, what was it? svga present? vmware tools?
If I had to guess it might be svga, because I saw a similar issue with Ubuntu 18.10. I was having problems getting the nvidia driver to work in Ubuntu so on a fluke I disabled the svga device and voila! Ubuntu 18.10 on a GPU!
I may install vmware tools and see if that makes a difference.
Anyway, I wanted to throw this out there for all the folks suffering with this issue. If it's Nvidia's doing, then I suppose it'll just be a matter of time until they find another way.
-m
Hello,
I am really stuck here. I have ESXi 6.7 installed (without vCenter) on a Dell server with local storage only. It worked fine until last week. Due to a RAID configuration reset, the datastores have vanished and the VMs show as invalid.
I tried to remount with the snapshot from the CLI (e.g., esxcli storage vmfs snapshot mount -l) - everything returned to normal, but not after a reboot.
I tried to resignature the datastores (e.g., esxcli storage vmfs snapshot resignature --volume-label) - after the resignature I could see the datastores after a reboot, but the VMs were still invalid.
I tried to re-register each VM by right-clicking the .vmx file from the web interface - this made the VMs active, but after a reboot they were invalid again.
I tried to unregister the VMs (with the web interface and with the CLI - e.g., vim-cmd /vmsvc/unregister <id>), but they all come back to haunt me after a reboot.
I tried editing the vmInventory.xml file with vi - removed all entries between the <ConfigRoot> and </ConfigRoot> tags - but STILL, everything returned after a reboot!
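One possible explanation for edits not surviving a reboot (an assumption, not something stated in the post): ESXi runs its configuration, including the VM inventory, from a ramdisk and only writes it back to the boot device periodically, so changes made shortly before a reboot can be silently lost. Forcing the configuration save immediately after making changes might make them stick:

```shell
# Persist the current ESXi configuration (including vmInventory.xml edits)
# to the boot device right away, instead of waiting for the periodic sync.
/sbin/auto-backup.sh
```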
I don't know what to do next.... any help will be much appreciated.
Thank you!
Hello, I am new to ESXi so I’m trying to understand my environment and the best practices when building VM for my company. I have 8 total Nodes in the cluster with the below in each:
CPU: 28 CPUs x 2.59 GHz
Processor Type: Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz
Sockets: 2
Cores per Socket: 14
Logical Processors: 56
So the most I can give a heavily used VM is 14 cores and 14 CPUs? They asked me to build a DB VM with 56 CPUs. Thanks in advance.
Hi,
I've been having issues with USB passthrough on my ESXi 7.0b setup. The USB devices I have connected to the USB controller are the following 3 devices:
USB 3.0 Hub
USB 56k conexant modem (CX93010 ACF)
Corsair Commander Pro (I use this to control the fans and get temperature reading of the system)
My hardware setup is consumer grade hardware (I know this is not a typical ESXi setup)
Core i5-8600
32GB RAM
Gigabyte Z370N Wifi (ITX Motherboard)
WD SN720 1TB NVMe Drive
Samsung 1TB SATA SSD
Seagate 240GB SATA SSD
Intel i350-T4 NIC
The one thing this setup cannot do is PCI passthrough for the ONLY USB controller on the motherboard, so my only option is the USB device passthrough that ESXi handles.
The system will PSOD when I try to check the temperatures on the VM that hosts all the USB devices listed above. The PSODs are infrequent, and I can't trigger one on demand. Below is the screenshot from the PSOD, which leads me to believe it's one of the USB devices, since the server ran solidly for a month when the USB devices weren't being passed through.
The one thing I am trying to do at the moment is turn off all power management on USB devices and see if that will help.
If anyone can confirm or help me decrypt the PSOD to help me narrow down why I am getting a PSOD that would be appreciated!
I apologize if this has been asked before, but does anyone know if it's possible to connect 2 ESXi hosts with a LAN cable to increase local speeds?
Hello,
I have a question related to how ESXi checks hardware health status. For example, I have a Synergy 660 Gen10 and Huawei E9000 CH121 V3, for Gen10 I see the following:
WBEM and sfcbd are disabled, but there is still sensor information for this ESXi host.
For the CH121 V3:
For the CH121 V3, WBEM and sfcbd are enabled, and there is also sensor information. Can someone explain the difference between the CH121 V3 and the Gen10? Does the Gen10 use iLO and IPMI for hardware monitoring?
I downloaded the Windows 10 1809 and Server 2019 ISOs the day they became available so I can start working on my templates.
I built the templates with EFI, paravirtual (PVSCSI) for the C drive, and a VMXNET3 adapter. I've been using this combo for other versions of Windows 10/8/7 and Windows Server 2008 R2/2012 R2/2016 without issue.
So far Windows 2019 (with desktop experience) seems to be ok at least for a basic vm and guest customization. Haven't tried anything else yet.
Windows 10 1809, on the other hand, is very, very slow to reboot after the initial install, or even just rebooting after making some changes post-install. After installing the OS, it took 10-15 minutes for the initial Windows setup steps (user, security settings, etc.) to appear. I tried a VM set to BIOS and it seemed faster, but was still quite slow. Server 2019 and other versions of Windows 10 have no issue.
The hosts are esxi 6.5 and 6.7.
I haven't had a chance to try every combo of BIOS/EFI/vmxnet3/e1000e/paravirtual/LSI SAS to see if one is the cause of the issue, but I was wondering if anyone else has noticed any issues, or if it's just me?
Thanks