Channel: VMware Communities : Discussion List - ESXi

Nvidia Tesla K10

Hello,

 

Does the Tesla K10 work in ESXi 6.7, like the GRID K2 does?


Error when I try to modify /var/spool/cron/crontabs/root file...

Hello everybody,

I'm having some trouble configuring ghettoVCB for automatic backups via cron on my VMware ESXi 6.5 machine...

I've configured ghettoVCB correctly and I'm able to back up my virtual machines by starting ghettoVCB from the CLI, but I'm not able to modify the file /var/spool/cron/crontabs/root in order to add the line that starts the shell command: I've run chmod +w to allow modifications, but when I try to save, the system answers "Operation not permitted"...

 

Has anyone else encountered this problem?

 

Let me know; maybe I'm doing something wrong, but I can't figure out what...

 

Best

 

Max
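
[Editor's note: for reference, the commonly cited way around the "Operation not permitted" error is that crond holds the crontab open and the file is regenerated at boot, so crond has to be stopped before editing and restarted afterwards. A minimal sketch, assuming default ESXi paths and an example datastore/schedule; adjust the script path and cron times to your setup:]

# Stop crond so /var/spool/cron/crontabs/root can be modified (PID file path per stock ESXi)
kill $(cat /var/run/crond.pid)
# Append the ghettoVCB job (example: run nightly at 01:00; the paths here are placeholders)
echo "0 1 * * * /vmfs/volumes/datastore1/ghettoVCB/ghettoVCB.sh -a > /tmp/ghettoVCB.log 2>&1" >> /var/spool/cron/crontabs/root
# Restart crond
/usr/lib/vmware/busybox/bin/busybox crond

[Because the crontab is reset at reboot, the same lines are usually also added to /etc/rc.local.d/local.sh so the schedule persists.]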

ESXi NTP Servers?

Curious to know how everyone sets up NTP on their ESXi hosts and if my setup is acceptable.

 

I have the NTP service running on all hosts and pointing to NTP server 1.pool.ntp.org.

 

For my DCs only, I have checked "Synchronize guest time with host" in the VM settings.
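
[Editor's note: that is a common setup. For reference, a minimal sketch of configuring the same thing from the ESXi shell, assuming a single pool server; the service startup policy itself can be set to "Start and stop with host" in the host client:]

# Back up and extend the NTP configuration
cp /etc/ntp.conf /etc/ntp.conf.bak
echo "server 1.pool.ntp.org" >> /etc/ntp.conf
# Restart the NTP daemon so the new server is picked up
/etc/init.d/ntpd restart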

Socket Error 10055: An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full.

We are operating some guest OSes (Windows Server 2012 R2 Standard) on ESXi 5.5 U1, with about 150 .NET applications running on each guest OS.
However, the following error occurred intermittently when the .NET applications communicate with other clients.

 

"An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full."

 

So we took the actions below, but all of them failed.

 

a. TCP parameter modification under [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters] (scripted as a sketch after this list):
TcpTimedWaitDelay = 30
MaxUserPort = 65534
TcpNumConnections = 16777214
TcpMaxDataRetransmissions = 5

b. Disabling IPv6

c. Increasing OS memory: 8 GB → 16 GB

d. Changing the NIC type: E1000 → VMXNET3
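
[Editor's note: for reference, the registry changes in item (a) can be scripted; a minimal sketch using reg add, values in decimal, run from an elevated prompt inside the guest, reboot required for the TCP parameters to take effect:]

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v MaxUserPort /t REG_DWORD /d 65534 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpNumConnections /t REG_DWORD /d 16777214 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpMaxDataRetransmissions /t REG_DWORD /d 5 /f

[Note that on Server 2012 R2 the ephemeral port range is normally controlled with "netsh int ipv4 set dynamicport tcp" rather than MaxUserPort, which may be one reason that change had no effect.]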

 

 

We would appreciate your help.

Unable to access ESXi web interface after updating to ESXi 6.7.0 Update 3 (Build 14320388)

I have 4 servers that I have updated to ESXi 6.7.0 Update 3 (Build 14320388), and on all 4 servers the web interface now shows a timeout error when I attempt to log in:

 

I can access the page with no problems.  I start by logging in:

[Screenshot: 1.png]

I then get an error saying "Connection to ESXi host timed out:"

[Screenshot: 2.png]

If I try to refresh the page after this, I just get the VMware logo:

[Screenshot: 3.png]

Sometimes the page will load, but I see this in the title of the tab and I am not able to do anything inside the page:

[Screenshot: 4.png]

 

I am experiencing this problem on two Dell R620 servers that are hosted in a data center and two Dell R420 servers that I have at home.  They are not on the same network, yet they are both exhibiting the same exact issue.  I have tried Chrome, Chrome Incognito, Microsoft Edge, and Opera.  If I reboot the servers, I am able to temporarily access the web interface, but I eventually start getting these timeouts again.  I am not able to SSH to the servers because SSH gets disabled on each reboot.  I have also found that if I connect to the management console via iDRAC, I can press F2 and get prompted for credentials, but after accepting the credentials, it just hangs and is no longer responsive to keyboard commands:

[Screenshot: 5.png]

Does anybody have any suggestions on how to fix this?  The fact that it is happening on 4 different servers on 2 different networks leads me to believe there is a larger problem here and not something localized to my installs.
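
[Editor's note: not a fix for the root cause, but two things commonly suggested while troubleshooting this on 6.7 U3 are keeping SSH enabled across reboots and restarting the management agents when the UI hangs. A minimal sketch from the ESXi shell (the agent restart is also available from DCUI > Troubleshooting Options):]

# Set the SSH policy so it stays enabled after a reboot, and start it now
vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh
# Restart the host and vCenter management agents when the web UI stops responding
/etc/init.d/hostd restart
/etc/init.d/vpxa restart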

Can't add 10GbE physical NIC

I've been having a dog of a time trying to add a dual 10GbE NIC to ESXi.  I've tried installing the drivers, and everything appears to install, but ESXi doesn't seem to be loading the driver for some reason.  This is an Intel NUC NUC7i5BNK host with an OWC Mercury Helios 3 Thunderbolt 3 PCIe expansion chassis housing the NIC.  The NIC and the expansion chassis are known good on other OSes, so everything points to a configuration issue in ESXi or in the BIOS of the NUC.  I've enabled Thunderbolt at boot, so I think that rules out the BIOS.

 

[root@esxi1:~] lspci -v | grep -A1 -i ethernet

0000:00:1f.6 Ethernet controller Network controller: Intel Corporation Ethernet Connection (4) I219-V [vmnic0]

         Class 0200: 8086:15d8

--

0000:06:00.0 Ethernet controller Network controller: Intel(R) 82599 10 Gigabit Dual Port Network Connection

         Class 0200: 8086:10fb

--

0000:06:00.1 Ethernet controller Network controller: Intel(R) 82599 10 Gigabit Dual Port Network Connection

         Class 0200: 8086:10fb

[root@esxi1:~] vmkload_mod -l |grep drivername

[root@esxi1:~] vmkload_mod -l |grep ixgben

[root@esxi1:~] esxcfg-nics -l

Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description

vmnic0  0000:00:1f.6 ne1000      Up   1000Mbps   Full   94:c6:91:15:0b:e4 9000   Intel Corporation Ethernet Connection (4) I219-V

[root@esxi1:~] esxcli software vib install -d "/vmfs/volumes/datastore1/VMW-ESX-6.7.0-ixgben-1.7.10-offline_bundle-10105563.zip"

Installation Result

   Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.

   Reboot Required: true

   VIBs Installed: INT_bootbank_ixgben_1.7.10-1OEM.670.0.0.8169922

   VIBs Removed: INT_bootbank_ixgben_1.7.1-1OEM.670.0.0.7535516

   VIBs Skipped:

 

Still no joy after reboot

 

[root@esxi1:~] esxcli hardware pci list

0000:06:00.0

   Address: 0000:06:00.0

   Segment: 0x0000

   Bus: 0x06

   Slot: 0x00

   Function: 0x0

   VMkernel Name:

   Vendor Name: Intel Corporation

   Device Name: 82599EB 10-Gigabit SFI/SFP+ Network Connection

   Configured Owner: VMkernel

   Current Owner: VMkernel

   Vendor ID: 0x8086

   Device ID: 0x10fb

   SubVendor ID: 0x8086

   SubDevice ID: 0x7a11

   Device Class: 0x0200

   Device Class Name: Ethernet controller

   Programming Interface: 0x00

   Revision ID: 0x01

   Interrupt Line: 0xff

   IRQ: 255

   Interrupt Vector: 0x00

   PCI Pin: 0x00

   Spawned Bus: 0x00

   Flags: 0x3219

   Module ID: -1

   Module Name: None

   Chassis: 0

   Physical Slot: 4294967295

   Slot Description:

   Passthru Capable: true

   Parent Device: PCI 0:5:1:0

   Dependent Device: PCI 0:6:0:0

   Reset Method: Function reset

   FPT Sharable: true

 

 

0000:06:00.1

   Address: 0000:06:00.1

   Segment: 0x0000

   Bus: 0x06

   Slot: 0x00

   Function: 0x1

   VMkernel Name:

   Vendor Name: Intel Corporation

   Device Name: 82599EB 10-Gigabit SFI/SFP+ Network Connection

   Configured Owner: VMkernel

   Current Owner: VMkernel

   Vendor ID: 0x8086

   Device ID: 0x10fb

   SubVendor ID: 0x8086

   SubDevice ID: 0x7a11

   Device Class: 0x0200

   Device Class Name: Ethernet controller

   Programming Interface: 0x00

   Revision ID: 0x01

   Interrupt Line: 0xff

   IRQ: 255

   Interrupt Vector: 0x00

   PCI Pin: 0x01

   Spawned Bus: 0x00

   Flags: 0x3219

   Module ID: -1

   Module Name: None

   Chassis: 0

   Physical Slot: 4294967295

   Slot Description:

   Passthru Capable: true

   Parent Device: PCI 0:5:1:0

   Dependent Device: PCI 0:6:0:1

   Reset Method: Function reset

   FPT Sharable: true

 

[root@esxi1:~] esxcfg-nics -l

Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description

vmnic0  0000:00:1f.6 ne1000      Up   1000Mbps   Full   94:c6:91:15:0b:e4 9000   Intel Corporation Ethernet Connection (4) I219-V
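
[Editor's note: for what it's worth, a few checks that narrow down whether the module is enabled and whether the VMkernel even tried to claim the device; a minimal sketch, using the module name from the installed VIB above:]

# Confirm the ixgben module is present and enabled
esxcli system module list | grep ixgben
# Enable it explicitly if it shows as disabled, then reboot
esxcli system module set --module=ixgben --enabled=true
# After reboot, look for claim/attach errors from the driver
grep -i ixgben /var/log/vmkernel.log | tail -n 20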

numa.autosize.vcpu.maxPerVirtualNode lack of info

I have a VM with 64 vCPUs and 512 GB RAM; it's a massive DB. The ESXi 6.5 hosts are HP DL560s with 4 sockets (22 cores each, plus HT) and 1.5 TB RAM. Following Frank Denneman's book vSphere 6.5 Host Resources Deep Dive, I aimed at keeping the VPD on as few physical sockets as possible. By disabling the CPU Hot Add feature and setting preferHT to TRUE, I increased performance quite a lot, and I expected to see the cores spread across two physical sockets. However, while VMware KB 2003582 explains how to implement the preferHT setting, it does not mention something Frank Denneman does say in his book:

Quote:

“Please remember to adjust the numa.autosize.vcpu.maxPerVirtualNode setting in the VM if it is already been powered-on once. This setting overrides the numa.vcpu.preferHT=TRUE setting”

End quote

 

I read the above after I made the initial changes to the VM, and I have now noticed that its numa.autosize.vcpu.maxPerVirtualNode value is 11. According to Virtual NUMA Controls I should get 6 virtual nodes by dividing 64 by 11, but I see the VM has 7. This is another thing I don't understand.

Following which criteria do I adjust the numa.autosize.vcpu.maxPerVirtualNode value?

Shall I set it to 44, as that is the max number of logical cores in a physical socket? Or shall I disable it and let the system make the best decision? If so, how do I disable it? This is the current layout of the CPU resources of the VM:

 

Although performance has improved, I'm not happy with the distribution of the cores, especially considering that homeNode 3 is not used at all.

 

 

So, to recap, my questions to the experienced admins are the following:

 

1. Following which criteria do I adjust the numa.autosize.vcpu.maxPerVirtualNode value so that the preferHT setting is enforced correctly?

2. I know that in 6.5 the coresPerSocket setting was decoupled from the socket setting, so it does not really matter anymore if you set 12 sockets x 1 core or 1 socket x 12 cores (unless license restrictions are in place). However, in Frank Denneman's book I read:

quote

"If preferHT is used, we recommend aligning the cores per socket to the physical CPU package layout. This leverages the OS and application LLC optimizations the most "

end quote

So, in this case, is the use of coresPerSocket effective? Should I then set 2 sockets x 32 coresPerSocket, an option that frankly I haven't seen available in the VM Settings window?

3. Why does the VM have 7 virtual nodes instead of 6?
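
[Editor's note: on question 1, a hypothetical sketch of the two advanced settings as they might look for this VM. The value 32 is only an example aimed at two virtual nodes of 32 vCPUs each on 44-thread sockets, not a VMware recommendation; the settings go into the VM's advanced configuration while it is powered off:]

numa.vcpu.preferHT = "TRUE"
numa.autosize.vcpu.maxPerVirtualNode = "32"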

Cannot Launch vSphere Client (HTML) - This site can't be reached

I tried both Chrome and Firefox and got the same results, even on a different PC. Clearing the browsing history didn't make any difference. It had been working since the first day of setup.


vSphere HA agent on this host cannot reach some of the management network addresses

     vSphere HA agent on this host cannot reach some of the management network addresses of other hosts, and HA may not be able to restart VMs if a host failure occurs

How do I resolve this issue?
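
[Editor's note: this warning typically means the FDM agent cannot reach the management VMkernel addresses of the other cluster hosts. A minimal first check from the affected host's shell, with the interface name and peer address as placeholders, followed by "Reconfigure for vSphere HA" on the host once connectivity is confirmed:]

# List the VMkernel interfaces and their IPv4 addresses
esxcli network ip interface ipv4 get
# Test reachability of another host's management address through the management vmknic
vmkping -I vmk0 192.168.1.12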

ESX 6.7 - "Cannot Synchronize Host"

One of our ESXi hosts running version 6.7 keeps having a weekly issue where it randomly loses contact with vSphere. The guest VMs are a basic domain controller, a SQL server, and a couple of Citrix VDA servers, as well as the VMware vCenter Server Appliance.

 

It starts out with the domain controller's CPU spiking and alerting in a red status, then shows each VM disconnecting one by one, after which the host becomes unreachable, with errors stating "cannot synchronize host" and "Host [hostname] is not responding." I'm sure there's some issue with the domain controller randomly having its CPU spike, but that shouldn't cause the rest of the VMs and the entire host to become unreachable.

 

Typically the issue resolves itself after a few hours, as it tends to happen around 3 AM. The uptime on the host never indicates that it rebooted or shut down.

 

Has anyone else run into this issue? Any assistance is appreciated.
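
[Editor's note: if it is not already being done, pulling the management agent logs for the 3 AM window once the host comes back usually shows whether hostd or vpxa stalled; a minimal sketch, log paths per stock ESXi:]

# Review what the host and vCenter agents logged around the time of the disconnect
tail -n 200 /var/log/hostd.log
tail -n 200 /var/log/vpxa.log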

Mac Pro compatibility with installing ESXi 5.0

Starting a thread to discuss ESXi on Mac Pro hardware.

 

MacPro1,1
MacPro2,1
The first two generations of Mac Pros boot to a black screen prompting to pick CD boot type 1 or 2; however, no keyboard input is accepted. I suspect this may be solvable with some hacking.
MacPro3,1
MacPro4,1
MacPro5,1
The three most recent generations of Mac Pros install fine to standard SATA drives.
I believe that none of Apple's hardware RAID cards are workable for booting from. The most common Fibre Channel cards Apple shipped with Mac Pros, from LSI, also do not work. The Xraid still works if you connect it to a supported FC adapter, however.

 

If you have other Mac + ESXi questions, please start your own thread. Let's keep this one specific to installing ESXi 5.0 on Mac Pro hardware.

 

Blake

Upgrade from 6.5U3 to 6.7U3 does not offer preserve datastore option

I am attempting to upgrade an ESXi server from 6.5U3 to 6.7U3, and I am presented with an upgrade option, but not the familiar "preserve VMFS datastore" option.  I see two options:

Upgrade

Install

 

I am uncertain if the upgrade option will preserve the VMFS5 datastore.

 

bootbank and altbootbank contain the same files

 

Anyone ever run into this before?
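
[Editor's note: the installer's "Upgrade" option is generally the one that keeps an existing VMFS datastore, but the datastore layout can at least be verified from the shell before committing; a minimal sketch, with "datastore1" as a placeholder name:]

# List mounted filesystems and confirm the VMFS5 datastore and its UUID
esxcli storage filesystem list
# Show the volume details for the datastore that must survive the upgrade
vmkfstools -P /vmfs/volumes/datastore1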

Import vm from USB drive plugged to ESXi host

Hello,

Can I plug a USB drive directly into my ESXi host and import a VM from there? This is ESXi 6.5.
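
[Editor's note: in principle yes, but ESXi 6.5 claims USB devices for passthrough by default, so the USB arbitrator has to be stopped before the host can use the drive as storage, and the drive needs a filesystem ESXi can read. A minimal sketch, with the datastore and VM names as placeholders:]

# Stop the USB arbitrator so the host (rather than VM passthrough) owns the USB drive
/etc/init.d/usbarbitrator stop
# Once the USB datastore is visible, register the VM that was copied onto it
vim-cmd solo/registervm /vmfs/volumes/usb-datastore/myvm/myvm.vmx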

AutoStart: cannot get to work!

I'm right-clicking on my VM to enable autostart, but when the HV (hypervisor) starts... the VM will not. I only have a single machine on this instance of 6.5.

 

Where to begin?

 

Thx.

Shawn

[Screenshot: 2019-12-05_20-45-46.jpeg]
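
[Editor's note: if the host client checkbox isn't sticking, autostart can also be checked and set from the shell; a minimal sketch. The VM ID comes from vmsvc/getallvms, the ID and delays below are examples, and the update_autostartentry argument order should be double-checked for your build:]

# Make sure autostart is enabled at the host level
vim-cmd hostsvc/autostartmanager/enable_autostart true
# Find the VM ID, then add it to the autostart list
vim-cmd vmsvc/getallvms
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 "PowerOn" 120 1 "guestShutdown" 120 "systemDefault"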

Network Adapters and VLAN (optional) greyed out in DCUI

I am studying for my VCP exam and I am using a nested setup in Workstation. Suddenly I don't have access to the network adapters in the DCUI. I have migrated both NICs used for management to a vmkernel management port group in a distributed switch.

Does anyone know what is going on here? Google gave me this hint: Configure Management Network Greyed out in DCUI

But in my case the hosts are still connected to vCenter and are working fine; no issues anywhere.

 

[Screenshot: 2015-02-11 21_42_46-vlabdkcphesxi01 - VMware Workstation.png]
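
[Editor's note: that hint matches the behaviour described: once the management VMkernel port lives on a vSphere Distributed Switch, the DCUI network options are greyed out because the DCUI can only edit standard switch configuration. A quick way to confirm from the shell which switch backs the management interface (a minimal sketch):]

# Shows each VMkernel interface and the portgroup/switch (including VDS name) it is attached to
esxcli network ip interface list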


ESXi 6.7 Qfle3 driver and firmware

Hey Guys,

 

We have upgraded from ESXi 6.5 to 6.7U1 and experienced a PSOD a few days later on one host. VMware support suggested updating the qfle3 driver to 1.0.86.0 and the adapter firmware to 7.15.x.

 

How do I upgrade the firmware to 7.15.x on a Dell PowerEdge R730 (Broadcom Corporation QLogic 57810 10 Gigabit Ethernet Adapter)?

 

Thanks
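
[Editor's note: the firmware itself is not shipped in the VMware driver VIB; on a PowerEdge it is normally applied with Dell's network firmware update package (via iDRAC/Lifecycle Controller or a bootable update ISO), while the driver side is handled in ESXi. A minimal sketch for checking what is currently installed before and after, with the vmnic number as a placeholder:]

# Current qfle3 driver VIB version
esxcli software vib list | grep qfle3
# Driver and firmware versions as seen by ESXi for the 57810 port
esxcli network nic get -n vmnic0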

Do we need the SLP Service on Port 427

Hi,

 

Our penetration test team has flagged a running SLP service on port 427 TCP/UDP on all of our ESXi 5.0 hosts (HP DL380 G6-G8).

Does anyone know if this service is needed on a standard ESXi host connected to vCenter (maybe for the hardware tab)?

We are NOT running any third-party tools to monitor the hosts (e.g. HP agents), but we have installed the CIM provider for the vCenter integration.

 

Just closing "CIM SLP" via firewall rules did not cause any problems right away as far as I can see, but I want to be really sure.

 

Any help would be appreciated.

 

Chris
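
[Editor's note: for reference, the guidance VMware later published for disabling the SLP service on newer ESXi releases is roughly the sketch below; the service and ruleset names are taken from those newer builds and may differ on 5.0, so treat this as an assumption to verify:]

# Stop the SLP daemon and keep it from starting at boot
/etc/init.d/slpd stop
chkconfig slpd off
# Block the corresponding firewall ruleset (name assumed from newer builds)
esxcli network firewall ruleset set --ruleset-id=CIMSLP --enabled=false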

ESXi network firmware STORM vs MFW

Just logged into a newly built 6.7 host to verify the firmware versions after running HPE SPP to update all the device firmware... why does server management keep getting more complicated when it should be as easy as taking candy from a baby? 

 

Here is what I see when I run 'esxcli network nic get -n vmnic0':   Firmware Version: Storm: 7.13.11.0 MFW: 7.15.56

 

In older versions of VMware, when you ran this command, this is what you would see in the output for the firmware of the NIC:  Firmware Version: 11.1.183.23

 

OK, so at this point I think you understand what my question is... what the hey is STORM vs MFW, and which of these values am I concerned with when looking to verify compatibility of my NIC? The output in the VMware HCL / IO devices section doesn't show Storm and MFW; it simply reports a single version that should be in place.

 

Can someone please unravel this? I can't find anything published anywhere about it.

 

Thanks!

ESXi 6.7 U2 Receive Length errors

While monitoring one of our hosts we noticed network interface errors. Checked cabling but no problems found. Checked NIC stats and the errors are 'receive length errors' only. Tried updating the BIOS and NIC driver, to no avail.

 

Stats of the NIC after rebooting, receive length errors are slowly increasing:

[root@rdcvms01:~] esxcli network nic stats get -n vmnic0

NIC statistics for vmnic0

   Packets received: 177327

   Packets sent: 494927

   Bytes received: 30356313

   Bytes sent: 662112740

   Receive packets dropped: 0

   Transmit packets dropped: 0

   Multicast packets received: 5267

   Broadcast packets received: 8459

   Multicast packets sent: 503

   Broadcast packets sent: 1274

   Total receive errors: 1877

  Receive length errors: 1877

   Receive over errors: 0

   Receive CRC errors: 0

   Receive frame errors: 0

   Receive FIFO errors: 0

   Receive missed errors: 0

   Total transmit errors: 0

   Transmit aborted errors: 0

   Transmit carrier errors: 0

   Transmit FIFO errors: 0

   Transmit heartbeat errors: 0

   Transmit window errors: 0

 

 

Host: Proliant ML350 Gen10, latest SPP

HPE Customized Image ESXi 6.7.0 Update 2 version 670.U2.10.4.1

Single NIC in use at present.

Updated the Intel NIC driver to the latest version (Download VMware vSphere); no change in status, errors still increasing.

Switch is a HPE 1920s, no errors seen on any of the interfaces.

All interfaces and connections are set to standard MTU size (1500).

 

Has anyone seen this before?

 

P.S. This person has the same issue and hasn't found a solution either:

Receive Length Errors on Host NIC : vmware

 

Thanks for any tips!
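
[Editor's note: one more data point worth collecting is the exact driver and firmware combination ESXi reports for the affected uplink, which can then be cross-checked against the HCL entry for the Intel adapter in use; a minimal sketch:]

# Reports driver name/version and firmware version for the uplink showing the errors
esxcli network nic get -n vmnic0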

dead I/O on igb-nic (ESXi 6.7)

Hi,

 

I'm running a homelab with ESXi 6.7 (13006603). I have three NICs in my host: two are onboard and one is an Intel ET 82576 dual-port PCIe card. All NICs are assigned to the same vSwitch; actually only one is connected to the (physical) switch at the moment.

When I'm using one of the 82576 NICs and put heavy load on it (like backing up VMs via Nakivo B&R), the NIC stops working after a while and is dead/not responding anymore. Only a reboot of the host or (much easier) physically reconnecting the NIC (cable out, cable in) solves the problem.

 

I guessed there was a driver issue, so I updated to the latest driver from Intel:

 

 

[root@esxi:~] /usr/sbin/esxcfg-nics -l

Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description

vmnic0  0000:04:00.0 ne1000      Down 0Mbps      Half   00:25:90:a7:65:dc 1500   Intel Corporation 82574L Gigabit Network Connection

vmnic1  0000:00:19.0 ne1000      Up   1000Mbps   Full   00:25:90:a7:65:dd 1500   Intel Corporation 82579LM Gigabit Network Connection

vmnic2  0000:01:00.0 igb         Down 0Mbps      Half   90:e2:ba:1e:4d:c6 1500   Intel Corporation 82576 Gigabit Network Connection

vmnic3  0000:01:00.1 igb         Down 0Mbps      Half   90:e2:ba:1e:4d:c7 1500   Intel Corporation 82576 Gigabit Network Connection

[root@esxi:~] esxcli software vib list|grep igb

net-igb                        5.2.5-1OEM.550.0.0.1331820            Intel   VMwareCertified   2019-06-16

igbn                           0.1.1.0-4vmw.670.2.48.13006603        VMW     VMwareCertified   2019-06-07

 

Unfortunately this didn't solve the problem.

 

However... this behaviour doesn't occur when I'm using one of the NICs that use the ne1000 driver.

 

Any idea how to solve the issue?

(... or at least dig down to its root?)

 

Thanks a lot in advance.

 

Regards

Chris

 

PS: I found another thread which might be related to my problem: Stopping I/O on vmnic0. Same system behaviour, same driver.
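
[Editor's note: one thing worth checking with the VIB list above is that both the legacy net-igb driver (module "igb") and the native igbn module are installed, and only one of them should end up claiming the 82576 ports. A minimal sketch to see which modules are loaded and, as an example only, to switch drivers by disabling one of them; reboot afterwards and watch the vmkernel log when the NIC next dies:]

# Show which igb-related modules are present and enabled
esxcli system module list | grep -i igb
# Example: disable the legacy igb module so the native igbn driver claims the 82576 ports (or vice versa)
esxcli system module set --module=igb --enabled=false
# Check the vmkernel log for resets or hangs on the affected uplink
grep -i vmnic2 /var/log/vmkernel.log | tail -n 20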
