Channel: VMware Communities : Discussion List - ESXi
Viewing all 8132 articles

dead I/O on igb-nic (ESXi 6.7)


Hi,

 

I'm running a homelab with ESXi 6.7 (build 13006603). I have three NICs in my host: two onboard and one Intel ET 82576 dual-port PCIe card. All NICs are assigned to the same vSwitch; currently only one is connected to the physical switch.

When I'm using one of the 82576 NICs and put heavy load on it (like backing up VMs via Nakivo B&R), the NIC stops working after a while and is dead/not responding. Only a reboot of the host or (much easier) physically reconnecting the NIC (cable out, cable in) solves the problem.

 

I suspected a driver issue, so I updated to the latest driver from Intel:

 

 

[root@esxi:~] /usr/sbin/esxcfg-nics -l

Name    PCI          Driver      Link Speed      Duplex MAC Address       MTU    Description

vmnic0  0000:04:00.0 ne1000      Down 0Mbps      Half   00:25:90:a7:65:dc 1500   Intel Corporation 82574L Gigabit Network Connection

vmnic1  0000:00:19.0 ne1000      Up   1000Mbps   Full   00:25:90:a7:65:dd 1500   Intel Corporation 82579LM Gigabit Network Connection

vmnic2  0000:01:00.0 igb         Down 0Mbps      Half   90:e2:ba:1e:4d:c6 1500   Intel Corporation 82576 Gigabit Network Connection

vmnic3  0000:01:00.1 igb         Down 0Mbps      Half   90:e2:ba:1e:4d:c7 1500   Intel Corporation 82576 Gigabit Network Connection

[root@esxi:~] esxcli software vib list|grep igb

net-igb                        5.2.5-1OEM.550.0.0.1331820            Intel   VMwareCertified   2019-06-16

igbn                           0.1.1.0-4vmw.670.2.48.13006603        VMW     VMwareCertified   2019-06-07

 

Unfortunately this didn't solve the problem.

 

However, this behaviour doesn't occur when I'm using one of the NICs driven by the ne1000 driver.
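One thing worth checking here, as a hedged sketch: when both drivers are installed, ESXi 6.7 tends to prefer the native igbn module over the legacy vmklinux net-igb, so the freshly installed Intel driver may not even be the one in use. The module names below are the two from the vib list; which one actually claims the 82576 ports is an assumption to verify first:

```shell
# Show which driver each NIC is actually bound to
esxcli network nic list

# If the native igbn module owns the ports, disable it so the legacy
# net-igb driver takes over (a host reboot is required afterwards)
esxcli system module set --enabled=false --module=igbn

# To revert, re-enable igbn and disable the legacy module instead:
# esxcli system module set --enabled=true  --module=igbn
# esxcli system module set --enabled=false --module=net-igb
```

Testing the load scenario under each driver in turn would at least narrow the fault to one module.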

 

Any idea how to solve the issue?

(... or at least dig down to its root?)

 

Thanks a lot in advance.

 

Regards

Chris

 

PS: I found another thread which might be related to my problem: Stopping I/O on vmnic0. Same system behaviour, same driver.


[SOLVED] Problems getting the IBM ServeRAID M1015 SAS/SATA Controller to work with a VM on ESXi 6.5


I had the IBM ServeRAID M1015 running OK with ESXi 5.1 until an extended power failure that outlasted the UPS backup.  The ESXi 5.1 on a USB got corrupted so I decided to install fresh using ESXi 6.5.

 

The install went OK and I was able to get my other VMs going, but the one that used the IBM ServeRAID M1015 has the following error and won't start:

The systemId does not match the current system or the deviceId, and the vendorId does not match the device currently at 2:0.0.

 

I've read many posts and used Google but I don't seem to be able to figure out what is wrong.  I think the deviceID and vendorID are OK.

Screen Shot 2017-03-12 at 14.59.43.png

From the VM settings:

Screen Shot 2017-03-12 at 17.57.58.png

I've clicked the "Reserve all memory" but that hasn't helped at all.

 

lspci -v gives:

0000:02:00.0 Serial Attached SCSI controller Mass storage controller: LSI Logic / Symbios Logic LSI2008 [vmhba1]

     Class 0107: 1000:0072

 

The esx.conf shows:

/device/00000:002:00.0/owner = "passthru"

/device/00000:002:00.0/device = "72"

/device/00000:002:00.0/vendor = "1000"

/device/00000:002:00.0/vmkname = "vmhba1"

 

The VM.vmx shows:

pciPassthru0.deviceId = "0072"

pciPassthru0.vendorId = "1000"

pciPassthru0.systemId = "508e907b-f95c-2f44-9df2-6470021053df"

pciPassthru0.id = "02:00.0"

pciPassthru0.pciSlotNumber = "160"

 

I think the system ID is from: /vmfs/volumes/50901e07-4bc606a0-29b7-6470021053df/

 

The esx.conf and VM.vmx now both show 0x72 / 0x1000, but the VM won't start.
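A common cause of the "systemId does not match" message after a reinstall is that the .vmx still carries the system UUID of the old 5.1 host. A hedged sketch of the usual check and fix (the UUID shown is the one already in this VM's .vmx; removing the stale passthrough lines so they are regenerated is a commonly suggested approach, not a guaranteed one):

```shell
# Compare the current host's UUID with the one stored in the .vmx
esxcli system uuid get

# With the VM powered off, remove (or correct) the stale line in VM.vmx,
# then remove and re-add the PCI device in the VM's settings so ESXi
# regenerates the passthrough entries for the new installation:
#   pciPassthru0.systemId = "508e907b-f95c-2f44-9df2-6470021053df"
```

If the host UUID printed above differs from the systemId in the .vmx, that mismatch alone is enough to block power-on.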

 

Yes, the IBM ServeRAID M1015 is certified to work on 6.5.

 

What have I missed?  What should I be doing next?

Thank you in advance.

host hardware voltage


I have a "host hardware voltage" error in vSphere on one of my ESXi hosts.

 

details :

 

vSphere Client version 6.7.0.40000

Hardware Model : UCSB-B200-M4

Is This ESXi Deployment Correct?


We're upgrading our servers from a traditional physical file server to virtual servers, and we decided to go with VMware ESXi rather than MS Hyper-V.

 

Current Environment:

1- One DC server with all business apps installed on it: ERP, users' backup/sync files, shares, etc.

2- Two virtual servers running on a separate old NAS device (one running MS-SQL with some apps, and the other server for a LAB)

 

New Environment:

I've been advised to setup the new environment & HW as follows due to the limited budget:

-One rack-server with high specs with:

--2 x 300GB SAS HDDs, configured on RAID-1 -as datastore1- used to install all 3 VMs on them only

--4 x 1TB SAS HDDs, configured on RAID-10 -as datastore2- used for SQL-DB, apps, users backup/sync data, users share

 

We bought all the HW, and I set up one VM as a DC running Windows Server 2019 Std and another VM with SQL Server 2019 on Windows Server 2019, all on datastore1.

 

My questions now:

1- Is this setup scenario correct, especially the HDDs, the RAID types, and isolating the VM OS disks from the apps and data?

2- I couldn't place the SQL [data] folder on datastore2 (RAID-10): since the SQL VM lives on datastore1, I couldn't connect it to datastore2, so I kept the data on datastore1. This will become a problem as the DB grows, so how do I move the [data] folder to datastore2?

3- I noticed a problem with datastore1: no space is left although I should have 100GB free, because the 2 VMs were each created with 100GB!
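For question 2, one common approach is to create a second virtual disk on datastore2, attach it to the SQL VM, and relocate the database files onto it from inside Windows. A sketch only; the folder name, 500GB size, and thin provisioning below are example assumptions:

```shell
# Create a thin-provisioned 500GB data disk on datastore2
mkdir -p /vmfs/volumes/datastore2/SQLVM
vmkfstools -c 500G -d thin /vmfs/volumes/datastore2/SQLVM/sqldata.vmdk
```

Then attach it via the VM's Edit Settings (Add hard disk, Existing hard disk), format it in Windows, and move the database files with SQL Server's ALTER DATABASE ... MODIFY FILE procedure.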

 

Can anyone advise me with this please?

Thanks in advance

physical RDM LUNs pointer files migration


I have an MS cluster of 2 virtual machines, and I need to migrate the physical RDM LUN pointer files and VMDKs to another datastore. [Each VM has 45 RDMs and 5 disks (VMDKs).]

 

I know it is possible to move the pRDM pointer files if I shut down the VMs, but I need some details.

 

Is it possible, using Storage vMotion, to have my VMDKs and pRDM pointer files migrated automatically to the newly selected datastore without removing the pRDMs? Will it not convert my RDMs to VMDKs?

Are there any screenshots/videos I can look at? I don't have a test environment to try this in.

ESXi 6.7 - Active Directory accounts & Home Directories


VMware Knowledge Base

 

The above KB is an old article for ESXi 4.1 showing how to manually create home directories for Active Directory accounts. Is there an up-to-date KB article for ESXi 6.7, or how is this done in 6.7? Currently, Active Directory accounts are receiving the following error: Could not chdir to home directory /home/[username]: No such file or directory

ESXi vlans and relation to Switch


vlans.PNG

 

Hi everyone... this is what I have to work with...

I have a DL380 G10 host with esxi 6.7 hpe image.

I have a 3com switch 48 port with support for 802.1q and vlans in general...

 

The specifications for the project call for 4 vlans as described in the image and the presence of those vlans also in the vms inside the host.

 

Since I never treated vlans since I'm really kind of a noob when it comes to high level networking I thought I would ask you guys...

It's also for me for the future to understand what I'm doing...

 

I originally thought to do something like in the picture, with 2 uplinks to the switch per VLAN for redundancy, but it occurred to me that this is probably not needed...

If I only use 2 uplinks, I should be able to tag them to work as trunks (forgive me if I use incorrect terms, and please DO correct me) and carry all VLANs to the physical switch...
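That intuition is the usual pattern: two uplinks configured as an 802.1Q trunk on the switch side, plus one tagged port group per VLAN on the vSwitch. A hedged sketch, where vSwitch0, the port group names, and VLAN IDs 10/20 are placeholders for the four IDs in the project spec:

```shell
# One port group per VLAN on the existing vSwitch; repeat for all four VLANs
esxcli network vswitch standard portgroup add -v vSwitch0 -p "VLAN10"
esxcli network vswitch standard portgroup set -p "VLAN10" --vlan-id 10
esxcli network vswitch standard portgroup add -v vSwitch0 -p "VLAN20"
esxcli network vswitch standard portgroup set -p "VLAN20" --vlan-id 20
```

On the 3com side, the two ports that the uplinks connect to would be configured as tagged (trunk) members of all four VLANs.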

 

What should I do? please help

 

thanks a lot! Fabio

Cannot boot from ESXi 7.0


Hi,

 

I tried to do a clean 7.0 install the first time and couldn't boot after installation; the BIOS couldn't see the disk as bootable.

 

So I decided to install 6.7u3b and do an upgrade afterwards, but the problem still persists: I could boot fine under 6.7, but as soon as I upgraded to 7.0 the disk was not bootable anymore.

 

No warnings during installation.

 

Any help will be very appreciated.

 

Thanks.


State Transition (VM_STATE_ON -> VM_STATE_CREATE_SNAPSHOT) not allowed for this Vm


Can't figure this one out and Google searching isn't coming up with much.

 

I have 13 virtual machines in my environment.  They all use the same datastore.  11 of them have no problems, but 2 of them fail with this error when trying to make a snapshot.

 

I tried restarting vmware services on both the hypervisor and vsphere.  I tried removing and re-adding the VMs from inventory.  None of the VMs directories have snapshot files already. 
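Beyond the service restarts already tried, this state-transition error usually means hostd still holds a stale task against those two VMs. A hedged sketch for inspecting that from the ESXi shell (the VM ID placeholder is whatever the first command prints for the affected VMs):

```shell
# List VM IDs, then check for lingering tasks on the two failing VMs
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/get.tasklist <vmid>

# If a stale task shows up, restarting the management agents clears it
/etc/init.d/hostd restart
/etc/init.d/vpxa restart
```

If the task list is clean on both VMs, the lock may instead live in vCenter's view of them, which re-registering usually resets.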

ESXi 7.0: UEFI Network boot via IPv6 missing


Dear VMware community,

 

I have just installed VMware ESXi 7.0 to perform a PoC of running the hypervisor as well as VMs on an IPv6-only network. Now I have come across an issue: when I start a virtual machine, the internal UEFI code only attempts to boot via IPv4. I only see >>Start PXE over IPv4 and not the typical >>Start PXE over IPv6.

 

Has the IPv6 code been omitted from ESXi UEFI boot code?

Is there a version of UEFI for ESXi which would enable this feature?

Or is there perhaps something I have missed in the ESXi configuration? (Note that the ESXi itself runs fine via IPv6-only access, so does any installed system such as Windows or Linux. It's just the UEFI boot code that lacks support for boot over an IPv6-only network.)

 

Thank you all for pointers on how to achieve this.

 

Regards,

Radek Zajic

ESXI 6.7 U3 The system is not responding


I found the following possible errors.

Is this a network card problem?

This should not be a storage problem, right ?

My ESXi host's network card model is QLogic 57840.

Thank you very much for your answer.

 

vmkernel.log

 

 

2020-05-21T12:03:25.097Z cpu56:2123550)WARNING: qfle3i:vmhba70:qfle3i_get_rdp_info:1261 failed to read SFP parameters

2020-05-21T12:03:25.097Z cpu56:2123550)WARNING: qfle3i:vmhba70:qfle3i_get_rdp_info:1261 failed to read SFP parameters

2020-05-21T12:03:25.098Z cpu56:2123550)WARNING: qfle3i:vmhba68:qfle3i_get_rdp_info:1261 failed to read SFP parameters

2020-05-21T12:03:25.098Z cpu56:2123550)WARNING: qfle3i:vmhba68:qfle3i_get_rdp_info:1261 failed to read SFP parameters

2020-05-21T12:03:25.098Z cpu56:2123550)WARNING: qfle3i:vmhba66:qfle3i_get_rdp_info:1261 failed to read SFP parameters

2020-05-21T12:03:25.098Z cpu56:2123550)WARNING: qfle3i:vmhba66:qfle3i_get_rdp_info:1261 failed to read SFP parameters

2020-05-21T12:03:25.098Z cpu56:2123550)WARNING: qfle3i:vmhba64:qfle3i_get_rdp_info:1261 failed to read SFP parameters

2020-05-21T12:03:25.098Z cpu56:2123550)WARNING: qfle3i:vmhba64:qfle3i_get_rdp_info:1261 failed to read SFP parameters

ESXI 6.0 HELP


Dear all,

 

I have a NAS that was configured with ESXi 6.0, connected to 2 HP ProCurve switches. Unfortunately, both switches were reset to defaults. Now ESXi can't connect to the datastore on the NAS, nor to any of the VMs installed on the NAS server. Please help!

 

The first screenshot is the ESXi and the second one is the NAS storage.

Recommended budget CPU for ESXi home lab?


I want to set up an ESXi home lab to run a couple of Windows and/or Ubuntu VMs. Budget is up to £200 for the CPU. Can anyone recommend a decent CPU within that budget which will work well with ESXi? Would a Ryzen 2700X be a good choice, or would something with an onboard GPU be better?

ESXi 6.5 only using half of CPU resources


Hello everyone,

 

*background*

I'm new to both VMware products and sysadmin work in general, so forgive me if some of this seems ignorant. I've inherited the task of troubleshooting a server that we've had for over a year now and that has never been usable for my business. The main problem keeping it from usability is that any VMs installed on it encounter a soft freeze after a few days of idling, and the only way to fix it is to restart the host. I'm still working on narrowing that one down, but in the meantime I've found an issue that (I hope) is related and also prohibitive to this server's operation.

 

*host specs*

MPN: Dell Poweredge R620

CPU: 2x Xeon E5-2665 @2.4GHz

RAM: 8x4GB DDR3 @ 1600MHz

Storage: 4x 1TB @ 7200RPM Seagate Constellation ST91000640NS, configured in RAID 10

OS: Dell's curated version of ESXi 6.5 found on the support site for this product.

Firmware: It's all up to date, but I can provide versions for specific pieces if needed. I've updated this during the troubleshooting process and have confirmed that nothing has changed.

 

*Extras*

I'm using Veeam ONE(Community edition) as a secondary monitor for host resource usage

Additionally, I've installed the iDRAC Service Module in the host OS and enabled the ESXi shell for monitoring.

 

*the problem*

The ESXi host will never go past 50% of CPU consumption; it's a hard line that I've verified through the ESXi HTML5 client, Veeam ONE, and esxtop. Any VM installed on the host will max out at 50% of its CPU allocation by clock speed while the guest OS is getting crushed by CpuStres or a Linux shell analog. Changing the number of vCPUs doesn't seem to have an effect on this, with cycles scaling accordingly. I've tested the host directly by entering "dd if=/dev/zero of=/dev/null&" into the ESXi shell once for each core/thread I want to test, and looking at the per-core/thread stats on esxtop's power screen (p). The %USED stat always maxes out at 50, while the %UTIL stat is at 100 and the %A/MPERF stat is perfectly static at 50.0 and never fluctuates regardless of load. I've repeated this with "logical processor" (AKA hyperthreading) disabled in the BIOS and reproduced the same results.

 

Note that while the first picture displays different %used stats than I've described, these numbers only last until the first "refresh" of esxtop which is when they're 50% across the board. I assumed this was a reporting error but decided to include it just in case.

ESXtop Host CPU Stress.PNG

 

I'm not sure why this picture is displaying the %used as 25 this time, I've confirmed that it's displayed 50 with hyperthreading enabled in past tests, but I'd been fiddling in the BIOS a bit before this and may have inadvertently caused this.

Web Client Host max CPU Usage HT.png

ESXtop Host CPU stress HT.PNG

 

I basically have free rein to do whatever I want to troubleshoot this. I've booted it to an Ultimate Boot CD and it seems like it's capable of fully consuming the CPU there, but I may be misreading it. Screenshot attached below. Note that hyperthreading is turned off here.

UBCD CPU Stress .PNG

Lastly, here's an example of a VM maxed out in the guest OS but only using half the host resources.

VM1 Guest OS CPU Stress HT.PNGVeeam ONE VM1 Host CPU Stress HT.PNG

I can and am willing to do basically anything to get to the bottom of this. I just want to know if I'm focusing my efforts in the right place, and what else I can do to narrow down this issue. If it winds up being a hardware issue, so be it, but the reseller we got this from has been less than helpful on warranty work, so I want to have a strong case if we point the finger at them again. Lastly, here are some screenshots of the BIOS with HT enabled.

BIOS CPU 1.PNGBIOS CPU 2.PNGBIOS Memory.PNGBIOS Sys Profile.PNG

Intel X550 10gbe network adapter not showing under storage adapters


Hey so i got some new dell servers, R640. They have 4 10gbe interfaces. Two intel and two broadcom.

 

I cabled up the Intel NICs and set them up fine. I added them to the virtual switch and gave storage IP addresses to the ports, until I got to the part where I generally set up the storage connection: the adapters do not appear in the list of storage adapters. I can add a virtual iSCSI storage adapter, but I can only add one. That does work, but it seems buggy to me, in that when I go and scan for HBAs the whole ESXi host grinds to a halt for hours.

 

I am thinking this is because the physical adapters are not rendering under storage adapters.

 

How do i get them to appear in this list? do i have to enable something on the card itself in the firmware?

 

The other two 10GbE NICs (Broadcom BCM57416 NetXtreme-E 10GBASE-T): maybe they will show up under storage adapters when they are cabled up? Does the cable status affect whether cards show up under storage adapters, or do I have to do something else to make them show up there? In my memory they were always just there.

 

ESXi release is 6.7.0 15160138 provided by dell EMC.
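For what it's worth, a hedged explanation: the X550 (and the BCM57416 in plain NIC mode) exposes no iSCSI offload engine to ESXi, so those ports will never appear as storage adapters regardless of cabling. Only one software iSCSI adapter exists per host, and multipathing over several NICs is done by binding multiple vmkernel ports to it. A sketch, where vmhba64, vmk1, and vmk2 are example names to replace with the host's actual ones:

```shell
# Enable the single software iSCSI adapter and confirm it exists
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list

# Bind both storage-network vmkernel ports to it for multipathing
esxcli iscsi networkportal add -A vmhba64 -n vmk1
esxcli iscsi networkportal add -A vmhba64 -n vmk2
```

With the port bindings in place, a single software adapter handling both NICs is the expected (not buggy) configuration; the rescan hang is worth investigating separately.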

 

 

 


Evaluation Datastore 8GB


Hi, I've installed an evaluation copy of ESXi on a nice Dell R720 which has at least a 130GB hard drive, plus a 1TB data drive. I logged into vCenter only to see that 'datastore1' is 8GB! Did I miss something or really bungle the install?

TIA!

Unable to setup passwordless ssh to esx node


May I know why I cannot do passwordless SSH to an ESXi host?

 

[mahmood@hpc ~]$ ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/home/mahmood/.ssh/id_rsa):

Enter passphrase (empty for no passphrase):

Enter same passphrase again:

Your identification has been saved in /home/mahmood/.ssh/id_rsa.

Your public key has been saved in /home/mahmood/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:WlIY5cI9vrxfN3Wh0jyyyl9YM73Y+IwlVQkYD/AzPuA mahmood@hpc.scu.ac.ir

The key's randomart image is:

+---[RSA 2048]----+

|      .....oo.   |

|     . =  ..o . .|

|      + =. + . o.|

|       +..o = o o|

|      . SE = O oo|

|       = .  B O.o|

|      . o  o.=o+ |

|        ......*. |

|        .+o. . o |

+----[SHA256]-----+

[mahmood@hpc ~]$ ssh-copy-id root@10.1.1.101

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/mahmood/.ssh/id_rsa.pub"

/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys

Password:

Warning: untrusted X11 forwarding setup failed: xauth key data not generated

 

Number of key(s) added: 1

 

Now try logging into the machine, with:   "ssh 'root@10.1.1.101'"

and check to make sure that only the key(s) you wanted were added.

 

[mahmood@hpc ~]$ ssh root@10.1.1.101

Password:

Warning: untrusted X11 forwarding setup failed: xauth key data not generated

The time and date of this login have been sent to the system logs.

 

WARNING:

   All commands run on the ESXi shell are logged and may be included in

   support bundles. Do not provide passwords directly on the command line.

   Most tools can prompt for secrets or accept them from standard input.

 

VMware offers supported, powerful system administration tools.  Please

see www.vmware.com/go/sysadmintools for details.

 

The ESXi Shell can be disabled by an administrative user. See the

vSphere Security documentation for more information.

[root@localhost:~]

[root@localhost:~] ls -l .ssh/

total 4

-rw-------    1 root     root           403 May 23 15:25 authorized_keys

 

 

Any idea about that?
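The likely cause: ESXi's sshd does not read ~/.ssh/authorized_keys (the file ssh-copy-id populated above); it is configured to read a per-user file under /etc/ssh/. A sketch of the usual fix, reusing the host IP from the session above:

```shell
# ESXi's sshd_config sets
#   AuthorizedKeysFile /etc/ssh/keys-%u/authorized_keys
# so the key appended to ~/.ssh/authorized_keys is simply ignored.
cat ~/.ssh/id_rsa.pub | ssh root@10.1.1.101 \
    'cat >> /etc/ssh/keys-root/authorized_keys'
```

After that, `ssh root@10.1.1.101` should log in without a password prompt.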

ESXi does not allow to pass my RTX 2080 ti from host os (linux + qemu + kvm + vfio) to a new VM


Hello

 

I'm running ESXi 7 on top of my Linux Ubuntu / qemu / kvm / vfio configuration. I have replicated, with small changes, the same configuration that I usually use for running a Windows 10 guest on top of qemu/kvm, configured for sharing my RTX 2080 Ti graphics card between the Ubuntu host and the Windows 10 guest OS. Now I would like to know how I can pass my graphics card to a new Windows 10 VM created in ESXi 7.0, because the web interface says that all my devices are not capable of doing SR-IOV and can't be passed through, as you can see from the image attached below. I also went into maintenance mode to assign the device, because I saw in some video tutorials that this is the method to do that, but it didn't work. I read somewhere that I can pass the graphics card through even if my motherboard is not SR-IOV enabled. Is this true? Thanks.

ESXCLI | LUN ID for protocol endpoint.


Hi,

Is there any way to check which LUN ID belongs to the protocol endpoint?

The LUN ID is not shown by the following command:

esxcli storage vvol protocolendpoint list
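One hedged workaround, assuming the protocol endpoint also shows up as a regular SCSI device (which it does on 6.x): find the PE's device UID in the core device listing, then read the LUN number from that device's paths:

```shell
# PE devices are flagged in the device listing
esxcli storage core device list | grep -B 14 -i "Is VVOL PE: true"

# The LUN number appears in the path details for that device UID
esxcli storage core path list -d <device_uid>
```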

 

Thanks

6.7 U2: wrong WWNs


Got 2 HP servers, each with one 2-port FC HBA and one 1-port FC HBA, and ESXi shows incorrect WWNs in the web GUI.
Same as here: VMware: Fiber Channel HBA Reports Wrong WWN? | PeteNetLive

Why is that?

 

6.7.0 Update 2 (Build 13006603)

 

esxcli storage core adapter list

HBA Name  Driver      Link State  UID                                   Capabilities         Description

--------  ----------  ----------  ------------------------------------  -------------------  -----------------------------------------------------------------------------

vmhba1    qlnativefc  link-up     fc.5001438028551b81:5001438028551b80  Second Level Lun ID  (0000:11:00.0) QLogic Corp ISP2532-based 8Gb Fibre Channel to PCI Express HBA

 

qlogic.png

 

esxcli storage core adapter list

vmhba2    lpfc        link-up     fc.20000000c9aa0b7a:10000000c9aa0b7a  Second Level Lun ID  (0000:08:00.0) Emulex Corporation Emulex LPe12000 8Gb PCIe Fibre Channel Adapter

vmhba3    lpfc        link-up     fc.20000000c9aa0b7b:10000000c9aa0b7b  Second Level Lun ID  (0000:08:00.1) Emulex Corporation Emulex LPe12000 8Gb PCIe Fibre Channel Adapter

emu1.png

emu2.png


