Channel: VMware Communities : Discussion List - ESXi

Newbie question - Booting ESXi via USB stick

Hello,

 

 

I've been using an Intel NUC 8th gen as my home ESXi 6.7 U3 server for 6 months. I installed ESXi on the internal SSD and it has been working well. Something recently corrupted my host and I want to rebuild. The SSD also holds Datastore 1, and a second internal SSD holds Datastore 2.

 

 

Anyway, many people have suggested booting off a USB key instead and keeping Datastore 1 and 2 on their respective disks. I like this idea.

 

 

Well, I used Rufus to build a bootable USB drive with 6.7 U3. It boots from the USB key fine, but the installer reaches a point where it can't find a network card/device and has to halt. The NUC has a 1 GbE port, and strangely, when I installed to the internal SSD 6 months ago the installer found it. Why is booting off USB different? Do I need to inject the drivers?

 

 

If I reboot into my corrupt host I can ping it, so the NIC is fine and recognised.
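Since the corrupt install still boots with a working NIC, it can tell you which driver (and VIB) it uses, so the same driver can be injected into the installer image. A hedged sketch using standard esxcli commands from SSH or the ESXi shell; vmnic0 is an assumed example name:

```shell
# On the still-bootable (corrupt) host:
esxcli network nic list                 # shows each vmnic and the driver it uses
esxcli network nic get -n vmnic0        # vmnic0 assumed; prints driver name and version
esxcli software vib list | grep -i net  # lists installed network driver VIBs
```

The installer should ship the same inbox drivers regardless of boot medium, so comparing the driver name against the Rufus image may reveal whether that image was built from a different (e.g. vendor-customised) ISO.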

 

 

I don't know what to do now.

 

 

Any ideas?

 

 

Thanks

 


Can't start VMware VM in current state


Hello,

I am John Tankersley. I am running VMware ESXi 6.0 on an HP P4300 G2.

I have 2 Xeon processors with 4 cores each, for a total of 8 cores at 2.6 GHz.

I have about 8 gigabytes of RAM.

I can log into ESXi 6.0 on the server by entering its IP address into a browser on a client machine.

I can't start the installed virtual machine. I keep getting a message that the VM can't be started in its current state.

Can anyone give me any help?

John Tankersley

john_tnkrsly@yahoo.com

Thank-you.
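When a VM refuses to power on "in its current state", inspecting it from the ESXi shell can narrow things down, because attempting the power-on there prints the underlying error. A hedged sketch with standard vim-cmd calls; the VM ID placeholder is discovered from the first command:

```shell
vim-cmd vmsvc/getallvms              # list registered VMs and their numeric IDs
vim-cmd vmsvc/power.getstate <vmid>  # current power state (replace <vmid>)
vim-cmd vmsvc/power.on <vmid>        # a failed power-on here shows the real error text
```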

The ramdisk 'var' is full


Hi guys.

 

I am thinking of rebooting the host that shows this error message. Is there anything else I can check prior to a reboot?

I am not sure how to get onto the host to check which files are there.
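Before rebooting, ramdisk usage and the offending files can be checked from the ESXi shell (enable SSH or use the console). A hedged sketch with standard ESXi commands:

```shell
vdf -h                  # shows ramdisk usage, including the 'var' ramdisk
du -sh /var/* 2>/dev/null   # which directories under /var are biggest
ls -lh /var/log         # runaway log files are a common cause of a full 'var'
```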

 

Thanks

ESXi 6.7 with "net51-r8169" network driver, dependency error of "vmkapi"


Hello,
I must virtualize my dedicated server with ESXi. Processor of the server is a Ryzen 3600 and I need to use ESXi 6.7 or newer. Server's network adapter is RealTek RTL-8169 and it's driver hasn't been supported by default since ESXi 6.1 (https://vibsdepot.v-front.de/wiki/index.php/Net51-r8169 ). This page says the driver does NOT work With: ESXi 6.7 and newer.
I tried to inject the driver with VMWare PowerCLI into latest ESXi 6.7 but some kind of a dependency problem occurred and below are the logs:

 

PowerCLI C:\> .\ESXi\ESXi-Customizer-PS-v2.6.0.ps1 -v67 -vft -load sata-xahci,esxcli-shell,net51-r8169

 

This is ESXi-Customizer-PS Version 2.6.0 (visit https://ESXi-Customizer-PS.v-front.de for more information!)

(Call with -help for instructions)

 

Logging to C:\Users\mbdor\AppData\Local\Temp\ESXi-Customizer-PS-29064.log ...

 

Running with PowerShell version 5.1 and VMware PowerCLI version 6.5.0.2604913

 

Connecting the VMware ESXi Online depot ... [OK]

 

Connecting the V-Front Online depot ... [OK]

 

Getting Imageprofiles, please wait ... [OK]

 

Using Imageprofile ESXi-6.7.0-20191204001-standard ...

(dated 11/25/2019 11:42:42, AcceptanceLevel: PartnerSupported,

Updates ESXi 6.7 Image Profile-ESXi-6.7.0-20191204001-standard)

 

Load additional VIBs from Online depots ...

   Add VIB sata-xahci 1.42-1 [New AcceptanceLevel: CommunitySupported] [OK, added]

   Add VIB esxcli-shell 1.1.0-15 [OK, added]

   Add VIB net51-r8169 6.011.00-2vft.510.0.0.799733 [OK, added]

 

Exporting the Imageprofile to 'C:\ESXi\ESXi-6.7.0-20191204001-standard-customized.iso'. Please be patient ...

 

WARNING: The image profile fails validation.  The ISO / Offline Bundle will still be generated but may contain errors and may not boot or be functional.  Errors:

WARNING:   VIB VFrontDe_bootbank_net51-r8169_6.011.00-2vft.510.0.0.799733 requires vmkapi_2_1_0_0, but the requirement cannot be satisfied within the ImageProfile. However, additional VIB(s)

VMware_bootbank_esx-base_5.5.0-0.14.1598313, VMware_bootbank_esx-base_6.0.0-3.87.8934903, VMware_bootbank_esx-base_5.1.0-3.52.2575044, VMware_bootbank_esx-base_6.0.0-3.100.9313334,

VMware_bootbank_esx-base_5.1.0-0.5.838463, VMware_bootbank_esx-base_6.0.0-3.57.5050593, VMware_bootbank_esx-base_6.0.0-0.0.2494585, VMware_bootbank_esx-base_6.5.0-1.29.6765664,

VMware_bootbank_esx-base_6.0.0-0.8.2809111, VMware_bootbank_esx-base_5.1.0-1.19.1312874, VMware_bootbank_esx-base_6.5.0-2.71.10868328, VMware_bootbank_esx-base_6.0.0-3.76.6856897,

VMware_bootbank_esx-base_5.1.0-1.15.1142907, VMware_bootbank_esx-base_5.1.0-2.29.1900470, VMware_bootbank_esx-base_5.1.0-3.60.3070626, VMware_bootbank_esx-base_5.1.0-3.82.3872638,

VMware_bootbank_esx-base_6.5.0-2.61.10175896, VMware_bootbank_esx-base_6.5.0-3.96.13932383, VMware_bootbank_esx-base_6.5.0-0.11.5146843, VMware_bootbank_esx-base_5.5.0-3.114.7967571,

VMware_bootbank_esx-base_5.5.0-1.28.1892794, VMware_bootbank_esx-base_6.0.0-2.37.3825889, VMware_bootbank_esx-base_6.0.0-0.6.2715440, VMware_bootbank_esx-base_6.5.0-2.64.10390116,

VMware_bootbank_esx-base_5.5.0-2.54.2403361, VMware_bootbank_esx-base_5.1.0-2.47.2323231, VMware_bootbank_esx-base_6.0.0-0.11.2809209, VMware_bootbank_esx-base_5.1.0-3.50.2323236,

VMware_bootbank_esx-base_5.1.0-1.16.1157734, VMware_bootbank_esx-base_6.0.0-1.20.3073146, VMware_bootbank_esx-base_6.0.0-2.34.3620759, VMware_bootbank_esx-base_6.5.0-3.101.14320405,

VMware_bootbank_esx-base_5.5.0-2.62.2718055, VMware_bootbank_esx-base_6.5.0-1.26.5969303, VMware_bootbank_esx-base_6.5.0-0.9.4887370, VMware_bootbank_esx-base_5.1.0-1.22.1472666,

VMware_bootbank_esx-base_5.1.0-0.10.1021289, VMware_bootbank_esx-base_6.0.0-3.113.13003896, VMware_bootbank_esx-base_5.1.0-3.85.3872664, VMware_bootbank_esx-base_6.0.0-3.58.5224934,

VMware_bootbank_esx-base_5.5.0-3.107.7618464, VMware_bootbank_esx-base_5.5.0-0.0.1331820, VMware_bootbank_esx-base_5.5.0-3.86.4179631, VMware_bootbank_esx-base_5.5.0-3.71.3116895,

VMware_bootbank_esx-base_6.0.0-3.138.15169789, VMware_bootbank_esx-base_6.0.0-0.14.3017641, VMware_bootbank_esx-base_6.0.0-3.72.6765062, VMware_bootbank_esx-base_6.0.0-0.5.2615704,

VMware_bootbank_esx-base_6.5.0-0.14.5146846, VMware_bootbank_esx-base_6.5.0-2.83.13004031, VMware_bootbank_esx-base_6.0.0-3.66.5485776, VMware_bootbank_esx-base_5.1.0-0.9.914609,

VMware_bootbank_esx-base_5.1.0-0.0.799733, VMware_bootbank_esx-base_6.5.0-2.57.9298722, VMware_bootbank_esx-base_6.5.0-3.108.14990892, VMware_bootbank_esx-base_5.5.0-3.103.6480267,

VMware_bootbank_esx-base_6.5.0-1.33.7273056, VMware_bootbank_esx-base_6.5.0-2.50.8294253, VMware_bootbank_esx-base_5.1.0-2.41.2191354, VMware_bootbank_esx-base_5.5.0-1.18.1881737,

VMware_bootbank_esx-base_5.1.0-2.28.1743533, VMware_bootbank_esx-base_6.0.0-1.26.3380124, VMware_bootbank_esx-base_6.0.0-3.129.14513180, VMware_bootbank_esx-base_5.5.0-3.81.3343343,

VMware_bootbank_esx-base_6.0.0-3.79.6921384, VMware_bootbank_esx-base_5.5.0-2.42.2302651, VMware_bootbank_esx-base_5.5.0-3.106.6480324, VMware_bootbank_esx-base_6.0.0-2.40.4179598,

VMware_bootbank_esx-base_5.1.0-1.20.1312873, VMware_bootbank_esx-base_5.5.0-3.124.9919047, VMware_bootbank_esx-base_5.5.0-3.120.9313066, VMware_bootbank_esx-base_5.1.0-3.55.2583090,

VMware_bootbank_esx-base_5.1.0-2.27.1743201, VMware_bootbank_esx-base_6.0.0-1.22.3247720, VMware_bootbank_esx-base_5.5.0-3.95.4345813, VMware_bootbank_esx-base_5.5.0-3.117.8934887,

VMware_bootbank_esx-base_5.5.0-1.16.1746018, VMware_bootbank_esx-base_6.5.0-0.23.5969300, VMware_bootbank_esx-base_6.5.0-3.105.14874964, VMware_bootbank_esx-base_6.0.0-3.93.9239792,

VMware_bootbank_esx-base_5.5.0-2.58.2638301, VMware_bootbank_esx-base_5.5.0-2.59.2702869, VMware_bootbank_esx-base_6.0.0-1.31.3568943, VMware_bootbank_esx-base_6.5.0-2.67.10719125,

VMware_bootbank_esx-base_5.5.0-3.89.4179633, VMware_bootbank_esx-base_5.1.0-3.57.3021178, VMware_bootbank_esx-base_5.1.0-0.8.911593, VMware_bootbank_esx-base_5.5.0-3.75.3247226,

VMware_bootbank_esx-base_6.0.0-3.116.13635687, VMware_bootbank_esx-base_5.5.0-2.55.2456374, VMware_bootbank_esx-base_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.0.0-2.52.4600944,

VMware_bootbank_esx-base_5.5.0-2.39.2143827, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-base_5.5.0-1.15.1623387, VMware_bootbank_esx-base_6.0.0-2.43.4192238,

VMware_bootbank_esx-base_6.0.0-3.96.9239799, VMware_bootbank_esx-base_5.5.0-2.65.3029837, VMware_bootbank_esx-base_6.5.0-0.15.5224529, VMware_bootbank_esx-base_6.5.0-3.111.15177306,

VMware_bootbank_esx-base_5.5.0-3.101.5230635, VMware_bootbank_esx-base_5.1.0-2.23.1483097, VMware_bootbank_esx-base_6.0.0-1.29.3568940, VMware_bootbank_esx-base_6.0.0-2.54.5047589,

VMware_bootbank_esx-base_6.5.0-1.47.8285314, VMware_bootbank_esx-base_5.1.0-2.32.1904929, VMware_bootbank_esx-base_5.5.0-3.84.3568722, VMware_bootbank_esx-base_5.5.0-0.15.1746974,

VMware_bootbank_esx-base_5.5.0-3.100.4722766, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-base_6.0.0-3.69.5572656, VMware_bootbank_esx-base_6.0.0-2.49.4558694,

VMware_bootbank_esx-base_5.5.0-1.25.1892623, VMware_bootbank_esx-base_6.0.0-3.84.7967664, VMware_bootbank_esx-base_6.0.0-1.17.3029758, VMware_bootbank_esx-base_5.5.0-3.97.4756874,

VMware_bootbank_esx-base_6.0.0-3.107.10474991, VMware_bootbank_esx-base_6.5.0-3.116.15256468, VMware_bootbank_esx-base_6.5.0-0.19.5310538, VMware_bootbank_esx-base_5.5.0-0.8.1474528,

VMware_bootbank_esx-base_6.5.0-2.79.11925212, VMware_bootbank_esx-base_6.5.0-2.88.13635690, VMware_bootbank_esx-base_5.1.0-2.35.2000251, VMware_bootbank_esx-base_5.5.0-1.30.1980513,

VMware_bootbank_esx-base_5.5.0-0.7.1474526, VMware_bootbank_esx-base_5.1.0-1.13.1117900, VMware_bootbank_esx-base_6.0.0-3.135.15018929, VMware_bootbank_esx-base_6.5.0-2.75.10884925,

VMware_bootbank_esx-base_5.5.0-2.33.2068190, VMware_bootbank_esx-base_6.5.0-2.92.13873656, VMware_bootbank_esx-base_5.5.0-3.78.3248547, VMware_bootbank_esx-base_5.1.0-0.11.1063671,

VMware_bootbank_esx-base_6.0.0-1.23.3341439, VMware_bootbank_esx-base_5.1.0-2.26.1612806, VMware_bootbank_esx-base_5.5.0-3.92.4345810, VMware_bootbank_esx-base_5.5.0-2.51.2352327,

VMware_bootbank_esx-base_6.5.0-3.120.15256549, VMware_bootbank_esx-base_6.0.0-2.46.4510822, VMware_bootbank_esx-base_5.5.0-2.36.2093874, VMware_bootbank_esx-base_6.5.0-2.54.8935087,

VMware_bootbank_esx-base_5.5.0-3.68.3029944, VMware_bootbank_esx-base_6.0.0-3.110.10719132, VMware_bootbank_esx-base_5.1.0-2.44.2191751, VMware_bootbank_esx-base_6.0.0-3.125.14475122,

VMware_bootbank_esx-base_5.1.0-1.12.1065491 from depot can satisfy this requirement.

 

All done.

 

I coloured the logs in the original post; the red text (the WARNING section above) is the important part. I do not have any idea about "vmkapi". I need help.
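For context: "vmkapi_2_1_0_0" is the VMkernel API version the VIB was built against. The "510.0.0.799733" in the package version marks it as an ESXi 5.1-era build, and the warning itself lists only 5.1/5.5/6.0/6.5 esx-base packages as able to satisfy the dependency, i.e. 6.7's esx-base no longer provides that API. On a host where the VIB is installed, the dependency can be confirmed from the ESXi shell; a hedged sketch:

```shell
# The 'Depends:' field of the VIB metadata names vmkapi_2_1_0_0:
esxcli software vib get -n net51-r8169
```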

Failed to lock the file


I have a standalone ESXi 5.1 host with 3 VMs running on it. A virtual machine crashed on the host, and I rebooted that VM. What is the problem? I get these warnings on the host:

 

 

Lost access to volume 57beefa6-e02fd332-e776-901b0e6c7e5c (datastore1) due to connectivity issues. Recovery attempt is in progress and outcome will be reported shortly.

info, 01.02.2020 20:24:40, datastore1

 

 

Message on appa-01: The operation on the file "/vmfs/devices/deltadisks/23bbc4f-vm-100-disk-1-s001-s001-s0-01-s001.vmdk" failed (Failed to lock the file). The file system where disk "/vmfs/devices/deltadisks/23bbc4f-vm-100-disk-1-s001-s001-s001-s001.vmdk" resides is full. Select Retry to attempt the operation again. Select Cancel to end the session.

info, 01.02.2020 20:25:27, appa-01, User
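"Failed to lock the file" combined with "The file system ... is full" points at an out-of-space datastore holding snapshot delta disks. A hedged sketch of checks from the ESXi shell; the VM folder and disk names are placeholders, since the exact paths in the warning are truncated:

```shell
esxcli storage filesystem list             # free space per datastore (datastore1 here)
ls -lh /vmfs/volumes/datastore1/<vm>/      # look for growing -delta / -s00x snapshot files
vmkfstools -D /vmfs/volumes/datastore1/<vm>/<disk>.vmdk   # dumps lock info: which host holds the lock
```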

Wrong CPU type


Hello,

We migrated some VMs from older servers to new ones.

I changed the EVC mode for each cluster to the highest level.

The virtual hardware level was not changed.

 

Now we found that the wrong processor type was shown (the old one), even after a reboot.

Doing a virtual hardware upgrade and a VMware Tools upgrade fixed this.

 

Since VMware isn't emulating the CPU, shouldn't the right processor type be shown after a reboot? When the VM starts, it looks at the CPU.

ESXTOP - System %RDY


I am looking at my ESXTOP output, and from what I can see my CPU load is OK and my %PCPU is OK. Can anyone tell me what the SYSTEM metric for %RDY is and why it is so high compared to my VMs, which seem to have an acceptable %RDY? I am having issues with sluggish VMs: %VMWAIT seems to spike frequently for a VM, but CPU utilization seems acceptable. I know I am missing something here. Thanks.

 

2:36:27pm up 5 days 16:04, 671 worlds, 3 VMs, 14 vCPUs; CPU load average: 0.21, 0.22, 0.24

PCPU USED(%):  20  22  30 1.9  33 3.2 6.7  28 5.0  31 3.6  32  33 0.1  28 1.3 0.2 0.2 0.3  30  28 0.0  32 0.0  15 0.0  11 0.2 7.9 0.1  31 0.0 AVG:  13

PCPU UTIL(%):  19  21  30 2.5  31 3.9 7.4  27 5.4  29 4.0  29  30 0.2  25 1.4 0.2 0.2 0.3  28  26 0.1  29 0.1  13 0.1  10 0.3 8.5 0.1  28 0.1 AVG:  13

CORE UTIL(%):  39      31      33      32      33      33      30      26     0.4      28      26      29      13      10     8.5      29     AVG:  25

 

      ID      GID   NAME           NWLD   %USED     %RUN   %SYS     %WAIT  %VMWAIT    %RDY   %IDLE  %OVRLP   %CSTP  %MLMTD  %SWPWT
   13679    13679   AAAASERVER02     16  153.50   140.51   0.05   1468.00     0.43    0.13  362.65    0.18    0.00    0.00    0.00
  133816   133816   ZZZSERVER02      14  130.42   120.65   0.09   1285.00    16.59    0.11  264.48    0.14    0.00    0.00    0.00
   13664    13664   PP01             15   54.34    50.21   0.11   1457.78     2.39    0.05  450.44    0.12    0.00    0.00    0.00
       1        1   system          298    1.12  2875.21   0.00  26664.11        -  341.75    0.00    0.93    0.00    0.00    0.00

newly created user


I created a new user on my ESXi 6.5 host.

Roles and permissions are assigned, but I still cannot log in.

The error message is: "Cannot complete due to incorrect user name or password."

 

I have already followed the recommended steps to create new users and assign roles through the web GUI.
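For reference, local users and their permissions on ESXi 6.5 can also be created and verified from the command line, which sometimes reveals a mismatch the web GUI hides. A hedged sketch; "jdoe" and the password are assumed examples:

```shell
esxcli system account list                                   # confirm the user actually exists
esxcli system account add -i jdoe -p 'S3cret!23' -c 'S3cret!23'   # create (id, password, confirmation)
esxcli system permission set -i jdoe -r Admin                # a role is required for login to succeed
esxcli system permission list                                # verify the assignment
```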


Corrupted ISCSI LUN used as a VMFS Datastore


I would like to ask for assistance regarding an iSCSI LUN used as a datastore. The problem started when I upgraded my ESXi host to 6.7. I have created a dump file and would like to know how to proceed.

Your help is much appreciated, continuum.

Thanks

ESXi upgrade


We have PSOD errors on our ESXi hosts; the problem is the NIC driver. Now we want to upgrade the ESXi hosts, the NIC firmware, etc.

The PSOD usually happens when moving a virtual machine to another host: the ESXi host suddenly crashes. I want to ask: how can I upgrade an ESXi host without moving its virtual machines? Some servers have to work 24/7, and the host may crash while VMs are being moved.

 

How can I upgrade without turning off the servers, given this PSOD bug?
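For context: an ESXi upgrade requires a host reboot, so there is no supported way to upgrade with VMs left running on that host; the usual flow is to evacuate (or power off) VMs, enter maintenance mode, then apply the image. A hedged sketch of the command-line flow for one host; the depot path and profile name are placeholders:

```shell
# List profiles in an offline depot ZIP (path and name are assumptions):
esxcli software sources profile list -d /vmfs/volumes/datastore1/<depot.zip>
esxcli system maintenanceMode set --enable true   # VMs must be migrated or powered off first
esxcli software profile update -d /vmfs/volumes/datastore1/<depot.zip> -p <profile>
reboot
```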

Random restarts of Server 2019 on ESX 6.0.0


I have a freshly installed Server 2019 that randomly restarts. The event log shows an uptime event, and the next event, nine seconds later, is a bootup event. There is also a Hyper-V machine running Server 2019 that disappears after these boots. But after rebooting another time, the Hyper-V guest reappears. This is the second time I have encountered this in about a week.

 

Server 2019 is supported by ESX 6.0.0.

[ warning] [guestinfo] GuestInfoGetDiskDevice: Missing disk device name; VMDK mapping unavailable for "/", fsName: "/dev/sda2"


After updating open-vm-tools to 11.0.1, errors began to appear in the logs; they are recorded every minute:

 

[ warning] [guestinfo] GuestInfoGetDiskDevice: Missing disk device name; VMDK mapping unavailable for "/", fsName: "/dev/sda2"

 

 

How can I solve this problem?

 

Ubuntu 18.04.3

ESXI 6.7.0 Update 3 (Build 15160138)
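The message appears to be a warning rather than a failure (open-vm-tools cannot map "/" on /dev/sda2 back to a VMDK for its guestinfo disk report). One possible workaround until it is fixed upstream is lowering the vmsvc log level in tools.conf so the message stops flooding the logs. A hedged sketch; the path and the [logging] key assume the standard open-vm-tools layout on Ubuntu:

```shell
sudo tee -a /etc/vmware-tools/tools.conf >/dev/null <<'EOF'
[logging]
vmsvc.level = error
EOF
sudo systemctl restart open-vm-tools
```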

Overlapping Partitions


Can't have overlapping partitions.

Unable to read partition table for device /vmfs/devices/disks/
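When ESXi reports overlapping partitions, the partition table can be inspected (and, carefully, repaired) with partedUtil from the ESXi shell. A hedged sketch; the device name is a placeholder, since the original message truncates it:

```shell
ls /vmfs/devices/disks/                                    # find the device (naa.* / t10.* / mpx.*)
partedUtil getptbl /vmfs/devices/disks/<device>            # print the current partition table
partedUtil getUsableSectors /vmfs/devices/disks/<device>   # valid sector range, for checking overlaps
```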

CRASH & purple screen on ESX 6.5.0 exception 14


Hello,

I've been using ESXi 6.5.0 release build 5310538 without any problems until now.

Today I got a purple screen with exception 14.

 

I used the forum's search function and read through some threads and posts. I found that sometimes it's a hardware problem, sometimes a software failure, but (on older versions) it could also be a problem with the network card.

 

How can I follow up on this? Any suggestions on what to check or what to do?

Screenshot attached.

esx_error_slavkali.JPG

A cold reboot of the system solved the problem for now, but I want to find the root cause.

 

I'll now go and collect all necessary log files.
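The standard way to collect everything VMware support needs after a PSOD, including the coredump the purple screen references, is a support bundle. A hedged sketch from the ESXi shell:

```shell
vm-support                           # writes a .tgz support bundle (by default under /var/tmp)
esxcli system coredump file list     # confirm a dump file was captured for analysis
```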

 

Thank you for your help,

XFS: Bad Version


Hello,

 

We virtualized a SLES 9 server (32-bit) with VMware vCenter Converter Standalone. All went fine.

After starting the VM (on ESXi 6.5) we got the following error:

 

sles9.png

 

If we mount the filesystem on another Linux system, there is no problem reading the XFS filesystem. But the SLES 9 kernel will no longer mount it.

Has anyone got any suggestions for what I can try to fix the problem?

Thanks in advance !

 

Hartmut


LLDP+Broadcom 10/25g


VMware ESXi, 6.7.0, 15160138.

 

Earlier, many of us had issues with Intel X710 NICs having a hardware LLDP agent that made LLDP unavailable from the VMware side of things. There is a similar issue with Broadcom NICs, but LLDP works fine from the VMware side. The switch, however, gets announcements both from VMware and from the NIC itself: while VMware's LLDP agent transmits the server hostname and vmnic name, the hardware NIC agent only transmits the physical MAC address.

 

It looks like different switch vendors handle this scenario differently: Cisco seems to store only the last value it received, while Arista stores both values.

In the Arista management UI the MAC address is listed first, and many views only show the first line.

 

As far as I can tell, there is no parameter in the ESXi 6.7 bnxtnet driver to disable LLDP the way we could on the Intel X710:

] esxcli system module parameters list -m bnxtnet
Name                          Type          Value  Description
----------------------------  ------------  -----  --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
DRSS                          array of int         Number of RSS Queues to create on the Default Queue RSS pool. [Default: 4, Max: 16]
RSS                           array of int         Number of RSS queues to create in netqueue RSS pool. [Default: 4, Max: 16]
async_cmd_cmpl_timeout        uint                 For debug purposes, currently set to 10000 msec's, user can increase it. [Default: 10000, Min: 10000]
debug                         uint                 Debug msglevel: Default is 0 for Release builds
disable_dcb                   bool                 Disable the DCB support. 0: enable DCB support, 1: disable DCB support. [Default: 1]
disable_fwdmp                 bool                 For debug purposes, disable firmware dump feature when set to value of 1. [Default: 0]
disable_geneve_filter         bool                 For debug purposes, disable Geneve filter support feature when set to value of 1. [Default: 0]
disable_geneve_oam_support    bool                 For debug purposes, disable Geneve OAM frame support feature when set to value of 1. [Default: 1]
disable_q_feat_pair           bool                 For debug purposes, disable queue pairing feature when set to value of 1. [Default: 0]
disable_q_feat_preempt        bool                 For debug purposes, disable FEAT_PREEMPTIBLE when set to value of 1. [Default: 0]
disable_roce                  bool                 Disable the RoCE support. 0: Enable RoCE support, 1: Disable RoCE support. [Default: 1]
disable_shared_rings          bool                 Disable sharing of Tx and Rx rings support. 0: Enable sharing, 1: Disable sharing. [Default: 0]
disable_tpa                   bool                 Disable the TPA(LRO) feature. 0: enable TPA, 1: disable TPA. [Default: 0]
disable_vxlan_filter          bool                 For debug purposes, disable VXLAN filter support feature when set to value of 1. [Default: 0]
enable_default_queue_filters  int                  Allow filters on the default queue. -1: auto, 0: disallow, 1: allow. [Default: -1, which enables the feature when NPAR mode and/or VFs are enabled, and disables if otherwise]
enable_dr_asserts             bool                 For debug purposes, set to 1 to enable driver assert on failure paths, set to 0 to disable driver asserts. [Default: 0]
enable_geneve_ofld            bool                 Enable Geneve TSO/CSO offload support. 0: disable Geneve offload, 1: enable Geneve offload. [Default: 1]
enable_host_dcbd              bool                 Enable host DCBX agent. 0: disable host DCBX agent, 1: enable host DCBX agent. [Default: 0]
enable_r_writes               bool                 For debug purposes, set to 1 to enable r writes, set to 0 to disable r writes. [Default: 0]
enable_vxlan_ofld             bool                 Enable VXLAN TSO/CSO offload support. 0: disable, 1: enable. [Default: 1]
force_hwq                     array of int         Max number of hardware queues: -1: auto-configured, 1: single queue, 2..N: enable this many hardware queues. [Default: -1]
int_mode                      uint                 Force interrupt mode. 0: MSIX; 1: INT#x. [Default: 0]
max_vfs                       array of int         Number of Virtual Functions: 0: disable, N: enable this many VFs. [Default: 0]
multi_rx_filters              int                  Define the number of RX filters per NetQueue: -1: use the default number of RX filters, 0,1: disable use of multiple RX filters, so single filter per queue, 2..N: force the number of RX filters to use for a NetQueue. [Default: -1]
psod_on_tx_tmo                bool                 For debug purposes, set to 1 to force PSOD on tx timeout, set to 0 to disable PSOD on tx timeout. [Default: 0]
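For comparison, on the Intel X710 the hardware LLDP agent could be turned off per port with a module parameter of the i40en driver, which has no counterpart in the bnxtnet parameter list above. A hedged sketch; the parameter name is from memory of the i40en driver and should be verified with `esxcli system module parameters list -m i40en` first:

```shell
esxcli system module parameters set -m i40en -p LLDP=0,0   # one value per port; 0 disables HW LLDP
reboot                                                     # module parameters apply at driver load
```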

 

NIC firmware and driver versions:

] esxcli network nic get -n vmnic4
   Advertised Auto Negotiation: true
   Advertised Link Modes: 1000BaseCR1/Full, 25000BaseCR1/Full, Auto
   Auto Negotiation: true
   Cable Type: DA
   Current Message Level: 0
   Driver Info:
         Bus Info: 0000:a1:00:0
         Driver: bnxtnet
         Firmware Version: 214.0.253.1
         Version: 214.0.230.0

 

Anyone seen this issue before?

 

Lars

Network Adapters and VLAN (optional) greyed out in DCUI


I am studying for my VCP exam, using a nested setup in VMware Workstation. Suddenly I don't have access to the network adapters in the DCUI. I have migrated both NICs used for management to a VMkernel management port group on a distributed switch.

Does anyone know what is going on here? Google gave me this hint: "Configure Management Network Greyed out in DCUI".

But in my case the hosts are still connected to vCenter and are working fine, with no issues anywhere.

 

2015-02-11 21_42_46-vlabdkcphesxi01 - VMware Workstation.png

CVE-2018-3646 in ESXi 6.7.0 Update 3 (Build 15160138)


Hey there

 

I'm setting up my first private ESXi host for testing purposes.

 

I have installed ESXi 6.7.0 U3 (Build 15160138) from a custom ISO (Realtek driver).

I'm now getting this message:

and I am honestly not sure what I have to do now. I've read through the article, but I have no clue how to update, or whether I even have to. (6.7.0 U3 is pretty new compared to the CVE; isn't the fix already integrated in this version?)
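For context: the L1TF (CVE-2018-3646) patches are included in 6.7 U3, but the scheduler-level part of the mitigation is off by default, so the host keeps warning until an admin either enables it or acknowledges the warning. A hedged sketch of both options from the ESXi shell (both also exist as advanced settings in the UI):

```shell
# Option 1: enable the side-channel-aware scheduler (costs some hyper-threading
# performance on shared hosts), then reboot the host:
esxcli system settings kernel set -s hyperthreadingMitigation -v TRUE

# Option 2: leave the scheduler as-is and just suppress the warning:
esxcli system settings advanced set -o /UserVars/SuppressHyperthreadWarning -i 1
```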

 

I would be happy about some help.

vsphere 6.7 LSI raid card, can't get drive status


Hi!

 

I have this server setup at home:

 

SuperMicro X10SLL-F

Intel Xeon E3-1241 v3

32GB ECC 1600

LSI 8708EM2

2x Velociraptor 10k 300gb RAID 1

4x WD Red 2TB Raid 5

 

I had been running vSphere 6.0 for a long time and decided to move to 6.7. The installation worked fine, I have access to everything, and all is working, except for two things:

 

 

1. I can't get my LSI 8708EM2 (detected as a 1078) to show drive status.

2. I can't get my Storage Manager to access disk info or rebuild the RAID.

 

When I was on vSphere 6.0 I was able to see my drive status (and Veeam ONE could alert me), and from my Windows VM I had LSI MSM and was able to configure the RAID card.

 

So what I did:

 

Installed the SAS driver I was using in version 6.0 --> not working

Installed the SMIS provider I had --> not working

Installed the latest SMIS provider --> not working

Disabled the firewall --> not working

 

 

I have no idea what else I can do. I know it's an old card, but I got it cheap. Any idea how I can get this fixed?
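Since drive status stopped appearing after the 6.7 upgrade (a symptom of CIM trouble rather than of the card itself), one avenue is making sure the CIM/WBEM service is enabled and restarted after the SMIS provider is installed, since 6.5+ disables it by default. A hedged sketch for ESXi 6.7:

```shell
esxcli system wbem set --enable true      # enable the CIM service (sfcbd)
/etc/init.d/sfcbd-watchdog restart        # restart it so the newly installed provider loads
esxcli system wbem provider list          # if available, confirms the LSI provider is registered
```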

 

 

Thanks

Issues w/ 6.7 and windows megaraid LSA


Bear with me; this is my first day with this software, as I normally work with Hyper-V. For the life of me, I cannot get the AVAGO LSI Storage Authority to log into my host to display info. I get the following: "Error Code: 65537: Logon failure: unknown user name or bad password". I've confirmed that the credentials used to log into the host are correct.

I did get it to work once, oddly, and it hasn't worked since. What I do notice in ESXi, under Host | Monitor | Hardware | Storage, is that everything but memory shows as Unknown.

 

Hardware:  Lenovo Thinkserver TS460

RAID controller: embedded MR9340-8i

 

Things I have tried/done:

  • Followed instructions and added a hosts-file entry to the guest OS. Both the VMware host and the guest OS can ping each other's host names
  • Disabled the firewall on the VMware host per Lenovo's driver install instructions and confirmed it is disabled
  • Ensured that LSA detects the VMware host
  • Ensured the slpd and sfcbd-watchdog services were running; restarted both as a diagnostic step
  • Ran enum_instances cim_system lsi/lsimr13 and got the following, so I assume it is at least seeing it:

 

LSIESG_MegaRAIDHBA.CreationClassName="LSIESG_MegaRAIDHBA",Name="500605B00DD3F6E0"\

Name = 500605B00DD3F6E0

CreationClassName = LSIESG_MegaRAIDHBA

 

What is installed:

VMWARE

  • LSI Provider: LSI Provider for ESX Server 500.04.V0.71-0004 (from the Broadcom site)
  • RAID driver: Version 7.705.10.00-1OEM.670.0.0.8169922 (from the Lenovo site)

 

GUEST OS (2019 Server)

  • LSI Storage Authority 004.184.000.000
  • MegaRAID 7.7

 

Any input would be greatly appreciated.

 

Dave
