Channel: VMware Communities : Discussion List - ESXi

Recover deleted vmdk files?


I accidentally deleted four vmdk files that I absolutely need to recover.  What are my options?


Repair corrupt flat.vmdk files?


Is there a way to repair corrupt flat.vmdk files?

The files cannot be opened; I always get the error message "invalid arguments".
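
I was considering the consistency check below (the path is just an example), though I'm not sure whether it can actually repair this, or whether "invalid arguments" simply means the descriptor .vmdk is missing rather than the flat file itself being corrupt:

# Check (and optionally repair) the disk's VMFS-level metadata consistency;
# as far as I know this does not fix arbitrary corruption inside the flat file.
vmkfstools -x check /vmfs/volumes/datastore1/myvm/myvm.vmdk
vmkfstools -x repair /vmfs/volumes/datastore1/myvm/myvm.vmdk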

Account Activation Incorrect Password


Hello all,

I just signed up and I am sure about my password.

When I try to activate my account, it keeps giving me INCORRECT PASSWORD.

Please help.

ESXi 6.5 U2 - make Wi-Fi/Bluetooth adapter available to VM and guest OS?


Hi,

How do I make the Wi-Fi/Bluetooth adapter available to a VM and its guest OS?

 

I have ESXi 6.5 U2 installed bare-metal on a motherboard with:

  • Intel Gigabit Ethernet (Intel® I219V)
  • Integrated 2x2 802.11ac Wi-Fi with MU-MIMO support.

 

In ESXi 6.5, I can see the physical adapter "vmnic0, 1000 Mbps, Full".

In the guest OS, when I add a network adapter, I can see these network adapter types (depending on the OS):

  • E1000
  • E1000e
  • SR-IOV passthrough
  • VMXNET 3
  • VMXNET 2 (enhanced)

 

In the guest OS, I can connect to the internet without a problem, but no matter which OS I boot, it always tells me that Wi-Fi and Bluetooth are UNAVAILABLE.

 

Why is that? Is the problem:

 

(1) the BIOS? (I have the latest BIOS, but I am not sure I ever explicitly enabled Wi-Fi/Bluetooth)

(2) ESXi 6.5 U2?

(3) the guest OS?
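
For what it's worth, here is how I have been checking from the ESXi shell whether ESXi even sees the Wi-Fi/Bluetooth hardware (my understanding is that ESXi ships no Wi-Fi or Bluetooth driver stack, so passthrough would be the only route; the commands are just what I pieced together):

# The Wi-Fi chip should show up here even if ESXi has no driver for it:
lspci | grep -i network
esxcli hardware pci list

# Bluetooth is usually a USB device; check whether it can be passed through
# (if this namespace is supported by your build):
esxcli hardware usb passthrough device list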

 

Thank you!

vSphere Web Client is so bad that my experience managing and supporting VMware has turned to &^#$%@


The purpose of this post is simple and obvious: bring back development of the thick client. THANKS!

Cannot connect to ESXi server through vSphere Client and web


Hello,

We created a new user ID on the ESXi server and gave it read-only permission, but we are facing the error below when trying to log in through the vSphere Client and the web UI.

[Screenshot attached: Vmware login.JPG]

The operation is not allowed in the current state. Can't create snapshot.

I recently upgraded my Dell T620 from 5.5 to 6.0 to 6.7. Now I can't take any snapshots. I tried restarting the services with services.sh, but I still can't. Error message:

Create Snapshot

Key: haTask-1-vim.VirtualMachine.createSnapshot-3243076288
Description: Create a new snapshot of this virtual machine
Virtual machine: DC01
State: Failed - The operation is not allowed in the current state.
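
I'm also planning to reproduce it from the ESXi shell with something like the following (the snapshot name and description are just placeholders):

# Find the VM's ID, then inspect snapshot state and create one from the shell:
vim-cmd vmsvc/getallvms | grep DC01
vim-cmd vmsvc/get.snapshotinfo <vmid>
# Usage: snapshot.create <vmid> <name> <description> <includeMemory 0|1> <quiesced 0|1>
vim-cmd vmsvc/snapshot.create <vmid> test-snap "testing" 0 0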

Hypervisor protection


Hi,

What are the solutions for hypervisor protection (malware- and virus-wise)?

What about TrendMicro DeepSecurity product for ESXi?

 

What else are people using for virtual infrastructure protection?

We currently have OfficeScan in place on each VM, but I have heard a lot about Deep Security's efficiency.

 

Thx.


HP FlexFabric 20Gb 2-port 650FLB - Gen9 networking inconsistency


I have come across an interesting issue with a new HPE platform. The system is running within a C7000 BladeSystem, with BL460c Gen9 blades.

 

We have noticed some performance degradation on the iSCSI connection (using the software iSCSI initiator). This traffic runs over vmnic1 and vmnic2; details from the NIC list are below.

 

vmnic1  0000:06:00.1  elxnet  Up  Up  10000  Full  32:a6:05:e0:00:be  1500  Emulex Corporation HPE FlexFabric 20Gb 2-port 650FLB Adapter
vmnic2  0000:06:00.2  elxnet  Up  Up  10000  Full  32:a6:05:e0:00:bd  1500  Emulex Corporation HPE FlexFabric 20Gb 2-port 650FLB Adapter

 

Each NIC reports 10000 Mb full; however, I am not able to set the speed on the ESXi host. vmnic1 reports the following advertised link modes:

 

[root@ESX:~] esxcli network nic get -n vmnic1

   Advertised Auto Negotiation: true

   Advertised Link Modes: 1000BaseKR2/Full, 10000BaseKR2/Full, 20000BaseKR2/Full, Auto

   Auto Negotiation: true

 

Whereas vmnic2 reports the following modes:

 

[root@ESXi2b-14:~] esxcli network nic get -n vmnic2

   Advertised Auto Negotiation: false

   Advertised Link Modes: 20000None/Full

   Auto Negotiation: false

 

This is confusing, as the settings for these are identical within OneView. Both NICs are using firmware 12.0.1110.11 from SPP 2018.06.0. The HPE ESXi image has been used, including driver version 12.0.1115.0, which shows as compatible in the VMware Compatibility Guide (I/O Device Search).

 

Has anyone else seen this issue? If I try to manually set the speed/duplex settings via esxcli, it fails with the following error in the vmkernel.log:

 

2018-08-14T23:49:41.361Z cpu20:65677)WARNING: elxnet: elxnet_linkStatusSet:7471: [vmnic2] Device is not privileged to do speed changes

 

As a result, when using HCIBench to test storage throughput, the 95th-percentile latency reads excessively high when traversing vmnic2: 95%tile_LAT = 3111.7403 ms.
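
For reference, these are the esxcli commands involved; the set attempts at the end are what trigger the vmkernel warning above:

# Compare the two uplinks:
esxcli network nic list
esxcli network nic get -n vmnic1
esxcli network nic get -n vmnic2

# Both forcing speed/duplex and restoring auto-negotiation fail on vmnic2:
esxcli network nic set -n vmnic2 -S 10000 -D full
esxcli network nic set -n vmnic2 -a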

 

Any thoughts?

Reclaim


I have vCenter 5.5, 5.5 ESXi hosts, and VNX 5400 storage. On the VNX 5400, the LUN is thick, but the virtual machine disks on the vCenter datastores are thin. I will delete some virtual servers (thin disks), so I should run the unmap command to reclaim space. However, the VNX 5400 LUNs were created as thick, and I think the storage side must be thin for reclaim to work. Do I need to run unmap if the storage LUN is thick?
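
The command I was planning to run is something like this (the datastore name is just an example):

# Reclaim dead space on a VMFS datastore (available since ESXi 5.5):
esxcli storage vmfs unmap -l MyDatastore
# Optionally control how many blocks are reclaimed per iteration:
esxcli storage vmfs unmap -l MyDatastore -n 200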

 

Thanks

How do I disable IPv6 on my iSCSI adapter?


I noticed that my hosts are no longer in line with my host profile.  It has probably been this way for a while.  All of them show that IPv6 is enabled for the iSCSI VMHBAs.

[Screenshot attached: vmhba-ipv6-error.png]

 

I am not sure why this would be a new thing, but I tried to remediate against the host profile and got an error that it could not disable this setting.

[Screenshot attached: cant-disable-ipv6-2.png]

 

I went into the properties of the adapter and attempted to manually turn this off, but again was denied.

[Screenshot attached: cant-disable-ipv6.png]

 

Any idea why I can't disable IPv6 on this adapter? Perhaps it is a limitation of the associated Emulex driver? Or maybe it can be disabled from the command line? I can always just tell the host profile to ignore this setting, but I thought it was weird that I could not disable it.
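
The only command-line approach I have found so far disables IPv6 for the whole host rather than per vmhba (and requires a reboot), so I'm hesitant to use it; sharing it anyway in case I'm missing a narrower option:

# Check current per-interface IPv6 state:
esxcli network ip interface ipv6 get

# Disable IPv6 host-wide (takes effect after a reboot):
esxcli network ip set --ipv6-enabled=false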

 

Thanks for any input.

Add to inventory greyed out - no .lck file?


I have a VM that, following an ungraceful ESX power-off, can no longer be added to the inventory.

 

It no longer shows in the inventory, but I can browse the datastore where it is saved.

 

There is no .lck file, suggesting the .vmx is not locked, and I don't believe any snapshots of this server were in place either.

 

How can I bring the vm back into my environment?
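
Is re-registering it from the ESXi shell the right approach? This is what I was going to try (the path is just an example):

# Register the VM directly from its datastore path:
vim-cmd solo/registervm /vmfs/volumes/<datastore>/<vmname>/<vmname>.vmx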

bnxtnet fails to load on 3 of 4 identical servers


I have four brand-new, identical Dell PowerEdge R730s with BCM57406 10G NIC adapters (on the 6.7 U1 HCL):

 

Model: BCM57406
Device Type: Network
Brand Name: DELL
Number of Ports: 2
DID: 16d2
SVID: 14e4
SSID: 4060
VID: 14e4

 

One of the four servers loads the bnxtnet driver and activates the NIC just fine. The other three will not, and I am stumped. I have checked all the BIOS/NIC settings; the firmware is identical, the PCI slots are identical, and ESXi 6.7 U1 is loaded identically... and yet I cannot get three of them past this error.

 

vmkernel.log from the server that works:

 

2019-01-31T12:19:13.436Z cpu1:2097664)Loading module bnxtnet ...

2019-01-31T12:19:13.437Z cpu1:2097664)Elf: 2101: module bnxtnet has license BSD

2019-01-31T12:19:13.441Z cpu1:2097664)Device: 192: Registered driver 'bnxtnet' from 22

2019-01-31T12:19:13.441Z cpu1:2097664)Mod: 4962: Initialization of bnxtnet succeeded with module ID 22.

2019-01-31T12:19:13.441Z cpu1:2097664)bnxtnet loaded successfully.

2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_initialize_devname:61: [0000:06:00.0 : 0x4309fd3bfe10] PCI device 16d2:14e4:4060:14e4 detected

2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_dev_probe:1275: [0000:06:00.0 : 0x4309fd3bfe10] Starting Cumulus device probe

2019-01-31T12:19:13.442Z cpu6:2097620)DMA: 679: DMA Engine 'cumulus-0000:06:00.0' created using mapper 'DMANull'.

2019-01-31T12:19:13.442Z cpu6:2097620)DMA: 679: DMA Engine 'cumulus-co-0000:06:00.0' created using mapper 'DMANull'.

2019-01-31T12:19:13.442Z cpu6:2097620)VMK_PCI: 914: device 0000:06:00.0 pciBar 0 bus_addr 0x91c20000 size 0x10000

2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:06:00.0 : 0x4309fd3bfe10] mapped pci bar 0 at vaddr  0x450196a40000

2019-01-31T12:19:13.442Z cpu6:2097620)VMK_PCI: 914: device 0000:06:00.0 pciBar 2 bus_addr 0x91c30000 size 0x10000

2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:06:00.0 : 0x4309fd3bfe10] mapped pci bar 2 at vaddr  0x450196a60000

2019-01-31T12:19:13.442Z cpu6:2097620)VMK_PCI: 914: device 0000:06:00.0 pciBar 4 bus_addr 0x91dc2000 size 0x2000

2019-01-31T12:19:13.442Z cpu6:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:06:00.0 : 0x4309fd3bfe10] mapped pci bar 4 at vaddr  0x450196468000

2019-01-31T12:19:13.443Z cpu6:2097620)bnxtnet: dev_init_device_info:1113: [0000:06:00.0 : 0x4309fd3bfe10] PHY is AutoGrEEEn capable

2019-01-31T12:19:13.479Z cpu6:2097620)WARNING: bnxtnet: bnxtnet_alloc_mem_probe:933: [0000:06:00.0 : 0x4309fd3bfe10] Disable VXLAN/Geneve RX filter due to firmware bug. Refer to VMware Compatibilit

2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_alloc_intr_resources:899: [0000:06:00.0 : 0x4309fd3bfe10] The intr type set to MSIX

2019-01-31T12:19:13.479Z cpu6:2097620)VMK_PCI: 764: device 0000:06:00.0 allocated 16 MSIX interrupts

2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_dev_probe:1352: [0000:06:00.0 : 0x4309fd3bfe10] Interrupt mode: MSIX, max fastpaths: 16 max roce irqs: 0

2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_dev_probe:1358: [0000:06:00.0 : 0x4309fd3bfe10] Ending successfully cumulus device probe

2019-01-31T12:19:13.479Z cpu6:2097620)bnxtnet: bnxtnet_attach_device:235: [0000:06:00.0 : 0x4309fd3bfe10] Driver successfully attached cumulus device (0x2d544305d9cc7d46) with Chip ID=0x16D2 Rev/Me

2019-01-31T12:19:13.480Z cpu6:2097620)Device: 327: Found driver bnxtnet for device 0x2d544305d9cc7d46

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097666 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097667 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097668 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097669 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097670 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097671 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097672 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097673 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097674 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097675 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097676 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097677 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097678 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097679 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097680 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)CpuSched: 697: user latency of 2097681 netpoll-backup 0 changed by 2097620 vmkdevmgr -6

2019-01-31T12:19:13.480Z cpu6:2097620)bnxtnet: bnxtnet_start_device:389: [0000:06:00.0 : 0x4309fd3bfe10] Driver successfully started cumulus device (0x2d544305d9cc7d46)

2019-01-31T12:19:13.480Z cpu6:2097620)Device: 1466: Registered device: 0x4305d9cc0070 pci#s00000005.00#0 com.vmware.uplink (parent=0x2d544305d9cc7d46)

2019-01-31T12:19:13.480Z cpu6:2097620)bnxtnet: bnxtnet_scan_device:559: [0000:06:00.0 : 0x4309fd3bfe10] Successfully registered uplink device

 

 

 

vmkernel.log from the other three servers that don't work:

 

2019-01-31T12:18:56.545Z cpu4:2097664)Loading module bnxtnet ...

2019-01-31T12:18:56.546Z cpu4:2097664)Elf: 2101: module bnxtnet has license BSD

2019-01-31T12:18:56.550Z cpu4:2097664)Device: 192: Registered driver 'bnxtnet' from 22

2019-01-31T12:18:56.550Z cpu4:2097664)Mod: 4962: Initialization of bnxtnet succeeded with module ID 22.

2019-01-31T12:18:56.550Z cpu4:2097664)bnxtnet loaded successfully.

2019-01-31T12:18:56.551Z cpu7:2097620)bnxtnet: bnxtnet_initialize_devname:61: [0000:05:00.0 : 0x4309fd3bfe10] PCI device 16d2:14e4:4060:14e4 detected

2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_dev_probe:1275: [0000:05:00.0 : 0x4309fd3bfe10] Starting Cumulus device probe

2019-01-31T12:18:56.552Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-0000:05:00.0' created using mapper 'DMANull'.

2019-01-31T12:18:56.552Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-co-0000:05:00.0' created using mapper 'DMANull'.

2019-01-31T12:18:56.552Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.0 pciBar 0 bus_addr 0x91c20000 size 0x10000

2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.0 : 0x4309fd3bfe10] mapped pci bar 0 at vaddr  0x450196540000

2019-01-31T12:18:56.552Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.0 pciBar 2 bus_addr 0x91c30000 size 0x10000

2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.0 : 0x4309fd3bfe10] mapped pci bar 2 at vaddr  0x450196560000

2019-01-31T12:18:56.552Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.0 pciBar 4 bus_addr 0x91c42000 size 0x2000

2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.0 : 0x4309fd3bfe10] mapped pci bar 4 at vaddr  0x450196468000

2019-01-31T12:18:56.552Z cpu7:2097620)bnxtnet: dev_init_device_info:1113: [0000:05:00.0 : 0x4309fd3bfe10] PHY is AutoGrEEEn capable

2019-01-31T12:18:58.068Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.0 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x11(HWRM_FUNC_RESET) seq 5

2019-01-31T12:18:59.583Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.0 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x11(HWRM_FUNC_RESET) seq 6

2019-01-31T12:18:59.583Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-0000:05:00.0' destroyed.

2019-01-31T12:18:59.583Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-co-0000:05:00.0' destroyed.

2019-01-31T12:18:59.583Z cpu7:2097620)WARNING: bnxtnet: bnxtnet_attach_device:208: [0000:05:00.0 : 0x4309fd3bfe10] failed to find cumulus device (status: Failure)

2019-01-31T12:18:59.583Z cpu7:2097620)Device: 2628: Module 22 did not claim device 0x1bd34305d9cc7d46.

2019-01-31T12:18:59.584Z cpu7:2097620)bnxtnet: bnxtnet_initialize_devname:61: [0000:05:00.1 : 0x4309fd3bfe10] PCI device 16d2:14e4:4060:14e4 detected

2019-01-31T12:18:59.584Z cpu7:2097620)bnxtnet: bnxtnet_dev_probe:1275: [0000:05:00.1 : 0x4309fd3bfe10] Starting Cumulus device probe

2019-01-31T12:18:59.585Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-0000:05:00.1' created using mapper 'DMANull'.

2019-01-31T12:18:59.585Z cpu7:2097620)DMA: 679: DMA Engine 'cumulus-co-0000:05:00.1' created using mapper 'DMANull'.

2019-01-31T12:18:59.585Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.1 pciBar 0 bus_addr 0x91c00000 size 0x10000

2019-01-31T12:18:59.585Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.1 : 0x4309fd3bfe10] mapped pci bar 0 at vaddr  0x450196500000

2019-01-31T12:18:59.585Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.1 pciBar 2 bus_addr 0x91c10000 size 0x10000

2019-01-31T12:18:59.585Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.1 : 0x4309fd3bfe10] mapped pci bar 2 at vaddr  0x450196520000

2019-01-31T12:18:59.585Z cpu7:2097620)VMK_PCI: 914: device 0000:05:00.1 pciBar 4 bus_addr 0x91c40000 size 0x2000

2019-01-31T12:18:59.585Z cpu7:2097620)bnxtnet: bnxtnet_map_pci_mem:784: [0000:05:00.1 : 0x4309fd3bfe10] mapped pci bar 4 at vaddr  0x45019469c000

2019-01-31T12:19:00.090Z cpu7:2097620)WARNING: bnxtnet: hwrm_send_msg:168: [0000:05:00.1 : 0x4309fd3bfe10] HWRM cmd resp_len timeout, cmd_type 0x0(HWRM_VER_GET) seq 0

2019-01-31T12:19:00.090Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-0000:05:00.1' destroyed.

2019-01-31T12:19:00.090Z cpu7:2097620)DMA: 724: DMA Engine 'cumulus-co-0000:05:00.1' destroyed.

2019-01-31T12:19:00.090Z cpu7:2097620)WARNING: bnxtnet: bnxtnet_attach_device:208: [0000:05:00.1 : 0x4309fd3bfe10] failed to find cumulus device (status: Failure)

2019-01-31T12:19:00.090Z cpu7:2097620)Device: 2628: Module 22 did not claim device 0x602e4305d9cc7eef.

 

 

The server with the working NIC is actually running the older driver:

 

bnxtnet                        20.6.101.7-11vmw.670.0.0.8169922      VMW     VMwareCertified   2019-01-16

bnxtroce                       20.6.101.0-20vmw.670.1.28.10302608    VMW     VMwareCertified   2019-01-16

 

 

But I have tried both the older and the newest version on the other three:

 

bnxtnet                        212.0.119.0-1OEM.670.0.0.8169922      BCM                    VMwareCertified   2019-01-31

bnxtroce                       212.0.114.0-1OEM.670.0.0.8169922      BCM                    VMwareCertified   2019-01-31

 

 

I have swapped NICs between the servers and the results are the same: the server with the working NIC works with any of the cards, and the other three servers won't work with any of them, so the physical NICs are fine.

 

I don't know if this is a VMware or a Dell issue.

 

Any ideas/thoughts on possible issues or other things to try? My next step is to swap the Dell PCI riser and see whether that might somehow be the issue.
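
For completeness, this is how I have been swapping driver versions on the failing hosts (the VIB path is just an example of where I staged the Broadcom async driver):

# Check which bnxt VIBs are installed:
esxcli software vib list | grep -i bnxt

# Remove the inbox driver and install the async one, then reboot:
esxcli software vib remove -n bnxtnet
esxcli software vib install -v /vmfs/volumes/datastore1/BCM-bnxtnet-212.0.119.0.vib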


ESXi 6.7U1 standalone - resource pool view missing


Hi

 

I've just upgraded a standalone ESXi host from 6.0 to 6.7 U1, as I thought the HTML5 client now had full feature parity with the C# fat client.

 

But I now cannot see the resource pools that I had set up on 6.0 using the C# client.

 

I've searched online for info, but all the documentation seems to assume vCenter, which I don't use. Has this feature been removed in the free version, or am I not looking in the right place?

Need to Update Generic ESXi ISO with Custom HP ISO


Hi,

 

I have installed a generic ISO (officially downloaded from the vmware.com main download section) on my ProLiant DL380 Gen10. Please find the details below:

 

[Screenshot attached: Capture.PNG - current ESXi version details]

 

I recently learned that there is a 'better', more compatible version of VMware ESXi out there specifically made for HPE servers, available as a custom HPE image: https://my.vmware.com/group/vmware/details?downloadGroup=OEM-ESXI67U1-HPE&productId=742

 

Now I would love to have that version on my servers. Please help with the exact update process I need to follow. This is the first time I've ever deployed ESXi, so sorry for the noob question. Also, can I even update in the first place? I would like to believe so, since I'm running 6.7 whereas the HPE custom image is 6.7 U1. Also, there is no vCenter attached.
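
From my reading, an in-place move to the HPE image can be done with the HPE offline bundle via esxcli; the depot file name and profile below are just examples of what I've seen, so please correct me if this is wrong:

# List the image profiles inside the HPE offline bundle:
esxcli software sources profile list -d /vmfs/volumes/datastore1/VMware-ESXi-6.7.0-Update1-HPE-depot.zip

# Update to the chosen profile (host in maintenance mode, then reboot):
esxcli software profile update -d /vmfs/volumes/datastore1/VMware-ESXi-6.7.0-Update1-HPE-depot.zip -p <profile-name>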

 

Regards,

John


HomeLab with VMware ESXi and GPU passthrough


Hi there,
This is my first trip to this forum, and I am quite new to VMware ESXi.
What I am trying to do is build a lab computer with one guest OS with GPU passthrough, but I have some issues and was hoping for some help.

 

My build is as follows:

CPU: Intel® Core™ i7-6700K Processor (8M Cache, up to 4.20 GHz)

Motherboard: Asus Z170-A (https://www.asus.com/no/Motherboards/Z170-A/)

GPU: MSI GeForce GTX 1060 GAMING X 6G

 

From what I have been able to dig up, this setup should be compatible with IOMMU and VT-x/VT-d. I have read that passing through a consumer card like the 1060 can be difficult due to NVIDIA blocking this for their consumer cards, but also that it should be possible to trick the driver by editing the VM's configuration with hypervisor.cpuid.v0 = "FALSE".

 

After a lot of trying and failing, I have managed to get the GPU visible in the guest OS, but I only get error 43. I have tried upgrading the BIOS and the guest OS and installing the latest driver. I also tried both ESXi 5.5 and 6.7. On the last test, however, I was not able to enable IOMMU on the CPU, and got an error saying there was something wrong or the hardware was not supported.
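
For reference, this is the exact line I added to the VM's .vmx file for the error 43 workaround; as I understand it, it hides the hypervisor from the guest so the GeForce driver does not refuse to initialize:

hypervisor.cpuid.v0 = "FALSE"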

 

Anyone got any suggestions, or is this not possible for this setup?

ESXi 6.7 and CUCM 12 - sorry if this is the wrong post area, please point me in the right direction


Here is my problem. I have an HP ProLiant DL360 G7 server with an Intel Xeon E5640 CPU at 2.67 GHz and 32 GB of memory for now. I created a virtual switch and a port group tied to it. Then I created a Cisco CSR1000v virtual router, and after that I created a virtual machine using an ISO file set up for CUCM 12.

Here comes my problem: after the install, I let the background tasks finish and then go to the IP address of the CUCM server. When I enter the IPv4 address, nothing happens, but I can ping the CUCM server and it pings back, and anything on my lab network pings back as well using its IPv4 address. As I mentioned, nothing happens with the IPv4 address, but when I use the IPv6 address, the CUCM page comes up. I'm totally at a loss as to what I could be doing wrong. I have searched the internet and YouTube, but I'm still lost. Any help in the right direction I would gladly take. Thanks!

Is it possible to install ESXi 6.7 U1 on an Intel P4800X?

$
0
0

Hello,

We will purchase a Supermicro 6029P-TRT server; however, I hope to install ESXi 6.7 U1 itself on an Intel P4800X.

I haven't found any article about this. Could anyone share some experience?

 

Thanks all

Purple screen PCPU no Heartbeat


Hello all,

 

After stopping a VM (not the ESX host itself), I received a purple screen with the error "PCPU no heartbeat". I have attached a screenshot of the error.

Can someone please help analyze this issue?

 

Thanks all

Expand Clustered VMDK


I have a 6.5 vCenter with 6.5 ESXi hosts, and I have two Windows 2012 SQL cluster VMs. The VMs each have a SCSI controller with sharing set to "virtual". I am not using an RDM; I'm using a VMDK file as the shared disk, with eager-zeroed VMDKs for the MS cluster. I want to extend the disk to 250 GB. Which command should I use? I've seen too many different commands on the internet:

 

vmkfstools -X 250G /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk
or
vmkfstools -X 250G -d eagerzeroedthick /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk

 

Which one should I use? Do I need to install any tool to get vmkfstools?
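
From what I've read, vmkfstools is built into ESXi (nothing extra to install), and the -d flag only controls the format of the newly added space, which I assume matters for a shared cluster disk that must stay eager-zeroed, so I was leaning towards this (with the cluster VMs powered off):

# Extend to 250 GB, keeping the newly added space eager-zeroed:
vmkfstools -X 250G -d eagerzeroedthick /vmfs/volumes/cs-ee-symmlun-001A/cormac.vmdk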
