Channel: VMware Communities : Discussion List - ESXi

ESXi 6.5: System disk RAID1 failure = unstable ESXi performance


As summarized in the title, I have an Adaptec 6805T with a RAID1 system disk (2x 256GB SSD) on which I'm running a standalone ESXi 6.5.

 

The story goes:

One of the VMs started behaving strangely last week (high memory usage, a filesystem check on every second reboot, slow response). I initially focused on fixing the VM itself, but after a while I noticed similar problems on a second VM, so I started investigating ESXi instead.

 

Without detailing all the steps: I discovered an issue with one of the two disks in the system RAID1. I rebooted the host, entered the controller BIOS, and noticed a "Rebuild" action was in progress.

Using the controller interface I then checked the health of both disks: one of the two failed to complete the check, stopping with an error.

 

So OK: a broken disk, not a big deal... but can anybody explain why ESXi is suffering from this? Isn't the controller supposed to exclude the failing disk and serve I/O from the healthy one? Why the performance degradation?

 

Also (more of an Adaptec question, to be fair): assuming the disk is not completely dead, since I can still see it, does anybody know how I can exclude it from the RAID1 and force a DEGRADED array until I get replacement hardware? Unplugging it would be an option, but I don't have physical access to the box until I go on site (in 10 days), so I only have IPMI access.
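From what I've read, Adaptec's arcconf CLI (installable on ESXi as an offline-bundle VIB) might let me do this remotely, though I haven't tried it yet. The controller number and the channel/device pair below are placeholders I'd need to confirm first, so treat this as a sketch, not tested commands:

```shell
# Sketch only: assumes arcconf is installed on the ESXi host and the
# controller is #1 -- the install path varies by package version.

# List physical devices and note the Channel,Device pair of the failing disk
arcconf GETCONFIG 1 PD

# Force the suspect disk offline (DDD = defunct), leaving the RAID1
# running degraded on the healthy member until the replacement arrives
# (0 1 here is a placeholder Channel/Device pair)
arcconf SETSTATE 1 DEVICE 0 1 DDD
```

If anyone has actually done this on a 6805T, I'd appreciate confirmation before I knock a disk out of my only system array over IPMI.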

 

Thanks!


