Among the servers was an ESXi host with an attached HP StorageWorks MSA60.
When we logged into the vSphere client, we noticed that none of our guest VMs were available (they were all listed as "inaccessible"). When we went through the hardware status in vSphere, the array controller and all connected drives showed as "Normal", but the drives all showed up as "unconfigured disk".
We rebooted the host and tried going into the RAID config utility to see what things looked like from there, but we received the following message:
"An invalid drive movement was reported during POST. Modifications to the array setup after an invalid drive movement may result in loss of old configuration information and contents of the original logical drives."
Of course, we are extremely confused by this, because absolutely nothing was "moved"; nothing changed. We simply powered up the MSA and the host, and have been having this issue ever since.
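In case it's useful for diagnosis, here is roughly what can be run from the ESXi shell to confirm what the host actually sees (assuming SSH/ESXi Shell is enabled; this is only a sketch, and device names will differ):

    # List the storage adapters and attached devices as ESXi sees them
    esxcli storage core adapter list
    esxcli storage core device list

    # Show mounted filesystems and any VMFS extents ESXi still knows about
    esxcli storage filesystem list
    esxcli storage vmfs extent list

    # Force a rescan of all HBAs and VMFS volumes
    esxcli storage core adapter rescan --all
    vmkfstools -V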
We have two main questions/concerns:
- Since we did nothing more than power the devices off and back on, what could've caused this to happen? We of course have the option to rebuild the array and start over, but I'm leery about the chance of this happening again (especially since I have no clue what caused it).
- Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?
- Since we did nothing more than power the devices off and back on, what could've caused this to happen?
Any number of things. Do you schedule reboots on your gear? If not, you might want to, if only for this reason. The one host we have, XS decided the array wasn't ready in time and didn't mount the main storage volume on boot. Always good to learn these things ahead of time, right?
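(For what it's worth, a scheduled reboot can be as simple as a cron entry on a Linux host; the schedule below is only an example:)

    # In root's crontab (crontab -e): reboot every Sunday at 03:30
    30 3 * * 0 /sbin/shutdown -r now "weekly scheduled reboot"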
- Is there a snowball's chance in hell that I can recover our array and guest VMs, instead of having to rebuild everything and restore our VM backups?
Maybe, but I've never seen that particular error. We're talking very limited experience here. Depending on which RAID controller the MSA is connected to, you may be able to read the array information from the drives on Linux using the md utilities, but at that point it's faster just to restore from backups.
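If you do try that route, the general shape of it would be something like the following from a Linux box with the drives attached (a sketch only; the device names are examples, and whether the controller's on-disk metadata is readable by md at all is a big if):

    # Check each drive for recognizable RAID metadata (read-only operation)
    mdadm --examine /dev/sdb /dev/sdc /dev/sdd

    # If metadata turns up, assemble read-only so nothing gets written
    mdadm --assemble --readonly --scan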
I actually rebooted this host multiple times about a month ago when I installed updates on it. The reboots went fine. I also completely powered that server down at around the same time because I added more RAM to it. Again, after powering everything back on, the server and RAID array information were all intact.
Does your normal reboot routine for the host include a reboot of the MSA? Could it be that they were powered back on in the wrong order? MSAs are notoriously flaky; most likely that's where the problem is.
I would call HPE support. The MSA is a flaky unit, but HPE support is very good.
We unfortunately don't have a "normal reboot routine" for any of our servers :-/.
I'm not really sure what the proper order is :-S. I would assume that the MSA would get powered on first, then the ESXi host. If this is correct, we have already tried doing that since we first discovered this problem today, and the problem persists :(.
We don't have a support contract on this server or the attached MSA, and they're most likely way out of warranty (ProLiant DL360 G8 and a StorageWorks MSA60), so I'm not sure how much we'd have to spend to get HP to "help" us :-S.
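In the meantime, we can at least dump what the Smart Array controller itself reports. Assuming HP's CLI utilities are installed (the hpssacli path below is typical for the ESXi offline bundle, but it may vary by version):

    # From the ESXi shell, with the HP hpssacli VIB installed
    /opt/hp/hpssacli/bin/hpssacli ctrl all show config detail

    # Or from a Linux rescue environment with hpssacli installed
    hpssacli ctrl all show status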