HPE E208i-p Raid Card


Viewing 15 posts - 16 through 30 (of 46 total)
  • #16955
    kev2021
    Participant
    • Posts 262
    • Regular

    Yes :)

    I’ve decided that if the memory doesn’t arrive, I should still be able to set up the server and just add the extra memory afterwards. I can’t see it making much of a difference, as the extra memory will only be used by the virtual machines anyway; ESXi will have enough for itself with just the 16GB the server comes with. I’ll probably keep 2GB or so for it, but it’s a pretty small OS anyway :)

    Kev

    #16956
    UK Sentinel
    Moderator
    • Posts 4233
    • Skipper

    I’ve decided that if the memory doesn’t arrive, I should still be able to set up the server and just add the extra memory afterwards.

    In a completely sane world, madness is the only freedom (J.G.Ballard).

    #16968
    kev2021
    Participant
    • Posts 262
    • Regular

    Hi all,

    So, the RAID card arrived – yippee!

    The drive caddies arrived – yippee!

    The memory was due to arrive, but that’s where it all went downhill. It was due for delivery and out on the van, but checking this afternoon the status changed to a delivery problem – an address issue – with no option to update the address online, so the only option was to call them up.

    It took ages to get through – probably 20 minutes. When I did, I told the guy and it seems they had the wrong house number. No idea how, as I ordered it from the same company I got the server from, I have an account with them, and I’ve never changed my address, so it’s a bit odd.

    The guy said he would try to sort it out and hopefully I’d get it today – nothing. The latest status, after just checking, is that it’s back at the depot and scheduled for a second delivery on the next working day, so I presume Monday now. There’s nothing to say the address has been updated, so I’m unsure about that.

    So I decided to set the server up as is… more issues.

    The RAID card, with the full-height bracket fitted, just will not go into the available slot – the bracket stops it. So after taking the iLO and riser card out and trying it outside the server (it’s a tricky, small area), the only solution was to fit the low-profile bracket instead and tuck the other bracket behind it – hey presto, it fits perfectly.

    So I got that all back onto the board and started to push the motherboard tray back into the case – immediate issue: the SAS cable was pushing the front of the RAID card down quite badly. So I pulled the tray out, tried to re-arrange the cable, and had to cut a bit of the label wrapped around the cable to give it some extra wiggle room. I managed to get it back in without the cable pressing down on the card too much, so it all looks OK now.

    So I put it all together and put in my two SSDs (those trays will be a pain to get out as there isn’t much to hold on to, but we’ll see how they go). Anyway, the HP box started complaining of overheating in bay 2 – it had only been on maybe 10 minutes by then, which is a bit odd. I went into the RAID controller: the 860 EVO it sees as having 100% life, but next to the 850 EVO nothing is showing, not even 0%. It then complains about being in mixed mode, but from what I’ve read this is the default, you can’t change it, and it warns the drives won’t be visible to the OS. At that point I decided there was no data on them so nothing to lose, went for it, and it has let me install ESXi with no issues.

    The machine is still complaining of overheating in bay 2, so my current thinking is that maybe the SSD is no good or there is an issue with the actual bay, but it’s all a bit cramped in there so it’s hard to tell.

    I don’t have another SSD to replace the 850 EVO with at the moment, and buying one seems pretty hard, so we’ll see. Nothing is lost at this point as there are no VMs on it.

    I might try taking the cover off tomorrow, powering it on and just seeing if any air is getting through, but it’s a bit awkward as effectively the drives slide in at the front directly onto the SATA backplane at the back.

    If the bay seems OK then I’ll have to get another drive; if that doesn’t work then I guess it’s a call to HP, maybe? I have marked the two IML log entries as repaired in case it is just flagging up earlier faults :)

    Kev

    #16969
    Grisu
    Participant
    • Posts 553
    • Addict

    Why don’t you swap the two SSDs to see whether the failure follows the drive or stays on bay 2?

    #16970
    UK Sentinel
    Moderator
    • Posts 4233
    • Skipper

    Machine still complaining of overheating in bay 2

    A server engineer’s life is never easy. Also look at page 64 of the attached manual and make sure the fan is connected correctly and able to rotate, as I had loads of issues with fans in my younger days.

    Maybe it is also worth running through the HP Active Health System Viewer (or similar) to see if the fan is working as it should be.

    Does the HP MicroServer Gen 10 Plus have an air baffle or blank?

    Hopefully just a faulty drive

    HP-Microserver-Gen-10-Plus-user-guide.pdf

     

    In a completely sane world, madness is the only freedom (J.G.Ballard).

    #16972
    kev2021
    Participant
    • Posts 262
    • Regular

    Nothing today, and the delivery status just says scheduled for a second delivery attempt, with nothing to say when that will be or whether the address has been updated, so no idea – I can only assume it will be Monday. If they fail again, they will hold it and I’ll need to go and collect it.

    So I’ve still got the issue with bay 2 OVERHEATING, but what’s weird is that it occurs as soon as I turn the box on, when the SSDs are cold to the touch, so I’m not entirely sure what’s happening. I had the server turned off at the wall socket all night, so it wasn’t even in the standby mode that keeps iLO accessible – all off. I turned it on today (having “repaired” all the IML log entries) and got the same error.

    I’m now wondering if it’s a sensor, maybe? I’m not sure how it tells whether a bay is overheating.

    I’ve taken the 850 EVO out, temporarily connected it to my Windows machine as a second drive, and run Samsung Magician, which tells you the state of the drive and whether any firmware updates are due – all fine and no firmware updates reported, so that’s all OK.
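
    For reference, the same health check can be scripted with smartmontools rather than Samsung Magician – a minimal sketch only, with the device path as a placeholder:

    ```python
    # Hedged alternative to Samsung Magician: read SMART health and temperature with
    # smartmontools (smartctl 7+ is needed for the --json output).
    # The device path is a placeholder -- adjust for your system (e.g. /dev/sdb).
    import json
    import subprocess

    def smart_report(device: str) -> dict:
        """Return smartctl's JSON report for the given device."""
        out = subprocess.run(
            ["smartctl", "--json", "-a", device],
            capture_output=True, text=True, check=False,  # non-zero exit bits only flag warnings
        )
        return json.loads(out.stdout)

    if __name__ == "__main__":
        report = smart_report("/dev/sdb")  # hypothetical device path
        print("Model:  ", report.get("model_name"))
        print("Healthy:", report.get("smart_status", {}).get("passed"))
        print("Temp C: ", report.get("temperature", {}).get("current"))
    ```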

    I’m now left wondering: is it complaining because bay 1 has the 860 EVO showing 100% life while bay 2 has an 850 EVO, or is it a sensor/other issue with the server?

    I’m in the process of trying to check/update the server firmware for the BIOS, RAID card etc. – a bit of a long-winded process to get the files (creating accounts etc.) – and I’m currently trying to make a USB key so I can update accordingly.

    Does anyone have any idea how I can move my 2 x 2TB WD Gold drives in RAID 1 from the MicroServer Gen 8 to the MicroServer Gen 10 Plus? I can’t find anyone who has done this. The Gen 8 uses, I think, the built-in B120i RAID controller, and in my Gen 10 Plus I have the E208i-p RAID card. Everything I read implies you can move between like-for-like RAID cards, but these are totally different.

    Also, I’ve just discovered an issue with my plan to move to Windows Server 2022. It seems I need the domain functional level to be a minimum of 2008, which is fine as mine is 2012, BUT it also seems I need to move SYSVOL replication to DFSR, and I’m still using FRS, which has apparently been dropped now.
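
    For reference, the SYSVOL FRS-to-DFSR migration is driven by the built-in dfsrmig tool on a domain controller, stepping the global state from 0 (start) through 1 (prepared), 2 (redirected) to 3 (eliminated). A rough sketch of scripting those steps, assuming it runs elevated on a DC; the output check and wait times are illustrative only:

    ```python
    # Sketch of the SYSVOL FRS -> DFSR migration using the built-in dfsrmig tool.
    # Run elevated on a domain controller; states 1/2/3 = prepared/redirected/eliminated.
    # The output matching and the polling interval are illustrative, not authoritative.
    import subprocess
    import time

    def dfsrmig(*args: str) -> str:
        """Run dfsrmig with the given arguments and return its output."""
        result = subprocess.run(["dfsrmig", *args],
                                capture_output=True, text=True, check=True)
        return result.stdout

    def wait_until_consistent(poll_seconds: int = 300) -> None:
        """Poll until every DC reports it has reached the current global state."""
        while "migrated successfully" not in dfsrmig("/getmigrationstate").lower():
            time.sleep(poll_seconds)

    for state in ("1", "2", "3"):  # prepared -> redirected -> eliminated
        print(dfsrmig("/setglobalstate", state))
        wait_until_consistent()
        print(f"All DCs consistent at global state {state}")
    ```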

    So there’s more to investigate and work out before I can start migrations, once I figure out how to get the drives moved to the Gen 10 Plus. I’ve got a horrible feeling I’m going to have to move the VMs off again to temporary locations, put the drives in the new box, set up the RAID, and then move the VMs back – a long process :( I was hoping that as they are both still HPE cards, the Smart Array would detect the existing RAID, but it’s a bit of a gamble if I plug the drives in and it doesn’t like them or doesn’t see the array.

    Kev

     

    #16973
    UK Sentinel
    Moderator
    • Posts 4233
    • Skipper

    Ouch!

    If it is still possible, I would revert everything back to the MicroServer Gen 8 (if anything has changed since before you started) and then address the heat issue separately.

    Once everything is back as it was and working (including the 2 x 2TB WD Gold drives in RAID 1 back in the MicroServer Gen 8), then address the sensor/bay issue with the server: try swapping the SSD from bay 2 to bay 1 and vice versa and see if the problem follows the drive (I think this is what @Grisu suggested).

    I would be a tad cautious until the drive bay issue has been resolved, as the HP MicroServer Gen 10 Plus or the RAID card (not sure which controller the RAID 1 setup is using) may have to be returned as faulty.

    One problem at a time

    ——-

    Regarding moving RAID 1 drives from one RAID controller to another, I do not see any issues, as both cards are HP and the configuration is RAID 1, so just move the 2 x 2TB WD Gold drives from the MicroServer Gen 8 to the HP MicroServer Gen 10 Plus and bring the RAID set up there (drive roaming / drive migration).

    Note: you have probably read this already, but moving drives can be done by drive roaming (drive migration). On HP kit, though, the catch can be the UEFI boot entries, as they don’t follow the disks when moved. You can use F9 to check the health of the array under System Configuration. You will then need to select the logical drive to boot from under the One-Time Boot Menu, or under System Configuration – Boot Options – UEFI Boot Settings – Add Boot Option – then pick the correct device, drill down to the correct .efi file to boot from, and name the entry.

    Have a look here : https://community.hpe.com/t5/ProLiant-Servers-ML-DL-SL/Move-RAID-Disks-from-one-server-to-another/td-p/7033936#.Ydm5bmDP1PY

     

    In a completely sane world, madness is the only freedom (J.G.Ballard).

    #16974
    kev2021
    Participant
    • Posts 262
    • Regular

    Hi all,

    So as it stands:-

    Gen 8 server has:-

    2 x 2TB WD Gold in RAID 1 running ESXi 6.0

    2 x 2TB WD Gold in RAID 1 containing the VMs

    Gen 10 plus server has:

    2 x 250GB SSDs in RAID 1 running ESXi 7.0 U2

     

    I’ve got the latest service pack, mounted the ISO via iLO, and it ran and updated what was needed (all automatic). From what I could tell, it was only the RAID card that had an update.

    I’m not sure how the HP service packs work as I’ve never needed to bother before, but it seems the latest is 2021.10 and there hasn’t been one since, so I presume the last one was October 2021. There is a newer BIOS version, 2.54 (I’m running 2.52), but there doesn’t seem to be much in that update, so I’ll wait until HP release the next service pack, in case of any issue with the update.

    As for the drive issue, I think I *may* have resolved it. In iLO there is a Fans section, and the minimum was set to 0%, which meant the fan was mostly running at 8% with Optimal Cooling. I set it to Increased Cooling, which seemed to push the fan speed up to 19%, left it at that for about 30 minutes, rebooted, and got no error on boot-up regarding bay 2. So I’ve since put it back to Optimal Cooling, set the minimum fan speed to 12%, left that for a bit, rebooted again, and got no error.

    I’ve now set the minimum to 10%, so we’ll see how that goes. I’m just really looking for the lowest-noise option :)
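
    Side note: iLO 5 also exposes the fan and temperature sensors over its standard Redfish API, so the effect of a fan-speed change can be watched from a script rather than the web UI. A minimal sketch, assuming the usual /redfish/v1/Chassis/1/Thermal resource, with a placeholder iLO address and credentials:

    ```python
    # Minimal sketch: read fan speeds and temperature sensors from iLO 5 over Redfish.
    # /redfish/v1/Chassis/1/Thermal is the standard Redfish thermal resource;
    # the iLO address and credentials below are placeholders.
    import requests
    import urllib3

    urllib3.disable_warnings()            # iLO usually has a self-signed certificate

    ILO = "https://ilo-gen10plus.local"   # placeholder iLO address
    AUTH = ("Administrator", "password")  # placeholder credentials

    thermal = requests.get(f"{ILO}/redfish/v1/Chassis/1/Thermal",
                           auth=AUTH, verify=False, timeout=10).json()

    for fan in thermal.get("Fans", []):
        print(f"{fan.get('Name')}: {fan.get('Reading')} {fan.get('ReadingUnits', '')}")

    for sensor in thermal.get("Temperatures", []):
        print(f"{sensor.get('Name')}: {sensor.get('ReadingCelsius')} C "
              f"({sensor.get('Status', {}).get('Health')})")
    ```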

    My concern is that at the moment drive bays 1 & 2 (the two on the left-hand side of the case) hold SSDs, so there is room for air to flow above them. I’m slightly apprehensive about putting in my two 3.5″ WD Gold drives, as they will fill each drive slot, so we’ll see what happens.

    I did think I had other issues but it seems these are normal :)

    The BMC temperature is 75°C; iLO says it’s all OK, but it seems a bit high. Then again, I looked online and someone else is at 80°C. They tried everything and even went to HP, who shipped him a new motherboard; he fitted it and got the same result. Effectively HP gave up when they realised he was a home user and not in a proper datacenter environment, and they didn’t have time to get a server in and test it themselves… doesn’t bode well.

    But alas, it seems that is normal for this server, so I’m OK with that – there isn’t much room to start adding fans anyway, to be honest.

    So for now I’ve got the OS up and running. I’ll start to move my VMs off so I have a copy, and will then chance removing the two drives and putting them into the Gen 10 Plus. I’ve never moved a RAID before, so I’ll check out that link you posted :)

    Oh, I’ve also bought an iLO Advanced license, so I’m just waiting on that too :)

    Kev

    #16975
    kev2021
    Participant
    • Posts 262
    • Regular

    Machine still complaining of overheating in bay 2

    A server engineer’s life is never easy. Also look at page 64 of the attached manual and make sure the fan is connected correctly and able to rotate, as I had loads of issues with fans in my younger days. Maybe it is also worth running through the HP Active Health System Viewer (or similar) to see if the fan is working as it should be. Does the HP MicroServer Gen 10 Plus have an air baffle or blank? Hopefully just a faulty drive. HP-Microserver-Gen-10-Plus-user-guide.pdf

    Thanks for the suggestion, but I found that when I put my hand in the bay just above the SSD, I could feel some air on bay 1 but hardly any on bay 2. I also checked the firmware and status on the 850 and all was OK, so that made me confident it wasn’t a drive issue but an airflow issue. As mentioned above, increasing the minimum fan speed looks to have solved it so far; I’ll have to see how long for – it’s either a permanent fix or a temporary one lol :)

    Kev

    #16977
    UK Sentinel
    Moderator
    • Posts 4233
    • Skipper

    I checked the firmware and status on the 850 and all was OK, so that made me confident it wasn’t a drive issue but an airflow issue. As mentioned above, increasing the minimum fan speed looks to have solved it so far; I’ll have to see how long for – it’s either a permanent fix or a temporary one lol :)

    So low airflow in bay 2 triggered a temperature sensor warning. In a positive way, I suppose that is good, and hopefully the warning does not reappear.

    In a completely sane world, madness is the only freedom (J.G.Ballard).

    #16980
    kev2021
    Participant
    • Posts 262
    • Regular

    Yes, it seems so. We’ll have to see how it goes once I’ve got my other two drives in and it has VMs running :)

    Also, I forgot to say earlier: the default setup in the Gen 10 Plus has all four bays connected through one cable to a mini-SAS port on the motherboard. So all I’ve done is fit the E208i-p RAID card, unplug the mini-SAS cable from the motherboard, and connect it to port 1 on the E208i-p – hence no additional cable is required and the onboard S100i controller is effectively not being used.

    All drives are connected to the E208i-p RAID card.

    I’m just starting the process of copying the VMs off again onto my NAS, as it’s the only place I have approx. 1TB free to store them, although this time I won’t be deleting them off the NAS – I’ll leave them on there. Worst case, I might get an external drive, move them all onto that, and keep it offline somewhere, effectively a backup as of x date :) I’ve only used three VMs since I moved them to the WD Golds, and one of those is now redundant and no longer needed with ESXi 7.0 as far as I can tell, since I’m not using vCenter now – I’m just going to leave it as ESXi and skip vCenter :) Plus vCenter can no longer be run on the Windows platform (I had a Windows VM running vCenter before).

    Kev

    #16981
    kev2021
    Participant
    • Posts 262
    • Regular

    What was weird was that it complained on boot-up but would still continue to boot, and all the temperature readings in iLO are marked OK with green ticks, so as far as iLO was concerned there were no temperature issues at all. It only logged the warning on initial start-up and waited 20 seconds before continuing, which is a bit strange. I was expecting to see a warning in the iLO temperature section where it shows all the sensors.

    Kev

    #16982
    UK Sentinel
    Moderator
    • Posts 4233
    • Skipper

    All drives are connected to the E208i-p RAID card.

    Enjoy your late night ;-)

    In a completely sane world, madness is the only freedom (J.G.Ballard).

    #16988
    kev2021
    Participant
    • Posts 262
    • Regular

    Well, I managed to get all the VMs copied off.

    I took the drives out, put them into the Gen 10 Plus (in the same order), and went into the RAID config. On the first page it only shows the SSDs and no other physical drives, but on the right it says 2 logical volumes and 4 physical drives.

    Clicking on the controller and then Configure shows logical drive A as the 2 x SSDs and logical drive B as the 2 x 2TB drives in RAID 1.

    I saw no option to adopt or import or anything like that, so after checking around some more I simply rebooted the box and let it boot up. It booted into ESXi, and when I logged into the ESXi web GUI it even showed the 2TB datastore with the correct name, and all the VMs were there; they just needed to be registered in ESXi before I could boot them up.
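
    For anyone who would rather script that last re-registration step than click “Register VM” in the host client for each machine, here is a minimal sketch using pyVmomi; the host address, credentials and datastore paths are made up:

    ```python
    # Minimal sketch: register existing .vmx files on a standalone ESXi host with
    # pyVmomi (pip install pyvmomi). Host, credentials and VM paths are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    ctx = ssl._create_unverified_context()           # ESXi usually has a self-signed cert
    si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        datacenter = content.rootFolder.childEntity[0]   # standalone host: "ha-datacenter"
        compute = datacenter.hostFolder.childEntity[0]   # the host's compute resource
        vm_folder = datacenter.vmFolder

        # Datastore paths of the now-orphaned VMs (placeholder names).
        for vmx in ["[WD_Gold_R1] dc01/dc01.vmx", "[WD_Gold_R1] db01/db01.vmx"]:
            task = vm_folder.RegisterVM_Task(path=vmx, asTemplate=False,
                                             pool=compute.resourcePool,
                                             host=compute.host[0])
            print("registering", vmx, task.info.state)
    finally:
        Disconnect(si)
    ```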

    I’m still not entirely sure this is correct, as previously on HP Smart Arrays it showed the drives on the main screen, so I’m a little puzzled, but it looks to work, so fingers crossed.

    I’m now keeping a copy of the VMs as of today on the NAS, so only the VMs that have changed since then could suffer any data loss, but at least I’ve got the main VMs.

    On the downside, the bay 2 overheating issue is back. I’ve now increased the fan speed to 20% but it still complains on boot-up; after boot-up, iLO reports all temperatures as OK and within spec, so I’m a little confused here. The problem is I can’t push the fans much harder, as it would be too noisy where the box will live. So I’m waiting for the memory to arrive; I’ll fit that, and then it will be the moment of truth when I put the box into its actual location and see what it’s like there.

    At the moment it’s right next to me on my desk, so it could just seem noisier than the Gen 8 when in fact it isn’t.

    The VMs are booted back up and running OK. I think I’ve also migrated from FRS to DFSR, so I’m now setting up the new 2022 OSes to replace my DC and DB server with.

    Kev

    #16990
    UK Sentinel
    Moderator
    • Posts 4233
    • Skipper

    I’m now keeping a copy of the VMs as of today on the NAS, so only the VMs that have changed since then could suffer any data loss, but at least I’ve got the main VMs.

    Very sensible. Have you looked at your UEFI boot entries? Just a thought.

    In a completely sane world, madness is the only freedom (J.G.Ballard).
