

 

I'm having one of those days today...

 

Tried re-imaging the SSD to get back to Windows again on the server.... and the USB boot isn't working, so I make another bootable USB, finally get it to boot, and then no mouse or keyboard will work, not even my main ones from my PC.

So now I thought, well, I'm taking longer here trying to get this shit to work than it'd take to install Windows again, install the backup program and then re-image.... about an hour to 90 minutes later and the fucking install is still stuck on "just a moment" - I could fucking shit right now I am so mad!!

 

Maybe I WILL stick with unraid, LOL... haven't wiped the drives yet, so could do... I am sooo torn right now.

  1. Windows7ge


    Just throw a PCIe SSD in there as a pool cache. Should give you all the speed you could want.

  2. paddy-stone


    Yeah, I don't have one at the moment... I do have an NVMe drive though, and a SATA SSD is already set as the cache for it, so I can't understand wtf it's so slow.

    Right now I have the SSD with Windows on it again, and I'm gonna restore... I have done some copies from my secondary server to the SSD and I'm getting the full network speed of approx. 113MB/s sustained. When copying to an external drive I get approx. 130MB/s sustained. Similarly, when I had a striped pool in Windows of the 4x 4TB disks I was getting full network speed, yet unraid was less than half that speed, even with the SSD as the cache drive.
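    For what it's worth, 113MB/s sustained is basically a saturated gigabit link rather than a disk problem. A quick Python sketch of the maths (the ~9% protocol overhead is an assumption, not a measured figure):

    ```python
    # Rough line-rate maths for a gigabit link (estimates, not measurements).
    link_bits_per_s = 1_000_000_000          # 1Gbit/s
    raw_mbps = link_bits_per_s / 8 / 1e6     # 125 MB/s theoretical ceiling
    overhead = 0.09                          # ~9% assumed for Ethernet/IP/TCP/SMB framing
    usable = raw_mbps * (1 - overhead)

    print(f"raw ceiling : {raw_mbps:.0f} MB/s")
    print(f"usable (est): {usable:.0f} MB/s")  # ~114 MB/s, right where the copies sit
    ```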

    I might give it a go in a week or so, buy some more drives and have a proper test with unraid again.

  3. Windows7ge


    Unfortunately I don't know how to configure unRAID, so I can't do much to help you troubleshoot. Based on how it's marketed you'd think it'd be fairly plug'n'play.

     

    Does unRAID require you to use an independent device for read/write caching?

  4. paddy-stone


    I don't know TBH, so I have just been using the SSD as cache only anyway, just in case... I am sooo pissed off at the moment that I am gonna give it a rest for tonight and just watch a movie or something... I'll make a plan tomorrow whether to go back to unraid and just take the disk performance deficit, or to throw freenas on it again, or whatever. I might even give Ubuntu or another Linux distro a shot again... it's not majorly important that I get this operational in the next day or so. I have switched to using my backup server for now for media consumption and so on... I just don't have Plex ATM. And I still have my data backups safe, which were backed up yesterday before doing the unraid installation anyway.

    It's just annoying that so many things have been a PITA today, whereas any other time I can re-deploy an image onto a machine fairly easily and quickly. So I'll have to look into that too, and see if it's a problem just on that one machine or not.

  5. Windows7ge


    If you decide to install Debian let me know. I'll help. I have that guide I wrote on Ubuntu Server 19.04. Pretty much all the steps should be usable across any other Debian-based distro, except maybe package names, but that can be fixed fairly easily.

     

    Any VMs you have will run a lot better on QEMU than Bhyve (FreeNAS), I'll tell you that. I can help you optimize the system to increase the VM performance further (huge pages, pinning CPU cores, virtio drivers, GPU pass-through, etc.).
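    Roughly what those knobs look like on a raw QEMU invocation, as a sketch for orientation only: the image path, core list and PCI address below are made-up placeholders, and in practice you'd likely set the same options through libvirt XML.

    ```python
    # Sketch of a QEMU command line using the tuning knobs mentioned above.
    # Every path, core list and PCI address here is a hypothetical placeholder.
    import shlex

    cmd = [
        "taskset", "-c", "2-5",            # pin the VM's threads to host cores 2-5
        "qemu-system-x86_64",
        "-enable-kvm",
        "-m", "8192",                      # 8GiB guest RAM
        "-mem-path", "/dev/hugepages",     # back guest RAM with huge pages
        "-smp", "4",
        "-drive", "file=/vms/guest.qcow2,if=virtio",  # paravirtual disk via virtio
        "-device", "vfio-pci,host=01:00.0",           # GPU pass-through via VFIO
    ]
    print(shlex.join(cmd))  # print for inspection rather than launching blindly
    ```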

     

    If you want a desktop you can install one on a CLI-only server distro. It's only a one-line command. You can even pick the desktop (Ubuntu | Cinnamon (Mint) | Kubuntu | Lubuntu). Don't like one? Uninstall it, swap to another.
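    (The "one line" is basically just the desktop's meta-package. A hedged sketch, wrapped in Python for consistency; package names are examples and vary by distro/release:)

    ```python
    # Sketch: installing a desktop environment on a CLI-only Debian/Ubuntu box.
    import subprocess

    # e.g. "ubuntu-desktop", "lubuntu-desktop", "cinnamon-desktop-environment"
    desktop = "kubuntu-desktop"
    subprocess.run(["sudo", "apt-get", "install", "-y", desktop], check=True)
    ```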

     

    That's what I love about playing with Linux, the sheer modularity. You can swap the system kernel itself to optimize your workflow if you want. Tell me how the heck you can pull that off in Windows :D

     

  6. paddy-stone


    Yep true. Thanks for the offer, I may take you up on that.

     

    So, weird problems day 2...

     

    Now for some strange reason I can't get an ubuntu/freenas/unraid USB to boot on that machine.. I tested the USB drives themselves, they work in other machines and boot just fine. I checked whether I needed CSM enabled or not. And I checked the USB ports themselves.. I did everything I could possibly think of. So for now I am stuck with Windows for the moment. I can't think for the life of me what this fucking problem is. I've tried like 5 USB drives, all the ports, different distros. I even dismantled my front USB ports and unplugged the cable.

    The machine was working perfectly fine before I installed unraid on there though, and apart from taking so fucking long yesterday, Windows installed fine on there too. I even changed the RAM I had in there.... even though it's probably unrelated.

    I am starting to lose my shit with this machine now, LOL.

  7. Windows7ge


    What are the core specs?

  8. paddy-stone


    I finally got the latest version of FreeNAS to install on there..

     

    AMD Athlon 200GE

    ASrock AB350M

    16GB DDR4 3000 (2666 because of the Athlon)

    LSI 9211-8i with breakout cables

    Crucial M500 240GB M.2 drive

    3x Seagate 4TB Ironwolf

    1x WD 4TB Red

    At the moment I have it set up as a Z1 with the 3x 4TB Ironwolf drives, with the 4TB Red being the parity drive. Have only set up the basics for now, and copied over some media for testing of sorts.

     

    This time I used Etcher to create the USB... so maybe something about Rufus has borked, but the drives worked fine up until yesterday. Will have to investigate this more, but for now will stick with freenas... installed on the SSD this time, even though the installer says a flash drive is still recommended... I seem to remember you (or someone else maybe) said that flash drives aren't recommended anymore after 11.2 or something?? Anyway, the version installed is the latest, 11.2-U5, and it's going well so far.

    I may change my setup a bit, to have my HP ProLiant MicroServer be the main server again, with 8GB RAM and 3x 3TB Red drives for now... and take my time setting this other server up, and have it be my backup server this time. The MicroServer uses a fair bit less energy anyway, but I will keep changing my mind most likely.

     

    Anyway, I am also going to set up my old Pi 3 as a media server too I think, then I can mess around with it without much hassle... plus I will be learning again.

    And also, I might keep my i7 setup as an ESXi tester/homelab again, and just get another few drives, maybe a couple of 6TB shucked drives that are £95 each ATM. (edit) They are £90 each actually, which is pretty good.

  9. Windows7ge


    Not a bad little NAS. Is that LSI 9211-8i acting as a RAID card or an HBA?

     

    I think I understand what you're saying about how the RAID is assembled. I think the explanation is a little bit odd because technically all four drives take part in the parity. So technically all four drives are your parity drive. You're not wrong though. The obsolete RAID4 worked exactly the way you stated.
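    To put numbers on it, a small sketch of the capacity maths (raw TB, ignoring TiB/TB conversion and ZFS overhead):

    ```python
    # Usable space in a RAIDZ1 vdev: parity is striped across all members,
    # but you still give up one drive's worth of capacity overall.
    drives_tb = [4, 4, 4, 4]             # 3x Ironwolf + 1x WD Red
    n = len(drives_tb)
    per_drive = min(drives_tb)           # vdev is limited by its smallest member
    usable_tb = per_drive * (n - 1)      # one drive's capacity spent on parity
    print(f"{n} x {per_drive}TB RAIDZ1 -> ~{usable_tb}TB usable (before ZFS overhead)")
    ```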

     

    Unless FreeNAS 11.2+ makes considerably more write operations than its older variants, I can't say how true or false that is. I can say that the NAND flash you find in SSDs is of a much higher quality than in flash drives, so if you're deploying this in a production environment you're better off with an SSD from a longevity standpoint. You can create a ZFS mirror between two thumb drives to add redundancy to the boot drive, but eh...

     

    I just ordered another 10TB Seagate Ironwolf for ~£200. For my own storage I'm more interested in getting the drives I order than leaving it up to the luck of shucking to give me something that's not great. I'm going to repurpose 4-5 of my 3TB WD Red drives to the DAS and set them up in Windows Storage Spaces. This is going to buy me time while I buy the SSDs.

  10. paddy-stone


    It's an HBA. From what I understand hardware RAID is pretty much dead for most purposes... from a personal/consumer standpoint at least, I guess.

    Yeah, if I was gonna be using the drives 24/7, I would just get some more Ironwolf or Red drives, even though it would be a LOT more expensive. Where are you getting that 10TB drive for £200? IIRC they are mostly around £300 now: https://uk.pcpartpicker.com/products/internal-hard-drive/#A=8000000000000,16000000000000&sort=price&page=1

    So I figured if I'm gonna be using them mostly for media, maybe for the Pi 3 or even just as an occasional spin-up drive, then £90 for 6TB is pretty good. It's probably a WD Blue drive or similar, but that's still £70 cheaper than buying it un-shucked: https://uk.pcpartpicker.com/products/internal-hard-drive/#A=6000000000000,16000000000000&sort=price&page=1

    That £90 is about 60% of the price of buying the bare drive alone.
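    The £/TB maths behind that, using the rough prices from these posts (all approximate street prices, they will drift):

    ```python
    # £/TB comparison with the rough prices quoted in this thread.
    options = {
        "6TB shucked external":  (90, 6),
        "6TB bare (same class)": (160, 6),   # ~£70 more, per the post above
        "4TB NAS drive":         (100, 4),   # ~£10 more than the shucked 6TB
    }
    for name, (gbp, tb) in options.items():
        print(f"{name:22s} £{gbp:3d} -> £{gbp / tb:5.2f}/TB")

    print(f"shucked vs bare: {90 / 160:.0%} of the price")  # ~56%, i.e. 'about 60%'
    ```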

  11. Windows7ge


    Eh, last I checked ESXi does not support software RAID, so hardware RAID still has a place. Hardware RAID on Windows Server is fine. Then there are other applications for it in the enterprise, so it's not dead. Just asking because last I checked the 9211 is a RAID card, and ZFS doesn't like that.

     

    I bought it off Amazon for 250 USD. I converted it to pounds. Checking Amazon.co.uk, they are much more expensive. No idea why. Supply & demand I guess.

     

    I would shuck drives if I could guarantee the type of drive inside. Now even WD is installing preventative measures to try and stop people from shucking. I do see that if I shucked a drive I could save about 40 USD, but for this always-on server storage application I don't want to chance getting consumer drives.

     

    Some day in the future though if I need cheap mass storage I will definitely consider doing this.

  12. paddy-stone


    Yeah, I meant for regular consumers it's dead really... not for enterprise. The card is flashed to IT mode on P20 firmware, which I am led to believe should suffice for ZFS... I have never had a problem with it like this before anyway. Will see how it turns out; so far so good though, with transfers from the backup server (now main) to this machine hitting around 80-100MB/s sustained with large files... which isn't bad IMO considering it is also writing parity on the destination drives.

    Yeah, prices in the UK for stuff are shit most of the time.. if I could get a 10TB NAS drive for that price I would for sure. I can actually buy 3x 6TB USB drives for less than the 10TB drive would cost me, so ATM that's a no-brainer considering I might be able to make use of the drives as-is for some projects anyway, then when/if I am done with them I can shuck and chuck them into my backup server maybe.

    Yeah, I try to check if the drives are shuckable before buying, just in case the vendor pulls a fast one to try and stop people shucking them.... I don't see what difference it makes to them personally, as TBH the warranty is void when they get shucked anyway, and they are making the money they ask for them, so what people do with them after they buy them is none of their business and they should fuck right off!!

    If it was only £25-30 I was saving then there'd be no question, I would buy the NAS drives every time... but these 6TB drives are like £70 cheaper than the cheapest 6TB NAS drive. They are £10 cheaper than a 4TB NAS drive even, so 50% more capacity and £10 cheaper.

     

     

     

  13. Windows7ge


    So about 85 USD. That's a nice chunk of change. If I still used mass local storage on my computer I'd do that.

     

    My SAS expander showed up yesterday. All I need to do now is find a way to cool the controller chip. After that my main server will have twelve 3.5" bays in addition to forty 2.5" bays.

     

    By using more, smaller drives you also get a speed bump, but you really only notice that if you're working with 10Gbit, which it sounds like you're not.
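    Rough numbers on that (the per-drive figure below is an assumption; real rates vary by drive and workload):

    ```python
    # Why stripe speed is invisible on gigabit: the link is the bottleneck.
    per_drive = 150                      # assumed sequential MB/s for a modern 3.5" HDD
    caps = {"1GbE": 113, "10GbE": 1130}  # practical link ceilings, approx.

    for n in (1, 2, 4, 8):
        stripe = n * per_drive
        seen = {link: min(stripe, cap) for link, cap in caps.items()}
        print(f"{n} drives striped: ~{stripe:4d} MB/s raw | "
              f"1GbE sees {seen['1GbE']} | 10GbE sees {seen['10GbE']}")
    ```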

  14. paddy-stone


    Yeah, I don't do local storage either, except for some small files that I also put on Dropbox/Google Cloud... nothing that important, just lists and project texts and such usually. That was actually one of the reasons I decided to go for a server in the first place, to offload having to have local storage, and to have a convenient place to centralise everything.... it certainly came in useful when changing devices, putting software on new PCs for myself and family/friends etc., which I used to do a LOT more often.

     

    Nice. Are you doing a build/update log for your SAS expander, or is it mostly software side other than the actual expander?

     

    Yeah, I keep thinking about 10Gbit, but other than when I am re-arranging/deploying a NAS I don't do that much transferring TBH, because of the whole NAS being the centralised storage for everything I use basically. I might bite when the price is a bit more affordable for a 10Gbit switch... there are one or two that are in the £200-ish range now; can't remember the name, saw it on the Craft Computing channel a little while back. It would be extremely handy to have when doing a major NAS re-config like at the moment with doing the new server again, but other than that it wouldn't get much usage. Most of my downloads are saved directly to the server in a temp directory, and I then just move the resultant files to where they need to be, which is very quick in itself, and I can do that from anywhere I can access my NAS.

    I will continue to think about it though, as usual. I both love and hate re-configuring my setups from time to time... thinking of ways to get things done better :)

     

  15. Windows7ge


    It does make setup/deployment of clients faster/easier. That's one reason I like having centralized network storage.

     

    If it works how I expect it to, there shouldn't be much, if any, software setup. I'm going to take my old five 3TB drives and set them up in a RAID10. They're in a 12-bay 2U server enclosure. I could make a topic about it; I was thinking just a status update, but a complete build log was something I was considering. Might go that route. I have to verify everything works first though. If it does, this can be done very cheaply, and you can cut a lot of corners to make it even cheaper.

     

    Look up the CRS305-1G-4S+IN; it should cost ~£100. You can buy Mellanox ConnectX-2 cards for ~£20, SFP+ transceivers for ~£13, & 15-meter fibre-optic cable for ~£14.
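    Tallying that list for, say, two machines on the switch (prices as quoted, and the quantities are my assumptions, e.g. one transceiver per end of each fibre run):

    ```python
    # Approximate cost of a two-machine 10Gbit SFP+ setup from the parts above.
    parts = {
        "CRS305-1G-4S+IN switch":   (100, 1),
        "Mellanox ConnectX-2 card": (20, 2),   # one per machine
        "SFP+ transceiver":         (13, 4),   # card end + switch end, per machine
        "15m fibre cable":          (14, 2),
    }
    total = 0
    for name, (gbp, qty) in parts.items():
        line = gbp * qty
        total += line
        print(f"{qty} x {name}: £{line}")
    print(f"total: ~£{total}")
    ```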

  16. paddy-stone


    Ahh, that's the baby brother of the one I was talking about; couldn't remember the manufacturer name... the one I meant has 8 ports and can take 10GbE Ethernet, SFP+ copper and fibre IIRC, so not bad. Thanks for the info, will come in handy if I do decide to go that route.

    Can't find the video of it ATM; can find the one about the one you suggested though.

     

  18. paddy-stone


    Ahh, here you go.... wasn't Craft Computing, LOL

     

     

  19. Windows7ge


    For my own setup I went with the UniFi US-16-XG. Fair bit more expensive, but more ports.

     

    You know you can also do P2P connections right? A switch & router aren't a requirement.

  20. paddy-stone


    Yeah, I know, thanks, but it's much more complicated for my setup to do P2P; as it is, the main server (the one I am re-doing) only just has enough PCIe slots, as I have an mATX case and board.

    I have been considering going full ATX again, but that comes with its own set of problems too... the biggest one being where to put it... unless I re-config what goes under my desktop area.

    I can't put it in another room either yet.

    For now I think I'll just carry on as-is; I start getting confused if I make too many plans at once. I will update my to-do list though with a plan to try and sort something out.

    I don't think there's space in the MicroServer either for the PCIe card for 10GbE, so that complicates things too, LOL

  21. paddy-stone


    Man, I wish I had room in the house (and the required networking) for a rack; it would make my life much less complicated for things like this.

  22. Windows7ge


    Quote

    but it's much more complicated for my setup to do P2P; as it is, the main server 

    I want your 10Gig network to look like a token-ring topology.

     

    You'll have to consolidate your AICs (add-in cards). Trust me, if you move large files it's a must-have.

     

    I asked my landlord to let me install a standing rack but he shot me down. My servers just sit in a pile, stacked on & below a desk/table.

     

    My own dual-socket servers are running out of space for things I'd like to do. I lost QEMU/KVM on my desktop, but I may be able to stream 60Hz/60fps over the network with a program called Parsec. It seems intended for gaming, but I should be able to use it for other things. What I need to do is free up slots in my VM server to make space for a GPU, then pass through the GPU to the VM. This would be done on Proxmox.

  23. paddy-stone


    Yeah, I have a dual-socket Xeon sitting beneath my desk; I would love to use it more, but it's just too loud for my bedroom, LOL... would be nice to use those 12/24 cores/threads for more than occasional testing... another downside though is that at the time I thought I would use 2.5" drives more, so that's why I got it with 2.5" instead of 3.5" bays. I WILL make use of it one day, when I have room in another area of the house that won't be affected by the noise, and when I can run some cables there too. Already have a 48-port switch ready to go too, so it would be nice to have a little rack in an out-of-the-way place.

  24. Windows7ge


    The biggest pain I think you're going to have to figure out is cooling. As great as it'd be to tuck the equipment away in a closet, it will get hot. The room I have mine in is a relatively large space, but it still hits up to 86°F (30°C) when they're under load for a time.

     

    Mind you, it's still summer where I am. Those temps should drop come winter, especially since I can use the servers to heat the home.

  25. paddy-stone


    I actually have no problem with cooling on the servers; the main one (Athlon 200GE) sits at around 25-30°C even tucked in the cubby media centre I made... ATM it's only using a Cryorig C7 too. I used to have my main PC components in that Fractal Node 804, albeit with a 240mm AIO, but it didn't get any hotter in there than it does in my Fractal Meshify C; that hits around 69°C even @ 3.9GHz all-core in the height of summer.

    I love the Corsair ML fans, they move a decent amount of air and are almost as quiet as Noctuas in my experience.. even at low RPM.

     

    I've almost finished transferring all my data over now, after letting it sit cold for a few days to cool my temper. I'm getting between 45-60MB/s with things like audiobooks, and for big files hitting around 80-110MB/s... the biggest hit I take is with my multitude of ebooks, seeing as most are under 1MB in size... and then their .jpg and metadata files too. I'm leaving those until last; they will just take an hour or so IIRC. Good news is that Plex was easy enough this time, after so many times re-doing it, LOL... I finally got the hang of permissions too, after struggling with them on some previous server builds... I'm back to LIKING FreeNAS and Plex again!
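    (The small-file slowdown is mostly per-file overhead rather than link speed. A toy model, where the per-file cost is a pure assumption that varies widely with protocol and disks:)

    ```python
    # Why piles of sub-1MB ebooks crawl: per-file overhead dominates tiny transfers.
    link_mbps = 113        # practical gigabit ceiling from earlier in the thread
    per_file_s = 0.05      # assumed open/close/metadata cost per file

    for size_mb in (0.5, 5.0, 500.0):
        t = size_mb / link_mbps + per_file_s
        print(f"{size_mb:6.1f}MB files -> effective ~{size_mb / t:6.1f} MB/s")
    ```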

     

    I've decided I will make another server though. I still have my i7 6700K build to use if I want for ESXi usage... and I may set up my 4670K build again too; that at least has 3 x16 PCIe slots to use... If I decide to do both of those I just need to get another PSU, as all 3 I have will be used by then.... oh, and some HDDs. Couldn't get those bloody 6TB ones when I finally made up my mind; they are now around £127, so increased by around £37.
