
I need some help from the community (this is my first post). I have a custom gaming PC from 2017 that I want to convert into a NAS/home server/home lab and need some advice before I purchase the extras I need. I'm planning on building a new custom gaming PC and figured I didn't want to get rid of my OG PC, and would rather convert it into something else. This is the first time I'm giving this a shot, and I've been reading forums and watching YouTube videos on the topic for days now.

 

**** I do want the system to be energy efficient since it will be running 24/7 ****

**** I need help with picking out a case, figuring out a good GPU, and the ideal software for all my ideas, and I'm open to any suggestions others might have. ****

 

I didn't use this PC daily; it was more of a couple-times-a-week or every-other-week kind of thing since its initial build. I would game on it on occasion and use it for heavy applications, but with my old setup it was easier to use a laptop, and my gaming group mainly stayed on PS4 & PS5.

OG PC (2017)

CPU: Ryzen 7 1700

RAM: 16GB DDR4

GPU: RTX 2060 6GB

MOBO: MSI B350M Mortar (Micro ATX)

Storage: 256GB M.2 & 4TB HDD

Power: 750 watt EVGA (I think 🤔, I can't easily see the unit)

 

I have a spare GTX 1070 since that was my original GPU. It got a little bent while traveling one time, so now it crashes every time I game on it, but it works fine for everything else. I used it for mining for a few months and stopped. Now I just need to figure out what to do with it... open to suggestions.

 

I want to use this home lab to save all my photos, digital files, games, and all my digital media. This digital library already sits around 12-15TB of data and is growing. I want to run a version of Pi-hole on it (currently on a Raspberry Pi 4), host Plex or Jellyfin, run my automated programs, write and run code on it, possibly host a Minecraft server, and stream all the media to probably 2-4 4K TVs in the house (I don't care if the streams are 1080p, though). Some of the TVs are smart TVs, and on the others I'm using an Amazon Fire Stick to watch stuff. I will probably be traveling in the future, and I want to be able to access this home lab while on the go to watch TV shows, push updates to my code, test out ideas, or dump photos and videos from trips.
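For the Pi-hole piece, a common approach once the server is up is to run it as a container instead of keeping the separate Raspberry Pi. A minimal sketch, assuming Docker is installed; the timezone, password, and host ports here are placeholders, not required settings:

```shell
# Run Pi-hole in Docker. Host port 53 serves DNS for the LAN;
# the web admin UI is mapped to host port 8080.
docker run -d --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80/tcp \
  -e TZ="America/New_York" \
  -e WEBPASSWORD="changeme" \
  -v "$PWD/etc-pihole:/etc/pihole" \
  --restart unless-stopped \
  pihole/pihole:latest
```

You would then point the router's DHCP DNS setting at the server's IP, just as with the Pi 4 today.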

 

My thoughts below; community, please chime in here.

~ So I have read that my 1700 should be more than enough to handle my needs for the server so I believe I won't change that out unless someone has a different suggestion. 

~ The RAM I think I need to change out, since the PC will randomly get that memory-error blue screen when shutting down at times. I'm thinking of replacing it with a new set of 32GB DDR4 memory.

~ I'm thinking of replacing the 2060 with a 1650, since I know they have options that don't require an extra power cable, and the 2060 might be overkill for the project.
~ I think the MOBO & Storage & Power will be fine.

 

What should I do about case options? I want 8 bays or more. I have also seen people discuss using a SAS enclosure for all their HDDs and just connecting them to the PC. I think hot-swappable drives would be ideal, since that would simplify the wiring and make replacing a dead HDD easier.
~ I see the JONSBO N5 is a good option (12 bays), though I worry about airflow and keeping the HDDs cool.

~ This option: DARKROCK Classico Storage Master

~ This option: Fractal Design Node 804

~ I do want to use something like TrueNAS or one of those options. I also think it would be wise to use a RAID configuration.

~ I want to get either 20TB or 24TB NAS HDDs, start out with like 3 of them, and over the months keep adding more until all the bays are full.


Here are my thoughts:
 

1) If you have to spend a single dollar to get the 1650, then just keep the 2060. It is a better card in every way and it is already in your possession. These cards don’t use a lot of power at idle, so there is no tangible reason to switch to a 1650; I’d bet the 2060 idles at a lower power draw anyway. Plus, the 2060 should have more horses under its hood for transcoding video when needed.

 

2) Try out your 16 GB with the server. It might not have an issue at all and it could be a Windows thing. But if you do take my suggestion for an OS (#6), then upgrading to 32 GB would not hurt.

 

3) You aren’t clear on how many hard drive bays you want. I recommend the Fractal Define R5. It can hold 8 comfortably and a lot more with some creativity. I can fit 16 in mine and I think I still have room for about 4 to 6 more.

 

4) Many won’t agree with me, but avoid using RAID. It has more shortcomings than benefits for simple home use. You cannot easily expand your storage: if you want to add more drives, you would have to create an entirely new pool, which means formatting your existing drives. You would need to move the data off of those drives to a backup, reconfigure, format, and then transfer all the data back. At 2x to 3x 24 TB drives, you are talking about two weeks’ worth of transfer time. Plus, having somewhere to park 48 to 72 TB in the meantime is not easy. It’s just not worth it. I recommend you consider UNRAID instead with a cache drive, even if it’s just an SSD. It acts similarly to RAID in that it gives you a single- or dual-parity option, so you can lose up to 1 or 2 drives before you lose data. You don’t get any improvement in read or write speed, and you don’t really get the benefits of a ZFS file system, but UNRAID does let you string together different sizes of drives and add/change/remove them relatively freely compared to RAID options. It’s not free, but it is like $59 for up to six drives and that’s really all you need to get started.
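As a rough sanity check on that transfer-time claim, here is a back-of-the-envelope estimate; the ~200 MB/s sustained throughput is an assumption, and an optimistic one for a single large HDD:

```shell
# One-way copy time for a full 72 TB array at an assumed ~200 MB/s sustained.
tb=72
speed_mb_per_s=200
total_mb=$(( tb * 1000000 ))              # 1 TB = 1,000,000 MB (decimal units)
seconds=$(( total_mb / speed_mb_per_s ))
days=$(( seconds / 86400 ))
echo "~${days} days one way"
```

Doubling that for copy-off plus copy-back, then allowing for slower real-world averages and small-file overhead, two weeks is plausible.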

 

5) For that many drives, consider purchasing an HBA from eBay. Look at the LSI 9300-16i which lets you connect up to 16 SAS or SATA drives. There are several options from California that are very reliable and affordable.

 

6) You didn’t mention an OS, so I will say consider Ubuntu Server, which can handle everything you’ve mentioned. Or if you really want to get fancy, look at Proxmox. It is a hypervisor, so you can partition your computer into smaller virtual components to create virtual machines (VMs) or run containers. There is a learning curve, but it is a very nice, free-to-use OS that works flawlessly. You can also run UNRAID (or TrueNAS if you really want RAID) in a separate VM with this OS, whereas you would normally need a separate PC to run UNRAID or TrueNAS. I guess they offer VMs and containers as well; it’s a bit of a cluster because everything can do everything now. But in general, Proxmox lets you install several different “guest” operating systems like Linux, Windows, UNRAID, etc. I’ve even installed macOS in a VM just for fun, and while it worked, it wasn’t stable. But definitely doable.
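To give a feel for the Proxmox workflow, a VM can be provisioned from its CLI. A minimal sketch; the VM ID, name, storage name, and sizes below are all placeholders:

```shell
# Create a VM (ID 100) with 8 GB RAM, 4 cores, and a bridged virtio NIC.
qm create 100 --name nas-vm --memory 8192 --cores 4 \
  --net0 virtio,bridge=vmbr0
# Attach a 32 GB virtual disk on the default "local-lvm" storage.
qm set 100 --scsi0 local-lvm:32
# Boot it.
qm start 100
```

The web UI does the same thing with clicks, which is how most people start out.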


28 minutes ago, johnt said:

avoid using RAID

Yup, I disagree.

28 minutes ago, johnt said:

You cannot easily expand your storage.

Expanding storage on a RAID system is actually much easier and cheaper than on ZFS. On ZFS you would indeed need to rescue all data from the array, destroy it, create a new expanded array, and copy stuff back. Not so on RAID5 and RAID6 systems: just add the new drive to the existing array, let it re-silver in the background, and you're done. The only requirement for efficiency here is to use equally sized drives. Example: my current RAID6 was 4x 16TB for an effective 32TB of storage with 2-drive redundancy. I added 2 more 16TB drives, so now I have 64TB of storage and still 2 redundant drives. Any more drives I add just expand the capacity, provided they're 16TB or larger (but then these larger drives are limited to 16TB, so not really cost effective).
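With Linux's mdadm, the expansion described here looks roughly like this; the array and device names are placeholders for your actual disks:

```shell
# Add two new disks to an existing md RAID6 array, then grow it from
# 4 to 6 members. The reshape runs in the background.
mdadm --add /dev/md0 /dev/sdg /dev/sdh
mdadm --grow /dev/md0 --raid-devices=6
# Watch reshape progress:
cat /proc/mdstat
# Once the reshape finishes, grow the filesystem to use the new space,
# e.g. resize2fs for ext4 (xfs_growfs for XFS):
resize2fs /dev/md0
```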

 

I'm not against using UnRAID, or ZFS, both have their use cases. But the portrayal of simple RAID above is contentious so I had to rectify that 🫡

"You don't need eyes to see, you need vision"

 

(Faithless, 'Reverence' from the 1996 Reverence album)


2 minutes ago, Dutch_Master said:

Yup, I disagree.

Expanding storage on a RAID system is actually much easier and cheaper than on ZFS. On ZFS you would indeed need to rescue all data from the array, destroy it, create a new expanded array, and copy stuff back. Not so on RAID5 and RAID6 systems: just add the new drive to the existing array, let it re-silver in the background, and you're done. The only requirement for efficiency here is to use equally sized drives. Example: my current RAID6 was 4x 16TB for an effective 32TB of storage with 2-drive redundancy. I added 2 more 16TB drives, so now I have 64TB of storage and still 2 redundant drives. Any more drives I add just expand the capacity, provided they're 16TB or larger (but then these larger drives are limited to 16TB, so not really cost effective).

 

I'm not against using UnRAID, or ZFS, both have their use cases. But the portrayal of simple RAID above is contentious so I had to rectify that 🫡

What platform or software are you using for RAID?


Devuan (=> Debian w/o systemd) using mdadm via Webmin. Hardware: AMD EPYC 7551P on an SM H11SSL-i, 128GB ECC RAM, and an LSI HBA (9200- or 9300-series card in IT mode). I happen to have 2 near-identical systems built for a high-availability setup, but that's not implemented yet. Due to ever-increasing energy prices here, all systems have been shut down for the time being. Chances are energy prices will never come down again, I'm afraid 🤬

 

(for illustration: I used to pay 140€ per month pre-corona and got any surplus returned every year, I've just switched supplier and the cheapest I could find was 200€. Per month! Previous supplier was 170 but also wanted 200+ when my contract ran out last week 💸 )



I probably should have mentioned this in my OG post: I might use the 2060 for a new gaming build, depending on GPU prices when I do my shopping. If prices are good, then I don't mind keeping the 2060 in the NAS build; otherwise I will use the 2060 in my new gaming build for a few months.

 

Also, I'm in the USA (Florida) just in case that plays a factor.

 

johnt - Thank you for all the insight, I appreciate the thoughts. Just want to add some details. I'll take a look at UNRAID to learn more about it. Thanks for the case recommendation. 

11 hours ago, johnt said:

You aren’t clear on how many hard drive bays you want.

~ I wrote in the OG post that I want at least 8 bays, since I plan on buying HDDs as I find deals on them and filling up the server.

 

11 hours ago, johnt said:

You didn’t mention an OS

~ I wrote that I was thinking of TrueNAS but wasn't sure which OS to really use. I appreciate you calling out Ubuntu Server and Proxmox as options; I will research them.

 

11 hours ago, johnt said:

I can fit 16 in mine

~ How did you manage that?? Do you have a pic to show the mad genius skills?

 

Dutch_Master - Thank you for the update on the RAID settings. I do need redundancy in this server, since I'll be saving photos that I can't lose and wouldn't be able to replace if an HDD dies.

10 hours ago, Dutch_Master said:

16TB, so not really cost effective

~ Confused, what part is not cost effective? My thought is that I want a large storage area for all my media, and I'm willing to buy large-capacity HDDs to achieve it. IDK if you started your build when 16TB was the largest HDD available. I was thinking of grabbing 24TB drives so I have the flexibility to add any drive of that size or below in the future.

~ Might be a dumb question: does RAID5 or RAID6 have a drive capacity limit, or is it pretty flexible with whatever you want to use?


29 minutes ago, Terrofmen said:

what part

In a RAID, the smallest drive determines how much capacity the array will have. In a RAID1 (mirror) with a 16TB and a 24TB drive, the 24TB drive will only use 16TB for the RAID. The remaining 8TB is not used and, IIRC, cannot be used otherwise either. This is why it's not cost effective to buy different sizes of drives. UnRAID has a different system that allows mixed-size drives to be used. I'm not sold on the underlying design ideas behind it, but it works for others, so YMMV! (Not gonna dive into that; look up the differences online.)
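The arithmetic is simple enough to sketch out; the drive sizes below match the examples used in this thread:

```shell
# RAID1 (mirror): usable space is the smaller member; the rest is stranded.
small_tb=16; large_tb=24
mirror_usable=$small_tb
stranded=$(( large_tb - small_tb ))
echo "RAID1 16TB+24TB -> ${mirror_usable} TB usable, ${stranded} TB stranded"

# RAID6: usable space is (n - 2) x smallest drive
# (two drives' worth of capacity go to parity).
n=6; smallest_tb=16
raid6_usable=$(( (n - 2) * smallest_tb ))
echo "RAID6 6x16TB -> ${raid6_usable} TB usable"
```

That matches the 64TB figure quoted earlier for a 6x 16TB RAID6.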

 

My original RAID (from 2008) had 500GB drives, and I upgraded through the years until I ran out of money with 4x 4TB drives installed. More recently I got some room in my finances and found that 16TB refurbished enterprise drives are fairly affordable at some 200-250 USD each. I stored my data on a single 8TB drive, then rebuilt the array from scratch (it's quicker that way) with 3 drives (RAID5), which I could easily upgrade to RAID6 by adding a new drive and instructing the system to convert the RAID from 5 to 6. Re-silvering takes ages admittedly, but it can be done "on the fly" with a working array.
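For reference, that RAID5-to-RAID6 conversion is a standard mdadm reshape; the device names here are placeholders:

```shell
# Add a fourth disk, then migrate the 3-disk RAID5 to a 4-disk RAID6.
# The backup file protects the critical section of the reshape.
mdadm --add /dev/md0 /dev/sdf
mdadm --grow /dev/md0 --level=6 --raid-devices=4 \
      --backup-file=/root/md0-reshape.bak
```

The array stays online and usable while the reshape runs, just slower than usual.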

 

HTH!



14 hours ago, Terrofmen said:

~ So I have read that my 1700 should be more than enough to handle my needs for the server so I believe I won't change that out unless someone has a different suggestion. 

~ The RAM I think I need to change out, since the PC will randomly get that memory-error blue screen when shutting down at times. I'm thinking of replacing it with a new set of 32GB DDR4 memory.

~ I'm thinking of replacing the 2060 with a 1650, since I know they have options that don't require an extra power cable, and the 2060 might be overkill for the project.
~ I think the MOBO & Storage & Power will be fine.

  1. I once heard that 1st-gen Ryzen struggled with Linux, especially on older kernels. Another possible drawback is its single-thread performance, which affects Minecraft a lot. I would suggest replacing it with a faster Ryzen 5000 processor, along with a BIOS update.
  2. The memory controller on 1st-gen Ryzen has been unstable with anything faster than 2133 MT/s, IIRC. Replacing the processor would hopefully fix the RAM issue.
  3. The Intel Arc A380/A310 may be worth considering, as they are priced similarly to the 1650 while having much more advanced video units, up to and including AV1 transcoding.
  4. The power supply may become much less efficient below ~10% load (<75W) at idle, although this would no longer be an issue with 8 spinning drives (about 100W at idle).
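A rough idle-power estimate backs up that 100W figure; the per-component numbers here are assumptions (typical idle draws, not measurements):

```shell
# Assumed idle draws: ~7 W per spinning 3.5" HDD, ~40 W for
# board + CPU + GPU + fans at idle.
drives=8
w_per_drive=7
base_w=40
idle_w=$(( drives * w_per_drive + base_w ))
echo "~${idle_w} W estimated idle with ${drives} spinning drives"
```

That puts a fully populated 8-bay build comfortably above the <75W dead zone of a 750W PSU's efficiency curve.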
42 minutes ago, Terrofmen said:

~ Confused, what part is not cost effective?

They might be referring to price per terabyte, and yes, 16 TB drives are still competitive in that regard. There are also re-certified ones readily available at somewhat lower price tags, as Linus showcased in this video.

 

https://www.youtube.com/watch?v=PcnWneULGAQ


1 hour ago, Terrofmen said:

~ I wrote in the OG post that I want at least 8 bays, since I plan on buying HDDs as I find deals on them and filling up the server.

 

Sorry, I thought that's how many bays that particular case had. Something larger will serve you better in the long run by helping control temperatures. I wouldn't do a Node 804 or any tiny Jonsbo case if you want 8+ drives. The HBA puts out a lot more heat than you'd expect, plus a GPU, and it's going to be toasty.

 

1 hour ago, Terrofmen said:

~ I wrote that I was thinking of TrueNAS but wasn't sure which OS to really use. I appreciate you calling out Ubuntu Server and Proxmox as options; I will research them.

You're right, I missed it... I guess I don't think of TrueNAS as an OS. I'm anti-RAID 🙂

 

1 hour ago, Terrofmen said:

~ How did you manage that?? Do you have a pic to show the mad genius skills?

I don't have a picture, but it's common. I bought Phanteks stackable drive bays. They used to be cheaper, just like everything else in this world. But there is enough room between a normal-sized PSU and the in-case drive cage to add more drives. If you go for a smaller PSU or an SFF PSU at any point, the R5 has a lot of spare room.


10 hours ago, Dutch_Master said:

Devuan (=> Debian w/o systemd) using mdadm via Webmin. Hardware: AMD EPYC 7551P on an SM H11SSL-i, 128GB ECC RAM, and an LSI HBA (9200- or 9300-series card in IT mode). I happen to have 2 near-identical systems built for a high-availability setup, but that's not implemented yet. Due to ever-increasing energy prices here, all systems have been shut down for the time being. Chances are energy prices will never come down again, I'm afraid 🤬

 

(for illustration: I used to pay 140€ per month pre-corona and got any surplus returned every year, I've just switched supplier and the cheapest I could find was 200€. Per month! Previous supplier was 170 but also wanted 200+ when my contract ran out last week 💸 )

Everything I am reading on TrueNAS says you can't easily add more drives though??


4 hours ago, Dutch_Master said:

RAID1 (mirror)

~ Gotcha, this makes a lot of sense; thank you for the explanation. I appreciate the insight and like what you suggested for RAID5/6 on the setup.

 

4 hours ago, Bersella AI said:

Ryzen 1st gen had been struggling with Linux

~ Thanks for this callout. I had no idea there were problems with this. I see some forums discussing different fixes. I am trying to keep the cost low, but I will keep the 5000 series in mind as a solution to consider.

~ I'll take a look at the Intel ARC cards and will consider changing out the RAM if I use a different CPU.

4 hours ago, Bersella AI said:

re-certified ones readily available

~ Nice! I'll take a look at those HDDs as well.

 

3 hours ago, johnt said:

Sorry

~ All good johnt, I appreciate the insight you have provided. I'm really liking the Fractal Define cases and will consider grabbing one of those.

~ I'll look into the stackable drive bays for future expansion needs down the road.

 


Yes, that's because of how ZFS works with vdevs. ZFS has been overhyped IMO by a lot of YouTubers using TrueNAS, but it doesn't scale well. So the makers introduced TrueNAS Scale, based on Linux instead of the BSD that Core uses. I understand work has been done to at least allow expansion of a vdev pool by adding new disks, but I haven't dived into the particulars, as I won't use ZFS in the first place. It's a non-native FS for Linux (it originated on Sun's Solaris and spread via BSD, IIRC), so I'd prefer BTRFS instead, if it weren't for the RAID5/6 issue that FS has. Again, work has been carried out to rectify the issue, but it's not fully ready for deployment yet.



  • 2 weeks later...
On 7/1/2025 at 5:47 AM, Terrofmen said:

Should I just buy that and replace it for the Ryzen 7 1700 so I don't have to worry about the Linux problems?

Would be a great deal, and it would also fix the RAM issue. Don't forget to upgrade the BIOS before replacing the processor.


On 6/18/2025 at 6:10 AM, Dutch_Master said:

Expanding storage on a RAID system is actually much easier and cheaper than on ZFS. On ZFS you would indeed need to rescue all data from the array, destroy it, create a new expanded array, and copy stuff back.

ZFS can be expanded much easier than that...

 

Adding more vdevs is very simple and doesn't require a full rebuild.

 

Adding drives to existing vdevs isn't possible, sure, but you can replace drives with larger ones very easily. It just depends exactly what you're trying to do.

 

I've used mdraid and ZFS extensively for many years, and my home setup now resides on ZFS. There are a few advantages that I find invaluable: the snapshotting and replication features are fantastic, and I have my whole ~30TB array backed up to another server off-site, making good use of those features. I don't use TrueNAS, just a barebones Ubuntu Server install with the ZFS stuff added on.

 

The current ZFS setup started out with 4x 6TB drives configured as a pair of mirror vdevs (similar to RAID10, if you like). Later I added a third mirror with 8TB drives, and have since replaced the four 6TB drives with a pair of 18TB and a pair of 12TB drives. All that swapping around was done live, without taking anything offline or otherwise rebuilding anything.
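The two ZFS operations described here look roughly like this; the pool and device names are placeholders:

```shell
# Expand capacity by adding a whole new mirror vdev to the pool.
zpool add tank mirror /dev/sdg /dev/sdh

# Grow an existing mirror by replacing its members with larger disks, one at
# a time; once every member is larger, the vdev can expand in place.
zpool set autoexpand=on tank
zpool replace tank /dev/sdc /dev/sdi

# Watch the resilver:
zpool status tank
```

Both happen live, with the pool mounted and serving data throughout.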


@Aragorn- Sorry, but you've just made clear you didn't understand what I was saying, contradicting yourself in just 3 sentences.

 

When I started with RAID, I had 3x 500GB drives (back in 2008-ish), which I quickly expanded from the original RAID5 to a RAID6 with an additional drive. After that, the array was swapped out gradually to 2TB drives, then 4TB drives. Now the array has 16TB drives, and when I added 2 more recently, these went fully toward capacity rather than redundancy, since the 2 redundancy drives were already part of the array. Each time, I could swap out a drive (hot-swap caddies), mark the removed drive as no longer part of the array, add the new drive, let it re-silver, and done; rinse and repeat for the next drive. I didn't need to create a new vdev or go through whatever hoops ZFS presents to expand an existing vdev with larger drives, as you claim it can; I just replaced the drive, did some easy admin tasks in Webmin, and let mdadm take care of itself.


