Synology DS918+ & HDDs (Windows / Defrag / Read & Write / Longevity)

I have a few questions about Western Digital 10 TB Red & Seagate IronWolf 10 TB HDDs.

1.) I currently use some of these drives in my Windows desktop PC. How much overhead (free storage space) should I leave to keep reads/writes running optimally and to give Windows room to defrag them? This seems to be a surprisingly controversial subject. I've seen people/articles suggest 15-20% for Windows Defrag; others claim that's an outdated estimate and instead suggest 5% or even lower. Does anyone know for sure?

2.) I'll eventually be moving these drives over to my Synology DS918+ NAS and using RAID 5 or SHR-1, and the same question applies: if I want to keep reads & writes running optimally, and keep the volume defragged via whatever software solution Synology uses, how much free space should I leave available?

The reason I'm asking is that I have enough data to 'top off' these drives and then some, but I don't want to do that at the expense of performance, HDD longevity, or the proper functioning of Windows/Synology defrag. (Rough numbers for what the common rules of thumb would mean on a setup like mine are sketched below.)
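
For a rough sense of the numbers involved, here is a minimal Python sketch (assuming a 4-bay DS918+ filled with 10 TB drives; the 5/15/20% figures are just the rules of thumb mentioned above, not gospel) of what each rule would mean in absolute terms:

```python
# Rough free-space math for a 4 x 10 TB RAID 5 / SHR-1 volume.
# The 5% / 15% / 20% figures are just the rules of thumb from this thread.

TB = 10**12  # drives are sold in decimal terabytes

drives = 4
drive_size = 10 * TB
usable = (drives - 1) * drive_size  # RAID 5 / SHR-1 loses one drive to parity

for rule in (0.05, 0.15, 0.20):
    reserve = usable * rule
    print(f"keep {rule:.0%} free -> leave {reserve / TB:.1f} TB "
          f"of {usable / TB:.0f} TB unused")
```

On that assumed layout, even the conservative 20% rule means leaving about 6 TB of a 30 TB volume unused, which is why the choice of threshold matters so much at these capacities.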


Windows defrag is a joke; it always was and still is, even if it has learned a few tricks.

 

For a good defrag, try to get a copy of O&O Defrag. First defrag manually for space, then for access. Afterwards, keep a background task running an access-level defrag every now and then, or just run it manually.

 

As for Windows and performance: as long as you defrag frequently, you can fill a drive to 99% or so without losing performance on the drive side. If you don't defrag frequently, write speeds suffer the most; the system may even "freeze" until NTFS has scraped together enough free-space crumbs to write your file, which can take a while.
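
If you want to see how fragmented a volume actually is before paying for anything, Windows ships a command-line defragmenter; here is a minimal sketch that wraps it from Python (Windows only, run from an elevated prompt; the drive letter is an example):

```python
# Ask Windows' built-in defragmenter for a fragmentation report.
# Needs an elevated (administrator) prompt; Windows only.
import subprocess

result = subprocess.run(
    ["defrag", "D:", "/A", "/V"],  # /A = analyze only, /V = verbose output
    capture_output=True, text=True
)
print(result.stdout)
```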

 

Using a NAS like Synology or QNAP will usually cost you performance compared to a well-built "storage PC" running RAID; we tested that a lot at work (an IT system house). (OK, if you use a really old machine as a storage server, a NAS might be faster, but any current system will beat a NAS, although it will usually draw more energy as well. Keep in mind, though, that drives like the WD Red or Seagate IronWolf are made for 24/7 usage; making them spin down and up all the time as an energy saver shortens their lifespan dramatically.)

 

 

Cheers

Ang

Main System:

Anghammarad : Asrock Taichi x570, AMD Ryzen 7 5800X @4900 MHz. 32 GB DDR4 3600, some NVME SSDs, Gainward Phoenix RTX 3070TI

 

System 2 "Igluna" AsRock Fatal1ty Z77 Pro, Core I5 3570k @4300, 16 GB Ram DDR3 2133, some SSD, and a 2 TB HDD each, Gainward Phantom 760GTX.

System 3 "Inskah" AsRock Fatal1ty Z77 Pro, Core I5 3570k @4300, 16 GB Ram DDR3 2133, some SSD, and a 2 TB HDD each, Gainward Phantom 760GTX.

 

On the Road: Acer Aspire 5 Model A515-51G-54FD, Intel Core i5 7200U, 8 GB DDR4 Ram, 120 GB SSD, 1 TB SSD, Intel CPU GFX and Nvidia MX 150, Full HD IPS display

 

Media System "Vio": Aorus Elite AX V2, Ryzen 7 5700X, 64 GB Ram DDR4 3200 Mushkin, 1 275 GB Crucial MX SSD, 1 tb Crucial MX500 SSD. IBM 5015 Megaraid, 4 Seagate Ironwolf 4TB HDD in raid 5, 4 WD RED 4 tb in another Raid 5, Gainward Phoenix GTX 1060

 

(Abit Fatal1ty FP9 IN SLI, C2Duo E8400, 6 GB Ram DDR2 800, far too little disk space, Gainward Phantom 560 GTX broken, needs fixing)

 

Nostalgia: Amiga 1200, Tower Build, CPU/FPU/MMU 68EC020, 68030, 68882 @50 Mhz, 10 MByte ram (2 MB Chip, 8 MB Fast), Fast SCSI II, 2 CDRoms, 2 1 GB SCSI II IBM Harddrives, 512 MB Quantum Lightning HDD, self soldered Sync changer to attach VGA displays, WLAN


Well, what you're dealing with is a performance curve, and it's really up to you at what point on that curve you deem it too slow. I have a vague memory of a program people would use that read/wrote at various points across the disk and showed the performance from the beginning of the disk (the fast outer tracks) to the end (the slower inner tracks); from there, people would partition the drive so they'd have a fast partition. I don't recall the name of the software, though.
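
The idea behind that kind of tool is simple to sketch: time fixed-size reads at increasing offsets across the disk. A rough Python illustration (the file name is a stand-in; a raw device path would need admin rights, and reading a recently cached file will overstate speeds):

```python
# Crude throughput-vs-position test: time large sequential reads at
# several offsets across a disk (or a big file as a safe stand-in).
import os
import time

PATH = "bigfile.bin"      # stand-in; a raw device path needs admin rights
BLOCK = 64 * 1024 * 1024  # 64 MiB per sample

size = os.path.getsize(PATH)
with open(PATH, "rb", buffering=0) as f:
    for frac in (0.0, 0.25, 0.5, 0.75, 0.95):
        f.seek(int(size * frac))
        t0 = time.perf_counter()
        data = f.read(BLOCK)
        dt = time.perf_counter() - t0
        print(f"{frac:>4.0%} into the disk: {len(data) / dt / 1e6:6.1f} MB/s")
```

On a healthy HDD you would expect the samples near 0% to be noticeably faster than the ones near 95%, which is exactly the performance curve described above.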

 

Defragmentation is really not all that beneficial unless you are writing/reading/deleting/rewriting a lot of data over a long period of time. In my experience, once you go beyond about the ~80% full point is where you really start to see throughput degradation: anything I/O-intensive that has to access data that far along the disk will appear slower. I would imagine the "controversy" you've run into exists because everybody's use case is different; what affects one user's workflow doesn't necessarily affect another's, so you get different answers and different opinions. The only real way to get the best answer is to adapt it to your specific use case. There isn't any one be-all and end-all when it comes to this.

 

If you're only on a 1 Gbit network, you have a decent amount of headroom before you have to worry about maximum performance here. It wouldn't be until the pool is quite overwhelmingly full that you'd start to notice a loss, since the most you'd ever see is about 115 MB/s. Of course, if it can be helped, we wouldn't want the maximum read/write speeds ever dropping below that.
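
For reference, the ~115 MB/s figure is just gigabit line rate minus protocol overhead; a one-off Python check (the 92% efficiency factor is a rough assumption, not a measured value):

```python
# Gigabit Ethernet ceiling: 1 Gbit/s of raw line rate is 125 MB/s;
# TCP/IP + SMB framing eats roughly 8%, leaving about 115 MB/s.
line_rate_mb = 1_000_000_000 / 8 / 1_000_000   # 125.0 MB/s
efficiency = 0.92                              # rough assumption
print(f"raw {line_rate_mb:.0f} MB/s -> ~{line_rate_mb * efficiency:.0f} MB/s usable")
```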


34 minutes ago, Anghammarad said:

As long as you defrag frequently, you can fill a drive to 99% or so without losing performance on the drive side.

Hmm, how are you going to defrag at 99%? Let's say I have a 100-200 GB file; doesn't defrag need enough room to move that file around? If the HDD is almost 'topped off', you won't be able to move that file around to defrag it. At least, that's my understanding of the situation. And what's so terrible about Windows defrag? It runs on its own, as far as I understand, and I'm generally not a fan of using third-party software unless I absolutely need to.


Windows defrag tells you "Everything's fine! All defragged!", and then you run a real defrag tool like O&O, which is something of a leader in this field, and an in-depth analysis of the drive tells a whole other story.

 

This is now, for me personally, over 20 years of experience with defragging on Windows systems.

 

As for the 100-200 GB file: unless they're really, really small, files aren't moved in one piece but in small chunks, so a good defragger can use this to defrag even such a file on a drive that doesn't have enough free space left to hold a full copy.
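
To make the chunk argument concrete, here is a toy Python simulation (purely conceptual, not how O&O actually works internally): it shows that a file far larger than the remaining free space can still be relocated, as long as each moved chunk frees its old blocks:

```python
# Toy model: relocate a 2000-block file using only 100 free blocks,
# moving CHUNK blocks at a time. Each move copies a chunk into free
# space and then releases the chunk's old blocks, so free space never
# needs to hold the whole file.
CHUNK = 64
file_blocks = 2000
free_blocks = 100

assert free_blocks >= CHUNK, "need at least one chunk of free space"

moves = 0
remaining = file_blocks
while remaining > 0:
    step = min(CHUNK, remaining)
    remaining -= step  # chunk copied out; its old blocks are freed again
    moves += 1

print(f"file relocated in {moves} chunk-sized moves")  # 32 moves
```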

 

With O&O you can watch what it does: which blocks it reads and where it puts them. This can take a while (the less free space there is, the more data has to be shuffled around), but it still works.

 

As for O&O, I'm not promoting everything in their portfolio, but the defrag is, in my experience over the years, the best on the market. Their backup/imaging software isn't bad either, but there are better choices out there.

 

And one more "as for": as for the price of O&O Defrag, there are special offers every few weeks where you can get it for around 15 bucks, or for 40 bucks you get their "all around happy" package with 5 licences per included product (which I usually grab every 2-3 years, when the upgrades are worth it).



34 minutes ago, Windows7ge said:

I would imagine the "controversy" you've experienced is because everybody's use case is different.

Even under the same use-case scenario I receive different responses, especially when it comes to defragging. I'm primarily using these drives for media consumption and some document backup via my NAS/desktop PC, so I'll typically be accessing files from 8 GB up to 100-200 GB, though I'll also have smaller files (Windows Notepad small). Based on my personal experience with 'topping off' drives, they do seem to be a bit slower once the blue capacity bar turns red.

Example: in my Windows desktop I have one HDD with 697 GB free out of 9.09 TB and one with 75.4 GB free out of 7.27 TB. Both *seem* to be slower than they were before (accessing/reading); I'm not sure about writing, as I've never measured before/after performance.

As I mentioned in my OP, the reason I'm asking is that I'd prefer to fill up these drives, but not if it's going to create slow access times or interfere with video/audio playback of large files. Tinkering around to find the 'sweet spot' could be time-consuming, so I was checking to see if someone could offer a precise estimate; transferring this much data only to find I've transferred too much and have to remove a lot of it would be annoying.


24 minutes ago, Vectraat said:

so I was checking to see if someone could offer a precise estimate

I know there are people here who could answer that with statistical data. Really, though, it's never a good idea to try to write over every last available bit before adding a new disk. I don't see why you would want to maintain a full-as-possible array; you will see performance degradation as a result.


Personally, I don't worry about defrag (I let it run on schedule but never check it); I've never encountered any meaningful decrease or increase in performance. If you're doing a lot of deletions it may become an issue, but it would rarely cause application performance or access issues like video files not playing back or seeking well.

 

For larger servers with hundreds of users accessing them simultaneously, sequential access patterns actually become random I/O as the server services all of those sessions: the more users, the more random-like the I/O, and the less impact fragmentation has, since the disks will be constantly seeking for blocks of data anyway.


Manually defragging hasn't really been necessary since the early 2000s, in all honesty; since Vista, Windows has auto-defragged to keep everything in tip-top shape. Now, speaking as a jaded, bitter old man (ish): drive health is a crapshoot. They either will last you or they won't. Modern drives are smart enough to mark bad sectors as unusable, so I wouldn't expect performance to degrade regardless of how you use it, at least not in leaps and bounds.

 

In regards to your 100-200 GB file, Windows stores bits of it in RAM (physical or virtual) while it plays musical chairs. My 8 TB external has 300 GB left, which is what, about 4%? I'm still getting upwards of 150 MB/s, and I have definitely been removing/adding files of various sizes over the past year, so if it were 2001 I'd expect my seek times to be measured in seconds and my speeds to be halved, lol.

 

The reason you get conflicting information is that people have been doing this for 20+ years and have knowledge spanning many iterations of technology.

 

 


1. Don't worry about defrags. In terms of partitioning the RAID array itself, I wouldn't bother with "overprovisioning" (the practice of leaving some space unpartitioned).

2. I wouldn't worry too much about how close you get to full (say, keep it above 5% free, but realistically it won't make much difference anyway).

3. Don't fill up your drive completely. This isn't for performance reasons so much as practical ones: plan ahead. If your free space drops under 10%, it's time to order a new drive, since you'll likely fill the drive up completely and run out of spare space soon enough anyway (a quick check is sketched below).
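
A one-shot Python check of that last rule of thumb (the drive letter is an example; point it at your data drive or the NAS share's mount point):

```python
# Check free space against the "under 10% free -> buy a drive" rule.
import shutil

total, used, free = shutil.disk_usage("D:\\")  # example path
pct_free = free / total
print(f"{free / 1e12:.2f} TB free of {total / 1e12:.2f} TB ({pct_free:.1%})")
if pct_free < 0.10:
    print("Under 10% free: time to plan for the next drive.")
```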

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 

