
Western Digital started shipping 20TB drives

AndreiArgeanu
13 hours ago, DildorTheDecent said:

>Imagine having a 32 bay, 2.5 inch server and not filling it with dirt cheap/Chinese garbage SSDs and configuring them into a giant RAID 0 array and seeing which one dies last and then sending that drive back to the manufacturer along with a note that says "this drive survived, use it for development purposes"

 

Madness.

Problem is, no cheap SSDs work on hardware RAID controllers, and they perform worse than HDDs, but hey, they'd all die within a year anyway. I've already killed 4 SSDs that way, and they weren't super cheap ones either; they were SanDisks.


4 hours ago, Tony Tony Chopper said:

 

My first SSD had a SanDisk controller and died at 99% health. I've only had Samsung and Crucial SSDs since then and none of them have died; the most worn is at 95% health with 40 TB written. I figure an SSD would die quite easily when used for caching in a NAS, though.

I've had one 840 EVO die, but I usually buy Pros. I only bought cheap SSDs for my offsite server, which doesn't matter as much, but I mostly regret not just spending more on the Samsung Pros I would normally get.


 

 

I want to see Linus review these for his next server upgrade, only this time WD should fill every single one of the drives they send him with images of Linus shopped onto Jean-Claude Van Damme.

 

Literally millions of images of Linus as Jean-Claude Van Damme.

 

 

PC - NZXT H510 Elite, Ryzen 5600, 16GB DDR4-3200 (2x8GB), EVGA 3070 FTW3 Ultra, Asus VG278HQ 165Hz

 

Mac - 1.4GHz i5, 4GB DDR3-1600, Intel HD 5000. x2

 

Endlessly wishing for a BBQ in space.


On 12/28/2019 at 3:30 AM, leadeater said:

People did want them in the server space, and the problem was the platter density wasn't really getting that much better. Over a 3.5" disk with multiple platters the size increases were worth it; that wasn't the case for 2.5". Thickness is a non-issue for servers as there is a standard size, much thicker than in laptops.

 

And I wasn't even wanting SATA 2.5"; I was waiting for NL-SAS and SAS to go over 2TB, disks that have no relation to laptops and their constraints.

 

The stopgap during that time period was to use double-dense 3.5" bays to double the disks per chassis and also benefit from the increases in disk size for 3.5".

What standard? There used to be normal 2.5" drives and "slim" 2.5" drives. Now they go up to 15mm thickness, and everything in between. I realized that when I was thinking of buying a 5TB 2.5" drive and sticking it in my case's toolless 2.5" cages, only to realize none of the large-capacity 2.5" drives would fit coz they are all too thick. LOL. I'd have to buy an adapter and stick them in the 3.5" cages coz they'll only fit there...


29 minutes ago, RejZoR said:

Now they go up to 15mm thickness

Server 2.5" bays have always been 15.6mm, the discussion was about servers not consumer devices that use pretty much what ever they can get a manufacture to make for them. The only 2.5" storage devices in servers that are not 15mm are SSDs mounted in 15.6mm 2.5" sleds.

 

Edit:

Like when I started with, saying I bought my 32-bay 2.5" server on the presumption of the continued capacity increase of HDDs, which had until then held true. This server:


https://lenovopress.com/tips0852-system-x3500-m4


21 minutes ago, RejZoR said:

I realized that when I was thinking of buying a 5TB 2.5" drive and sticking it in my case's toolless 2.5" cages.

I had to design my own SFF case to be able to fit 2 of them...

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


On 12/26/2019 at 4:05 PM, The1Dickens said:

JBOD?

Yep. That's it. Might be a terrible choice. I know little about it. I just poked about a bit since I now have a term I can search (thanks btw :) ). It seems that it has its points, in that you can just hook a bunch of random disparate things together. It appears it has less redundancy and backup than I thought it had (apparently zero!). Apparently the deal is you have to do your backing up using a different system; there are just more of them available now. A JBOD is something I personally would very much want to keep redundant and backed up.

Putting two JBOD arrays in RAID 1 might help, if such a thing could even be done.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


1 hour ago, Bombastinator said:

Yep. That's it. Might be a terrible choice. I know little about it. I just poked about a bit since I now have a term I can search (thanks btw :) ). It seems that it has its points, in that you can just hook a bunch of random disparate things together. It appears it has less redundancy and backup than I thought it had (apparently zero!). Apparently the deal is you have to do your backing up using a different system; there are just more of them available now. A JBOD is something I personally would very much want to keep redundant and backed up.

Putting two JBOD arrays in RAID 1 might help, if such a thing could even be done.

JBOD really just means more drive connectors in a convenient chassis; the point is to have disks connected to the host system with nothing in between, so they all appear as individual disks, no different from the SATA ports on a motherboard. JBODs are most commonly used with software-defined storage and software RAID, rather than hardware RAID, where 8 years later you might not be able to get a replacement card if it fails, which means you're dead in the water.

 

The problem is there is another thing which is also called a JBOD, which is not a JBOD! The other one you might have seen is a SPAN, where all the disks appear as one large contiguous volume across the disks, not using RAID 0, but everything still dies if only 1 disk fails. It's just bad.
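To make the difference concrete, here's a rough Python sketch of my own (not any real RAID stack's code; the function names and numbers are all made up for illustration) of how a SPAN maps a logical block compared to RAID 0 striping:

```python
# Toy illustration of SPAN vs RAID 0 address mapping. All names and
# numbers here are invented for the example, not from a real controller.

def span_locate(lba, disk_sizes):
    """SPAN/BIG: disks are simply concatenated, so logical blocks fill
    disk 0 first, then disk 1, and so on. Mismatched sizes are fine."""
    for disk, size in enumerate(disk_sizes):
        if lba < size:
            return disk, lba
        lba -= size
    raise ValueError("LBA beyond end of spanned volume")

def raid0_locate(lba, num_disks, chunk_blocks):
    """RAID 0: logical blocks are striped across all disks in fixed-size
    chunks, so large sequential transfers hit every disk at once."""
    chunk = lba // chunk_blocks        # which chunk this block is in
    offset = lba % chunk_blocks        # position inside that chunk
    disk = chunk % num_disks           # chunks rotate across the disks
    row = chunk // num_disks           # which stripe row on that disk
    return disk, row * chunk_blocks + offset

print(span_locate(1500, [1000, 2000, 500]))  # -> (1, 500)
print(raid0_locate(1500, 3, 128))            # -> (2, 476)
```

Either way there's no redundancy: one dead disk takes the whole volume with it. A SPAN just adds capacity without even the RAID 0 speed benefit.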


4 minutes ago, leadeater said:

JBOD really just means more drive connectors in a convenient chassis; the point is to have disks connected to the host system with nothing in between, so they all appear as individual disks, no different from the SATA ports on a motherboard. JBODs are most commonly used with software-defined storage and software RAID, rather than hardware RAID, where 8 years later you might not be able to get a replacement card if it fails, which means you're dead in the water.

 

The problem is there is another thing which is also called a JBOD, which is not a JBOD! The other one you might have seen is a SPAN, where all the disks appear as one large contiguous volume across the disks, not using RAID 0, but everything still dies if only 1 disk fails. It's just bad.

So there is JBOD and JBOD!. Argh.
The definition of terms keeps eating me here. English is inadequate.

Does JBOD! refer to a SPAN (using whichever of the several systems that are used to do it) on top of a JBOD, then?

 

I was thinking JBOD (! I guess) because the issue seemed to be the limited number of SATA ports available on the main machine's motherboard and the cost of hard drives vs the size of the files, which are apparently huge. A JBOD (no "!") doesn't need drives to be the same size, and mechanical hard drives are slow; SATA 6Gb/s is kind of wasted on them. A JBOD(!) box could be built out of any crusty old spare machine, stuffed full of low-port-count SATA controllers of whatever type and then filled with whatever old drives are lying about, in an ad hoc manner, without really caring what size they are. Fragility would be an issue, of course.



Hold up, just lemme uhhhh.
Buy 20 and download the entire internet.

I do remember one of the HDD manufacturers somewhere saying something about trying to hit 20TB before 2020?

Someone told Luke and Linus at CES 2017 to "Unban the legend known as Jerakl" and that's about all I've got going for me. (It didn't work)

 


JBOD has been used to refer to SPAN for a long while; only recently have I found consumer products that distinguish the two (and offer both).



8 hours ago, Bombastinator said:

Does JBOD! refer to a SPAN (using whichever of the several systems that are used to do it) on top of a JBOD, then?

It used to be one and the same thing, sorta. A long while ago, if you bought like a 4-12 bay JBOD, the default configuration was the disks in a SPAN. JBOD is the type of enclosure, with no inbuilt hardware RAID or (advanced) storage controller, and the SPAN is what sits on top of that.

 

Personally, in the more enterprise world we just call them disk shelves; no fuss, no confusion. It's just a shelf of disks and nothing else.

 

Honestly I don't think I've ever heard JBOD used in a work context by anyone or any storage vendor. Not something I've really thought about before.


21 hours ago, leadeater said:

It used to be one and the same thing, sorta. A long while ago, if you bought like a 4-12 bay JBOD, the default configuration was the disks in a SPAN. JBOD is the type of enclosure, with no inbuilt hardware RAID or (advanced) storage controller, and the SPAN is what sits on top of that.

 

Personally, in the more enterprise world we just call them disk shelves; no fuss, no confusion. It's just a shelf of disks and nothing else.

 

Honestly I don't think I've ever heard JBOD used in a work context by anyone or any storage vendor. Not something I've really thought about before.

Back in '93 we experimented with JBOD arrays. They were essentially just that: a mix of random drives that presented as one volume. The term JBOD = "just a bunch of drives" meant exactly that. I don't know if it has some more specific definition now or some other industry spec, but it was common in my circles to list array options as RAID 0, 1, 5, 10 and JBOD.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


2 hours ago, mr moose said:

but it was common in my circles to list array options as RAID 0, 1, 5, 10 and JBOD.

Yes, it was called JBOD, but that's just the colloquial name everyone used, whereas the real name of it was SPAN or BIG.

 

Quote
  • JBOD (derived from "just a bunch of disks"): described multiple hard disk drives operated as individual independent hard disk drives.
  • SPAN or BIG: A method of combining the free space on multiple hard disk drives from "JBoD" to create a spanned volume. Such a concatenation is sometimes also called BIG/SPAN. A SPAN or BIG is generally a spanned volume only, as it often contains mismatched types and sizes of hard disk drives.[1]

 

Not that it really matters, as people generally know which is being referred to simply by context, but it's one of those typical cases of a term being misused often enough that it's become the understood name for it while still actually being wrong.

 

I guess that is why today we avoid that term and use disk shelf/enclosure instead.


Are there any general benefits of using a specialized drive for home use over the general-purpose ones? Some come with a SAS interface so those aren't an option, but I think I've seen some WD Purple, WD Gold, Seagate Exos or SkyHawk AI etc. that come with a SATA interface. Are they really so specialized they'd work well only for "video surveillance", or would any of them be more durable long term because they are meant for heavier-duty tasks, so usage in a home would make them more resilient/durable? Just wondering, coz when you go with such high capacities, buying an allegedly more durable drive is more convenient even if it costs a bit more, even for relatively disposable, non-critical data. It's rather inconvenient to redownload all the pr0n, movies or just all the stuff you archived that's not critical but nice to have hoarded at hand instead of relying on internet download links that often go dead over time... One would assume "surveillance" drives might suck at random access but would be great at sequential, which is what a bulk archive drive would essentially be...


24 minutes ago, RejZoR said:

Are there any general benefits of using a specialized drive for home use over the general-purpose ones? Some come with a SAS interface so those aren't an option, but I think I've seen some WD Purple, WD Gold, Seagate Exos or SkyHawk AI etc. that come with a SATA interface. Are they really so specialized they'd work well only for "video surveillance", or would any of them be more durable long term because they are meant for heavier-duty tasks, so usage in a home would make them more resilient/durable? Just wondering, coz when you go with such high capacities, buying an allegedly more durable drive is more convenient even if it costs a bit more, even for relatively disposable, non-critical data. It's rather inconvenient to redownload all the pr0n, movies or just all the stuff you archived that's not critical but nice to have hoarded at hand instead of relying on internet download links that often go dead over time... One would assume "surveillance" drives might suck at random access but would be great at sequential, which is what a bulk archive drive would essentially be...

For surveillance disks, overall no. Maybe an edge case in a single-disk setup where you start to reach the maximum performance capability, as these prioritize writes, but really there is little you can do, so it's all borderline margin of error. Once you start putting disks into RAID arrays etc. then all of that goes out the window as write caching comes in, so there are only 2 important factors for that: vibration compensation and read error timeout (TLER in WD speak).

 

NAS-optimized drives, or any with read error timeout/TLER, are actually worse for a single-disk configuration like in a desktop, as the disk is more likely to give up trying on a small error and crash an app or the OS. In this case you actually want "try forever", which is how standard HDDs treat errors. In an array, another disk will have the problem bit/sector, so you want the disk to report back an error as soon as possible so the data can be retrieved from another disk and the error fixed.
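As a rough illustration of that trade-off, here's a toy Python model I put together (it's not vendor firmware; the retry limit and error rate are invented numbers) of a "try forever" desktop drive versus a TLER-style drive that fails fast so the RAID layer can recover the sector from another disk:

```python
# Toy model of desktop "retry forever" vs TLER/ERC "fail fast" behaviour.
# The success rate and retry limit are made-up numbers for illustration.
import random

def recovery_attempt(success_rate=0.3):
    """One attempt at re-reading a marginal sector."""
    return random.random() < success_rate

def drive_read(tler_limit=None):
    """tler_limit=None mimics a desktop drive that retries indefinitely;
    an integer mimics a TLER drive that gives up and reports the error."""
    attempts = 0
    while True:
        attempts += 1
        if recovery_attempt():
            return f"read ok after {attempts} attempt(s)"
        if tler_limit is not None and attempts >= tler_limit:
            # Fail fast: let the RAID controller rebuild this sector from
            # a mirror/parity copy instead of stalling the whole array (or
            # getting the drive dropped from it) while recovery grinds on.
            return "unrecoverable, error reported to controller"

print(drive_read())              # desktop drive: only returns on success
print(drive_read(tler_limit=8))  # NAS/TLER drive: bounded recovery time
```

The point of the bounded limit is exactly the above: alone in a desktop, that early "unrecoverable" answer can crash an app, while in an array it's the fast path to getting the data back from another disk.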


On 12/30/2019 at 9:23 PM, leadeater said:

Server 2.5" bays have always been 15.6mm, the discussion was about servers not consumer devices that use pretty much what ever they can get a manufacture to make for them. The only 2.5" storage devices in servers that are not 15mm are SSDs mounted in 15.6mm 2.5" sleds.

 

Edit:

Like when I started with, saying I bought my 32-bay 2.5" server on the presumption of the continued capacity increase of HDDs, which had until then held true. This server:


https://lenovopress.com/tips0852-system-x3500-m4

Laws of physics... always check the roadblock that is the laws of physics. XD


11 minutes ago, TechyBen said:

Laws of physics... always check the roadblock that is the laws of physics. XD

Well I did; 2.5" was getting larger as time went on back then. The stall was just so unexpected. Damn SSDs reducing the need, lol.


11 minutes ago, leadeater said:

Well I did; 2.5" was getting larger as time went on back then. The stall was just so unexpected. Damn SSDs reducing the need, lol.

Now you can wait for SSDs to continue to drop in price, until the next <insert some BS excuse cooked up by manufacturers> causes a spike and you're stuck with expensive SSDs again.


18 hours ago, leadeater said:

Well I did; 2.5" was getting larger as time went on back then. The stall was just so unexpected. Damn SSDs reducing the need, lol.

The stall is basically at the physics limit (similar to CPUs: yes, the rate of increase at the time was great, but we are hitting hard physical limits).

That's why shingled (SMR) and heat-assisted/microwave (HAMR/MAMR) tech is being developed; they sidestep the physical limit (which was last pushed with perpendicular, i.e. 90-degree, magnetic poles IIRC).

Once those two are done with, I don't see any room left for HDDs to get denser. We'd have to do something drastic like solid-state MEMS or similar. Perhaps removing the actuator arms and replacing them with a single sensing wire, so size and wobble are reduced to allow for more platters...

 

Similar to cars: they are already very close to the physical MPG limits on fuel, so electric or just smaller/lighter cars are the only way forward from here.


29 minutes ago, TechyBen said:

That's why shingled (SMR) and heat-assisted/microwave (HAMR/MAMR) tech is being developed; they sidestep the physical limit (which was last pushed with perpendicular, i.e. 90-degree, magnetic poles IIRC).

Once those two are done with, I don't see any room left for HDDs to get denser

Those were all future tech at the time and still are, really; platters were getting denser, just not enough at the 2.5" size to be worth it (in HDD manufacturers' eyes). Just as a reminder, this is the time period where the largest HDD Seagate sold was 2TB, and that was 3.5". It's also the same time period where Western Digital, instead of increasing 2.5" capacity, created 5mm and 7mm thick 2.5" HDDs with two platters at 250GB per platter. A standard 15mm 2.5" drive has 4 platters, so in 2012 we could have had 2TB 2.5" disks, but we didn't get that.

 

Physical limits are a today thing, not a 2011/2012 issue. We have 5TB 2.5" HDDs today using CMR as evidence that there has been no physical limit, just a lack of drive to actually release the products.

 

In the early 2010s, even more so before, the server world did not want 3.5" disks, as the number of disks per chassis per U was way more important for IOPS, so until then it was critically important that 2.5" drives got larger; there wasn't any reason to think otherwise. Even with the introduction of SSDs, they were extremely small and extremely expensive, so 10K RPM SAS was still the go-to for performance and capacity. 15K RPM was reserved for more specialized use cases, as those drives were quite unreliable, but if you needed the performance then that was it. Early-generation SSDs supplanted 15K RPM SAS because capacity wasn't a necessity there.

 

Edit:

What I was complaining about is that I know for a fact larger 2.5" drives could have been made but were not, which actually impacted me. Having the option now, as I originally said, is worthless, as the server in question is now multiple generations old, so it's not worth sinking money into over the current options of today. Today I have zero interest in 2.5" HDDs or how big they can be.


51 minutes ago, leadeater said:

 

Those were all future tech at the time and still are, really; platters were getting denser, just not enough at the 2.5" size to be worth it (in HDD manufacturers' eyes). Just as a reminder, this is the time period where the largest HDD Seagate sold was 2TB, and that was 3.5". It's also the same time period where Western Digital, instead of increasing 2.5" capacity, created 5mm and 7mm thick 2.5" HDDs with two platters at 250GB per platter. A standard 15mm 2.5" drive has 4 platters, so in 2012 we could have had 2TB 2.5" disks, but we didn't get that.

 

Physical limits are a today thing, not a 2011/2012 issue. We have 5TB 2.5" HDDs today using CMR as evidence that there has been no physical limit, just a lack of drive to actually release the products.

 

In the early 2010s, even more so before, the server world did not want 3.5" disks, as the number of disks per chassis per U was way more important for IOPS, so until then it was critically important that 2.5" drives got larger; there wasn't any reason to think otherwise. Even with the introduction of SSDs, they were extremely small and extremely expensive, so 10K RPM SAS was still the go-to for performance and capacity. 15K RPM was reserved for more specialized use cases, as those drives were quite unreliable, but if you needed the performance then that was it. Early-generation SSDs supplanted 15K RPM SAS because capacity wasn't a necessity there.

 

Edit:

What I was complaining about is that I know for a fact larger 2.5" drives could have been made but were not, which actually impacted me. Having the option now, as I originally said, is worthless, as the server in question is now multiple generations old, so it's not worth sinking money into over the current options of today. Today I have zero interest in 2.5" HDDs or how big they can be.

 

 

The 2.5 inch 5TB drives are thicker though, right? 2.5" has physically smaller dimensions, thus it physically cannot hold as much data as the larger drives. The number of platters, and some of the complexity of the arms, just physically cannot fit as much in.

 

That, and vibration etc. is expected to be higher for a product made more for portable use (and again, the smaller physical size means it gets more vibration, as there's less mass to dampen it).


They are 15mm, but as he said that's the standard thickness for 2.5" drive bays in the server space already. Drives that thick aren't really targeting (current) laptop use for obvious reasons.



24 minutes ago, TechyBen said:

The 2.5 inch 5TB drives are thicker though, right? 2.5" has physically smaller dimensions, thus it physically cannot hold as much data as the larger drives. The number of platters, and some of the complexity of the arms, just physically cannot fit as much in.

No 2.5" 5TB is 15mm thick. As I mentioned in 2012 WD, as well as Seagate, created 5mm and 7mm thick 2.5" disks. Up until then 2.5" was 15mm and for servers it has only ever been 15mm and never anything else. These current 5TB 2.5" HDDs are in the server product lines but that doesn't in any way mandate them to only be used in those situations, so there are 2.5" 5TB external HDDs but the HDD inside is a server spec part. Not all server focused parts are expensive, we do push for the lowest $/GB possible.

 

As far as I'm concerned for my use case anything not 15mm is non-standard.


5 hours ago, leadeater said:

No 2.5" 5TB is 15mm thick. As I mentioned in 2012 WD, as well as Seagate, created 5mm and 7mm thick 2.5" disks. Up until then 2.5" was 15mm and for servers it has only ever been 15mm and never anything else. These current 5TB 2.5" HDDs are in the server product lines but that doesn't in any way mandate them to only be used in those situations, so there are 2.5" 5TB external HDDs but the HDD inside is a server spec part. Not all server focused parts are expensive, we do push for the lowest $/GB possible.

 

As far as I'm concerned for my use case anything not 15mm is non-standard.

In fact, yes! 15mm is the standard for drives focused on servers, but as you said, that does not mean they cannot be used in other cases. It is rare to see because people don't put them in their home computers too often, but it's still a possibility.

Seagate Technology | Official Forums Team

IronWolf Drives for NAS Applications - SkyHawk Drives for Surveillance Applications - BarraCuda Drives for PC & Gaming

