
Need storage space? Seagate to introduce 20TB drives aimed at mass market sales later this year

BondiBlue

Summary

Seagate has been producing 20TB drives for some time now, but it's planning to introduce mass-market SMR and PMR drives of the same capacity later this year. These drives will use the more conventional shingled magnetic recording (SMR) and perpendicular magnetic recording (PMR) technologies instead of the more expensive heat-assisted magnetic recording (HAMR) technology used in the existing 20TB drives.

 

Quotes

Quote

"Seagate currently ships 20TB hard drives that use its heat-assisted magnetic recording (HAMR) technology to select partners and inside its Lyve storage systems. However, these drives are not intended as mass market products. Instead, the company is prepping to release PMR-based 20TB HDDs (with two dimensional magnetic recording [TDMR] enhancement) for typical customers requiring high capacities, and it's also working on SMR-based 20TB drives for hyperscalers with software that can take into account shingled magnetic recording technology."

 

My thoughts

I know that I won't personally have a 20TB drive any time soon, but having large drives is still a plus. In time these sorts of products get cheaper, so maybe in a few years time it would be viable for many people to utilize these for high capacity applications, such as in a home NAS or media server. There are some downsides to having such large quantities of data contained on a single drive, though. Say, for example, you have a RAID 1 array with 2 of these 20TB drives, and one of them fails. Your other drive now contains the only copy of the data (in this example), and it would need to handle up to 20TB of data being read from it when filling the replacement drive up. That's a lot of stress to put a drive through, and in smaller scale situations it could lead to data loss if a drive dies during a rebuild process. 
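To put a rough number on that rebuild scenario, here's a quick back-of-the-envelope sketch. The sustained rate is just an assumption for a modern high-capacity CMR drive; real rebuilds are usually slower once other I/O gets involved.

```python
# Back-of-the-envelope estimate of a RAID 1 rebuild with 20TB drives.
# The sustained rate below is an assumption for a large CMR drive;
# real-world rebuilds are usually slower, especially with other I/O running.
CAPACITY_TB = 20
SUSTAINED_MB_PER_S = 230  # assumed average sequential rate

capacity_mb = CAPACITY_TB * 1_000_000  # decimal units: 1 TB = 1,000,000 MB
rebuild_hours = capacity_mb / SUSTAINED_MB_PER_S / 3600
print(f"~{rebuild_hours:.0f} hours of non-stop reads from the surviving drive")
# -> roughly 24 hours in the best case, during which the only copy of the
#    data is being hammered continuously.
```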

 

Still, for some people they could definitely make sense. I know I'd love to have that much storage in a small space. 

 

Sources

Tom's Hardware Link

Phobos: AMD Ryzen 7 2700, 16GB 3000MHz DDR4, ASRock B450 Steel Legend, 8GB Nvidia GeForce RTX 2070, 2GB Nvidia GeForce GT 1030, 1TB Samsung SSD 980, 450W Corsair CXM, Corsair Carbide 175R, Windows 10 Pro

 

Polaris: Intel Xeon E5-2697 v2, 32GB 1600MHz DDR3, ASRock X79 Extreme6, 12GB Nvidia GeForce RTX 3080, 6GB Nvidia GeForce GTX 1660 Ti, 1TB Crucial MX500, 750W Corsair RM750, Antec SX635, Windows 10 Pro

 

Pluto: Intel Core i7-2600, 32GB 1600MHz DDR3, ASUS P8Z68-V, 4GB XFX AMD Radeon RX 570, 8GB ASUS AMD Radeon RX 570, 1TB Samsung 860 EVO, 3TB Seagate BarraCuda, 750W EVGA BQ, Fractal Design Focus G, Windows 10 Pro for Workstations

 

York (NAS): Intel Core i5-2400, 16GB 1600MHz DDR3, HP Compaq OEM, 240GB Kingston V300 (boot), 3x2TB Seagate BarraCuda, 320W HP PSU, HP Compaq 6200 Pro, TrueNAS CORE (12.0)


In case anyone else was wondering like me: PMR = CMR (conventional magnetic recording).

 

I am always surprised that they do CMR in the top sizes at the same time as they do SMR. Wouldn't it make sense for SMR to be bigger?


11 hours ago, poochyena said:

I'm still here with my 450GB ssd and 280gb backup drive

yeah but if the RAID fails it doesn't take you 3 weeks to rebuild the array 🤣


11 hours ago, Brooksie359 said:

Unless it's like 100 bucks I won't touch it. 

Considering most 4TB drives are between $85 and $100 and the cheapest 10TB is $200, it's extremely unlikely you'll see that pricing within the next decade.

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866mhz    GPU EVGA GTX1080Ti FTW3    Case Corsair 380T   

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


As mentioned, I'd be wary of going too far up in HDD capacity - the time it would take to read (let alone write) all the data it can store is significant. Unless you're looking for a very dense backup solution that won't be written to very often, you might be better off with a couple of 10TB drives instead.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


And here I am contemplating a 4TBx2 upgrade for my mass storage needs, thinking about how it'll likely last me another 5 to 8 years. 

55 minutes ago, TVwazhere said:

Considering most 4TB drives are between $85 and $100 and the cheapest 10TB is $200, it's extremely unlikely you'll see that pricing within the next decade.

I mean, yes, but it's not because of production costs. There simply isn't a need for most people to have that kind of storage, and that need will only shrink as more stuff moves online. I'd be surprised if it ever drops that low.

CPU: Ryzen 9 5900 Cooler: EVGA CLC280 Motherboard: Gigabyte B550i Pro AX RAM: Kingston Hyper X 32GB 3200mhz

Storage: WD 750 SE 500GB, WD 730 SE 1TB GPU: EVGA RTX 3070 Ti PSU: Corsair SF750 Case: Streacom DA2

Monitor: LG 27GL83B Mouse: Razer Basilisk V2 Keyboard: G.Skill KM780 Cherry MX Red Speakers: Mackie CR5BT

 

MiniPC - Sold for $100 Profit

Spoiler

CPU: Intel i3 4160 Cooler: Integrated Motherboard: Integrated

RAM: G.Skill RipJaws 16GB DDR3 Storage: Transcend MSA370 128GB GPU: Intel 4400 Graphics

PSU: Integrated Case: Shuttle XPC Slim

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

Budget Rig 1 - Sold For $750 Profit

Spoiler

CPU: Intel i5 7600k Cooler: CryOrig H7 Motherboard: MSI Z270 M5

RAM: Crucial LPX 16GB DDR4 Storage: Intel S3510 800GB GPU: Nvidia GTX 980

PSU: Corsair CX650M Case: EVGA DG73

Monitor: LG 29WK500 Mouse: G.Skill MX780 Keyboard: G.Skill KM780 Cherry MX Red

 

OG Gaming Rig - Gone

Spoiler

 

CPU: Intel i5 4690k Cooler: Corsair H100i V2 Motherboard: MSI Z97i AC ITX

RAM: Crucial Ballistix 16GB DDR3 Storage: Kingston Fury 240GB GPU: Asus Strix GTX 970

PSU: Thermaltake TR2 Case: Phanteks Enthoo Evolv ITX

Monitor: Dell P2214H x2 Mouse: Logitech MX Master Keyboard: G.Skill KM780 Cherry MX Red

 

 


57 minutes ago, Sauron said:

As mentioned, I'd be wary of going too far up in HDD capacity - the time it would take to read (let alone write) all the data it can store is significant. Unless you're looking for a very dense backup solution that won't be written to very often, you might be better off with a couple of 10TB drives instead.

The bigger risk, since they'd generally be used in a RAID array:

Once they're old enough that a single drive dies?  The amount of time it takes to rebuild an array at 20TB per drive is /staggering/.  That much constant drive thrashing across the other "old" drives in the array could legitimately cause them to fail.


2 hours ago, Loote said:

I am always surprised that they do CMR in the top sizes at the same time as they do SMR. Wouldn't it make sense for SMR to be bigger?

Yes, no, maybe. The SMR version is usually more about lowering cost than outright achieving higher capacity. HDD manufacturers aim for fairly standardized, expected sizes, so if the extra platter(s) in an SMR drive can't land on one of those expected capacities, they probably won't build it just because it's possible. It can also be tricky to offer really large capacities only in SMR and not in PMR or HAMR, as you run into issues with customer expectations; it's hard to stop customers from buying unsuitable HDDs if all they care about is the largest capacity available, for example.


3 minutes ago, tkitch said:

Once they're old enough that a single drive dies?  The amount of time it takes to rebuild an array at 20TB per drive is /staggering/.  That much constant drive thrashing across the other "old" drives in the array could legitimately cause them to fail.

Don't use traditional RAID with drives this large and it's not a problem. Use something like Ceph with sensible limits on the recovery/backfill parameters and you'll be able to keep data redundancy intact at all times while rebuilds won't kill disks.
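As a rough sketch of why that helps (the numbers are illustrative assumptions, not benchmarks): in a scale-out system like Ceph the lost replicas are rebuilt from many surviving disks at once, each throttled by knobs like osd_max_backfills and osd_recovery_max_active, instead of one mirror partner streaming the whole 20TB.

```python
# Sketch: why recovering a failed 20TB disk is gentler in a scale-out setup
# like Ceph than in a two-drive mirror. All numbers are assumptions.
FAILED_DISK_TB = 20
SURVIVING_DISKS = 24             # assumed number of disks holding replica data
PER_DISK_RECOVERY_MB_PER_S = 50  # assumed throttled rate per disk; in Ceph this
                                 # is governed by settings like osd_max_backfills
                                 # and osd_recovery_max_active

# Each surviving disk only streams a fraction of the lost data...
per_disk_tb = FAILED_DISK_TB / SURVIVING_DISKS
hours = per_disk_tb * 1_000_000 / PER_DISK_RECOVERY_MB_PER_S / 3600
print(f"~{per_disk_tb:.2f} TB and ~{hours:.1f} h of throttled I/O per surviving disk")
# ...whereas in RAID 1 a single partner drive has to stream the entire 20 TB.
```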


1 hour ago, Franck said:

yeah but if the RAID fails it doesn't take you 3 weeks to rebuild the array 🤣

Yeah and in those three weeks the rest of the drives fail during the rebuild lmao


2 minutes ago, leadeater said:

Don't use traditional RAID with drives this large and it's not a problem.

Sustained throughput scales with the areal density of data on the platters at a given RPM.
 

That said, I've rebuilt RAID 6 arrays that took 3 days to complete, whereas a RAID 10 rebuild took 12 hours or less. The bottleneck was calculating the parity data.
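For anyone curious what that parity work actually looks like, here's a toy sketch of the XOR math a RAID 5-style rebuild repeats for every stripe; RAID 6 layers a second, Reed-Solomon-style syndrome on top (which is where the extra compute goes), while RAID 10 just copies blocks.

```python
# Toy illustration of the per-stripe parity math behind a RAID 5/6 rebuild,
# versus RAID 10 which only copies blocks verbatim.
from functools import reduce
from operator import xor

# One stripe of data blocks spread across three data drives. RAID 6 would add
# a second, Reed-Solomon-style syndrome on top of this single XOR parity.
stripe = [0b10110010, 0b01101100, 0b11100001]
parity = reduce(xor, stripe)

# Lose drive 1: its block is rebuilt by XORing the survivors with the parity.
lost = 1
survivors = [blk for i, blk in enumerate(stripe) if i != lost]
recovered = reduce(xor, survivors + [parity])

assert recovered == stripe[lost]
print(f"recovered block: {recovered:08b}")
# A rebuild repeats this for every stripe on the disk, which is why parity
# RAID leans on the controller's CPU and cache far more than a mirror does.
```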


3 hours ago, StDragon said:

Sustained throughput scales with the areal density of data on the platters at a given RPM.
 

That said, I've rebuilt RAID 6 arrays that took 3 days to complete, whereas a RAID 10 rebuild took 12 hours or less. The bottleneck was calculating the parity data.

Yea, people run into problems when using not-so-good RAID cards (e.g. 3ware) and pure software RAID. As long as you've got a decent RAID card with write-back cache that's actually active, rebuilds aren't much of a problem, but it gets real bad real fast outside of that. It's why I only use higher-end LSI or HPE RAID cards; I've got no time for messing around with brands like HighPoint.

 

Even so, throughput does increase a bit, but sadly it isn't linear with the increase in capacity. Helium drives, for example, decreased platter thickness so more platters could be put into the same space, so throughput was unchanged for those.

 

It's a big old "it depends" problem. I'd still recommend moving away from RAID if possible, but it's extremely simple and reliable, so it's still a good option, and in many cases the better option.


If people just want 20TB of storage, then rather than a single 20TB drive, a RAID setup would be better:

a RAID 0 of 2x 10TB drives (for the people who like to take risks), a RAID 5 with 4x 6TB drives, or a RAID 6 of 5x 5TB drives.

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

6 minutes ago, Vishera said:

RAID 0 of 2x 10TB drives (for the people who like to take risks)

IMHO RAID 0 no longer serves a purpose. The whole point of using it in the past was for more contiguous space and double the throughput. So, for example, it was perfect for scratch media or video scrubbing. But a 1 or 2TB SSD takes the place of that solution. In fact, depending on the work, you probably don't even need that much space in an SSD for scrubbing.


Looks cool. Seagate's current Exos drives are uber affordable (compared to the market around them, anyway). Just picked up 4x 14TB drives for $290 a pop on Amazon, replacing 4x 3TB drives in a RZ1 setup, and I've been really pleased.

 

On the plus side, between the better NRE, great AFR and 5 year warranty, those drives are actually more efficient than the HGST Deskstar NAS drives they replaced (which had 7W idle, 9W operating, compared to 5W idle and 7-10W operating for the new ones).

 

At $20 a TB that's still pretty crazy nice, so the next gen of insane high-end drives should push these 10-16TB drives even lower. We just need companies to start releasing SSDs bigger than 8TB to consumers so the 4TB non-QLC ones can come closer to the price/GB of 1-2TB drives.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


Just now, StDragon said:

IMHO RAID 0 no longer serves a purpose. The whole point of using it in the past was for more contiguous space and double the throughput. So, for example, it was perfect for scratch media or video scrubbing. But a 1 or 2TB SSD takes the place of that solution. In fact, depending on the work, you probably don't even need that much space in an SSD for scrubbing.

I said it was an option for those who need 20TB but like to take risks...

Speed is always welcome, especially when working with large amounts of data.

Some workloads require more, some less,

but why buy a 20TB SMR drive when you can just go with a RAID setup?

 

Reading/writing from a single 20TB drive will be significantly slower than any configuration of similar capacity that uses striping.
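A rough comparison, assuming each drive manages about 220 MB/s sequential on its own (an assumption; real figures vary by model and platter count):

```python
# Rough sequential-throughput comparison at similar total capacity, assuming
# ~220 MB/s per drive (an assumption; real figures vary by model and platter).
PER_DRIVE_MB_PER_S = 220

configs = {
    "1x 20TB single drive": 1 * PER_DRIVE_MB_PER_S,
    "2x 10TB RAID 0":       2 * PER_DRIVE_MB_PER_S,
    "4x 6TB RAID 5":        3 * PER_DRIVE_MB_PER_S,  # ~(N-1)x: parity blocks carry no user data
}
for name, mb_per_s in configs.items():
    hours = 20_000_000 / mb_per_s / 3600  # time to move ~20 TB of data
    print(f"{name}: ~{mb_per_s} MB/s, ~{hours:.1f} h to move 20 TB")
```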

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566

For the most part, on the consumer side, the vast majority of people will be fine with current multi-TB HDDs for mass storage until SSDs catch up even more on price/GB.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Corsair K63 Cherry MX red | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


The largest SMR drives I have are for a Plex server - I think it's 2x 8TB in RAID 0 (because I'm a madman) - and when they hit a cascaded rewrite it takes like 15 minutes for the system to realize what's happening as it locks up and freezes while the drives sit there at max activity. 20TB SMR drives can't be a usable thing unless it's cold storage, IMO.

 

https://www.seagate.com/tech-insights/breaking-areal-density-barriers-with-seagate-smr-master-ti/

"When a user needs to rewrite or update existing information, SMR drives will need to correct not only the requested data, but any data on the following tracks. Since the writer is wider than the trimmed track, all data in surrounding tracks are essentially picked up and as a result will need to be rewritten at a later time  When the data in the following track is rewritten, the SMR drive would need to correct the data in the subsequent track, [potentially] repeating the process accordingly until the end of the drive."


As someone whose 8TB WD Red died this week (within warranty, phew) - maintaining this is going to be a pain in the ass. A single pre-clear run on Unraid took me 54 hours, and another 15 for the parity sync to rebuild the array - so a total of 69 hours (heh) to get the entire array online. For 8TB.

 

I think it's fair to say that this will take 2.5 times that, so 173 hours. An entire week. At what point do you stop upgrading storage size and start upgrading storage speed? For me, the cutoff is 10 TB. 

I like cute animal pics.

Mac Studio | Ryzen 7 5800X3D + RTX 3090


Sounds like it's going to be nice having more TB, but not exactly new tech?

*gets a glass storage brick with 100 TBs*

Oh well, maybe at some point we'll see a common new replacement for the current HDD that gives more consumers high-TB storage.


On 7/23/2021 at 8:15 AM, Sauron said:

As mentioned, I'd be wary of going too far up in HDD capacity - the time it would take to read (let alone write) all the data it can store is significant. Unless you're looking for a very dense backup solution that won't be written to very often, you might be better off with a couple of 10TB drives instead.

Fun fact: people have been making this exact same argument for decades. Literally, when drives grew bigger than 10GB? Same argument. 1TB? Same argument. Every time a new, bigger drive comes out, someone says 'Oh wow, but what if you lost *that much data* on a single drive??? And it'd take so long to write???' Meanwhile, they're storing data on drives that someone said the same thing about years ago.

Desktop: Ryzen 9 3950X, Asus TUF Gaming X570-Plus, 64GB DDR4, MSI RTX 3080 Gaming X Trio, Creative Sound Blaster AE-7

Gaming PC #2: Ryzen 7 5800X3D, Asus TUF Gaming B550M-Plus, 32GB DDR4, Gigabyte Windforce GTX 1080

Gaming PC #3: Intel i7 4790, Asus B85M-G, 16GB DDR3, XFX Radeon R9 390X 8GB

WFH PC: Intel i7 4790, Asus B85M-F, 16GB DDR3, Gigabyte Radeon RX 6400 4GB

UnRAID #1: AMD Ryzen 9 3900X, Asus TUF Gaming B450M-Plus, 64GB DDR4, Radeon HD 5450

UnRAID #2: Intel E5-2603v2, Asus P9X79 LE, 24GB DDR3, Radeon HD 5450

MiniPC: BeeLink SER6 6600H w/ Ryzen 5 6600H, 16GB DDR5 
Windows XP Retro PC: Intel i3 3250, Asus P8B75-M LX, 8GB DDR3, Sapphire Radeon HD 6850, Creative Sound Blaster Audigy

Windows 9X Retro PC: Intel E5800, ASRock 775i65G r2.0, 1GB DDR1, AGP Sapphire Radeon X800 Pro, Creative Sound Blaster Live!

Steam Deck w/ 2TB SSD Upgrade


4 hours ago, CerealExperimentsLain said:

Fun fact: people have been making this exact same argument for decades. Literally, when drives grew bigger than 10GB? Same argument. 1TB? Same argument. Every time a new, bigger drive comes out, someone says 'Oh wow, but what if you lost *that much data* on a single drive??? And it'd take so long to write???' Meanwhile, they're storing data on drives that someone said the same thing about years ago.

The difference is that hard disk speeds are not increasing anymore. If they could make this run at 500MB/s it would be different.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


21 minutes ago, Sauron said:

The difference is that hard disk speeds are not increasing anymore. If they could make this run at 500MB/s it would be different.

Wasn't that talked about in their (LTT) NVMe 2.0 hardware update, and could it affect HDDs too?

