
Moar storage - Seagate 24TB hard drives with HAMR technology set to arrive in 2021.

Summary

 

Seagate has revealed that it will be using HAMR technology in its upcoming hard drives, which allows them to reach capacities of about 24TB for now.

 

Quotes

Quote

Seagate has just announced that it will be using Heat Assisted Magnetic Recording (HAMR) technology in its upcoming hard drives. This new tech greatly reduces grain size with a better signal-to-noise ratio, but will be priced higher than its Perpendicular Magnetic Recording (PMR) counterparts due to the required laser heating diode.

With this new HAMR technology, the hard drive maker intends to put out a 24TB storage drive next year. It is also interesting to point out that while Seagate is betting on HAMR tech, its main competitor, Western Digital, is banking on Microwave Assisted Magnetic Recording (MAMR), as its heads do not require the use of laser heating elements yet can still push storage capacity past the 20TB range.

Seagate CEO Dave Mosley commented on the company’s choice for using HAMR tech: “We know MAMR really well. It’s a viable technology, but it’s, again, a small turn of the crank. What we believe is that HAMR, largely because of the media technology, the ability to store in much, much smaller grain sizes with better signal noise, with much more permanence, is the right path.”

HAMR

My thoughts

Well, hard drive technology is still advancing at a surprising pace. I believe it was last December that the WD 20TB and 18TB Ultrastar HDDs were announced and shipped to OEMs. Frankly, I'm more than curious to see how these MAMR and HAMR hard drives will perform, especially compared to your average hard drive.

 

Sources

TechRadar, Hypebeast


Did they say how fast it would go though?

Specs: Motherboard: Asus X470-PLUS TUF gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200Mhz CL16-18-18-36 2x8GB

            CPU: Ryzen 5 3600 @ 4.1GHz          Case: Antec P8     PSU: G.Storm GS850                        Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM

            Drives: Samsung 970 EVO plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 ti Black edition @ 2Ghz

                                                                                                                             


I'm definitely with Linus on this one (referring to that old WAN show clip about 20TB drives). The imbalance between capacity and speed is getting too big for HDDs this large to be a good choice. You don't want to spend two days rebuilding a failed drive

1 hour ago, williamcll said:

Did they say how fast it would go though?

 

Quote

The Exos 20+ series will be compatible as drop-in drives for 3.5-inch bays. In terms of power consumption, you can count on the HDDs to require less than 12W. They will feature a 7200RPM spindle speed with a read/write speed higher than 261 MB/s. However, little is currently known about the Exos 20+'s random read and write performance.

https://au.pcmag.com/hard-drives/68905/seagate-confirms-worlds-largest-hard-disk-drive-on-track
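As a rough back-of-the-envelope check (my own numbers, assuming the quoted ~261 MB/s sequential rate holds end to end, which a real rebuild or fill almost never does):

```python
# Time to write a full 24TB drive end to end, assuming a sustained ~261 MB/s.
# Inner tracks are slower and parity rebuilds add overhead, so treat this as
# the best case.
capacity_bytes = 24e12          # 24 TB (decimal)
seq_write_bytes_s = 261e6       # ~261 MB/s from the quote above

hours = capacity_bytes / seq_write_bytes_s / 3600
print(f"best case: ~{hours:.0f} hours to fill or rebuild the drive")   # ~26 hours
```

So even in the ideal case you're looking at roughly a day just to write the whole drive once; a real array rebuild with other I/O going on will take noticeably longer.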


I mean, cool for very high capacity, but especially at these sizes they would need multi-actuators and larger caches. You have to increase speed and IOPS somehow to keep up with the much larger capacity.

Though on the consumer side, a much smaller capacity is probably all the mass storage a typical consumer needs, at least until SSDs can finally be found much closer to HDD prices and multi-TB models get way cheaper.

Ryzen 7 3800X | X570 Aorus Elite | G.Skill 16GB 3200MHz C16 | Radeon RX 5700 XT | Samsung 850 PRO 256GB | Mouse: Zowie S1 | OS: Windows 10

1 hour ago, Doobeedoo said:

I mean, cool for very high capacity, but especially at these sizes they would need multi-actuators and larger caches. You have to increase speed and IOPS somehow to keep up with the much larger capacity.

Though on the consumer side, a much smaller capacity is probably all the mass storage a typical consumer needs, at least until SSDs can finally be found much closer to HDD prices and multi-TB models get way cheaper.

Well, imagine pairing this single 24TB HDD with a 2TB or 4TB SSD which acts as a cache for it. 24TB total capacity, with 2 or 4TB of commonly accessed stuff cached and accessible at SSD speeds. I've seen it at a smaller scale using a 2TB HDD and a 256GB M.2 cache and it was spectacular. But people shockingly don't even consider it after everyone saw those shitty SSHD drives with a useless 8GB SSD cache on board. With an 8GB cache you can basically cache just the OS and a few apps. With 256GB or 512GB you can cache the entire OS, all the apps you use daily and also the games you currently play. I was using it for a few years and I was getting a 75-80% cache hit ratio. That means 75-80% of all reads were served from the SSD. And you can sense that in speed as well as noise. On the first system boot or game launch you could hear the grinding of the HDD; after a while everything quiets down and becomes fully silent on the next launch. With some caching systems it happens on the fly while using the apps or games; you don't even have to restart them to really bring out the performance benefits. It just sucks they never moved SSHDs to something with larger caches. I still believe SSHDs are an excellent option, but for some dumb reason everyone making them cocked them up from day one and never fixed their dumb design. I just can't fathom how WD or Seagate couldn't figure it out. Or were unwilling.
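For a rough sense of what a hit ratio like that buys you, here's a quick sketch (the latencies are assumed, illustrative numbers only, not measurements from my setup):

```python
# Average read latency of an SSD-cached HDD, given a cache hit ratio.
hdd_latency_ms = 12.0    # assumed typical 7200 RPM random read
ssd_latency_ms = 0.1     # assumed typical SATA SSD random read
hit_ratio = 0.78         # the 75-80% hit rate mentioned above

avg_ms = hit_ratio * ssd_latency_ms + (1 - hit_ratio) * hdd_latency_ms
print(f"average read latency ~{avg_ms:.1f} ms vs {hdd_latency_ms:.0f} ms uncached")
# ~2.7 ms vs 12 ms -- roughly 4-5x better for everyday random reads
```

The win is mostly in random access; large sequential reads are where the HDD is least embarrassing anyway.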

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3600MHz CL16 | PALIT RTX 3080 10GB GamingPro | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES


Are they seriously going to continue this madness? God, there must be a way to do magnetic recording/reading without requiring moving parts close to the disk, and preferably one that could be done in parallel.


Cool but how long will it take to fill that thing with data?

Hi

 



But why tho, is it significantly cheaper or more reliable than a 24TB SSD?

Quote me for a reply, React if I was helpful, informative, or funny

 

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 pro 

psu: cooler master mwe 650w

case: masterbox mbx520

fans:Noctua industrial 3000rpm x6

 

 

4 hours ago, RejZoR said:

Well, imagine pairing this single 24TB HDD with a 2TB or 4TB SSD which acts as a cache for it. 24TB total capacity, with 2 or 4TB of commonly accessed stuff cached and accessible at SSD speeds. […]

I think part of the reason people don't choose a hard drive plus SSD solution is that SSD prices are so cheap now that you really aren't saving much money until you get to large-capacity drives. I mean, you can get a 1TB SSD for super cheap, and it is so much easier to just spend a little more on the SSD and never have to worry about setting up a cache solution.


Imagine losing 24 terabytes of data from a single disk failure...

 

No but seriously though, I could see myself using these drives in my NAS if they weren't going to be horribly expensive. 

 

Sure they might take ages to fill, and if one breaks rebuilding the RAID is going to be slow as balls, but I design and make purchasing decisions based on how something will perform in the everyday use that's occurring 99% of the time.

I fill my NAS very slowly, maybe a gig or so of data on average every day. So write speed isn't an issue.

I use dual parity as well so even when one drive dies, I am not in a super hurry to rebuild my array. Rebuilding the array will be slow, but it will very rarely happen (hopefully) and when it does it isn't a big deal. 
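Just to put that fill rate in perspective (a rough sketch, assuming a steady 1 GB per day as described above):

```python
# How long until a single 24TB drive is actually full at ~1 GB/day?
capacity_gb = 24_000            # 24 TB expressed in decimal GB
fill_rate_gb_per_day = 1        # "a gig or so of data on average every day"

days = capacity_gb / fill_rate_gb_per_day
print(f"~{days:.0f} days, i.e. about {days / 365:.0f} years")   # ~24000 days, ~66 years
```

Obviously nobody fills a drive that slowly before replacing it, but it shows why write speed is the least of the concerns for this kind of usage.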


I'd love to get my hands on a few of these. I imagine the drives will be eye-wateringly expensive for a while, but I just expanded my current 4x6TB array with another 5x10TB array. That should hopefully last me long enough for the prices to drop.

 

Though in 5 or so years I'd hope we come up with a better connector than SATA for data drives, or a better solution than PCIe eating up so many lanes on consumer CPUs just for storage bandwidth.

Primary:

Intel i5 4670K (3.8 GHz) | ASRock Extreme 4 Z87 | 16GB Crucial Ballistix Tactical LP 2x8GB | Gigabyte GTX980ti | Mushkin Enhanced Chronos 240GB | Corsair RM 850W | Nanoxia Deep Silence 1| Ducky Shine 3 | Corsair m95 | 2x Monoprice 1440p IPS Displays | Altec Lansing VS2321 | Sennheiser HD558 | Antlion ModMic

HTPC:

Intel NUC i5 D54250WYK | 4GB Kingston 1600MHz DDR3L | 256GB Crucial M4 mSATA SSD | Logitech K400

NAS:

Thecus n4800 | WD White Label 8tb x4 in raid 5

Phones:

OnePlus 6T (Mint), Nexus 5X 8.1.0 (wifi only), Nexus 4 (wifi only)

5 hours ago, RejZoR said:

Well, imagine pairing this single 24TB HDD with a 2TB or 4TB SSD which acts as a cache for it. 24TB total capacity, with 2 or 4TB of commonly accessed stuff cached and accessible at SSD speeds. […]

It's a niche thing, which is the reason SSHDs are gone and no new ones are being made. Kinda no point. I had one of those, and they were actually quite good, before I eventually got an SSD. But yeah, I'd rather go full SSD in time, once it becomes even more affordable to stack TBs of it.

Ryzen 7 3800X | X570 Aorus Elite | G.Skill 16GB 3200MHz C16 | Radeon RX 5700 XT | Samsung 850 PRO 256GB | Mouse: Zowie S1 | OS: Windows 10

4 hours ago, LAwLz said:

 

I fill my NAS very slowly, maybe a gig or so of data on average every day. So write speed isn't an issue.

I use dual parity as well so even when one drive dies, I am not in a super hurry to rebuild my array. Rebuilding the array will be slow, but it will very rarely happen (hopefully) and when it does it isn't a big deal. 

You use dual parity (RAID 6) to protect against a cascade failure and loss of the volume.

 

The more spindles you have in the array, the greater the chance of a second drive loss while in the middle of rebuilding a previously failed drive's content from parity. Even more so if all of the drives are old and of the same age. The extra stress from rebuilding is the proverbial straw that breaks the camel's back.

 

So yeah, if you're running a RAID5 array, you're more vulnerable the longer it takes for the array to rebuild. Under that scenario, time isn't on your side.

 

I'm at the point where I'm recommending RAID10.

1 hour ago, StDragon said:

So yeah, if you're running a RAID5 array, you're more vulnerable the longer it takes for the array to rebuild. Under that scenario, time isn't on your side.

 

I'm at the point where I'm recommending RAID10.

Please no. RAID 10 on larger disk-count arrays has a higher chance of array failure than RAID 6 does. With RAID 10 you can lose everything with just two disks failing, both disks in a single mirror, and that is the most likely thing to happen during a rebuild, since the only other disk that is read and stressed during a rebuild is the partner of the disk that failed. When we are talking about very large disks with extended rebuild times, you are asking for problems.

 

Use RAID 10 only when you need the performance, and that is really only for random write IOPS anyway. RAID 6 not only gets safer than RAID 10 as the number of disks increases; with a good LSI RAID card its read and write performance will surpass that of RAID 10. In either case it doesn't take that many disks before the RAID card cache itself becomes the bottleneck, but we're still in high, multi-GB/s territory.

 

And the math to support the above:

[Screenshots of the failure-probability calculations were attached here; images not reproduced.]
RAID 6 is many orders of magnitude safer than RAID 10 is.
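For a rough sense of why, here's a toy model of my own (not the screenshots above): it only asks which layouts survive a given number of simultaneous disk failures, and ignores rebuild time, URE rates and everything else that shapes the real numbers:

```python
from itertools import combinations

# RAID 6 survives any 2 failures; RAID 10 loses data as soon as both disks of
# any one mirror pair are gone. Enumerate every way k disks out of n can fail.

def raid10_loss_probability(num_disks, k):
    """Chance RAID 10 loses data when k of num_disks disks fail at once."""
    mirrors = [(i, i + 1) for i in range(0, num_disks, 2)]
    outcomes = list(combinations(range(num_disks), k))
    lost = sum(1 for failed in outcomes
               if any(a in failed and b in failed for a, b in mirrors))
    return lost / len(outcomes)

def raid6_loss_probability(num_disks, k):
    """RAID 6 tolerates any two failures, loses data at three or more."""
    return 0.0 if k <= 2 else 1.0

n = 12
for k in (2, 3):
    print(f"{k} failures out of {n} disks: "
          f"RAID 10 loss {raid10_loss_probability(n, k):.1%}, "
          f"RAID 6 loss {raid6_loss_probability(n, k):.0%}")
# 2 failures: RAID 10 ~9.1% chance of losing a whole mirror, RAID 6 0%.
```

With 12 disks and two simultaneous failures, RAID 10 already has roughly a 1-in-11 chance that both dead disks were the same mirror, while RAID 6 is still fully intact; and the surviving mirror partner is exactly the disk being hammered during the rebuild, which makes it worse in practice.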

 

The better solution is to use software that works on the data layer, not the disk layer, so it doesn't do any management tied to disks or pairs of disks and only treats the disks as storage buckets, i.e. Ceph. If you want to use data copies rather than parity, use something where you can control and define the placement of those copies over multiple disks, so every disk participates evenly in normal operation (reads and writes) as well as rebuild operations. Traditional RAID and even ZFS have this shortcoming of being disk centric rather than data centric for hardware reliability; you don't want anything like that when dealing with very large disks and large quantities of data. ZFS underneath, say, GlusterFS is or was common.

6 hours ago, gabrielcarvfer said:

Are they seriously going to continue this madness? God, there must be a way to do magnetic recording/reading without requiring moving parts close to the disk, and preferably one that could be done in parallel.

There's a relatively new technology in the works that is designed to use magnetic recording in a cell-based approach with no moving parts. It's called FRAM, or ferroelectric RAM, which is a write-destructive, non-volatile random access memory. It's designed as a contender for flash, primarily due to faster write speeds (~1000x), lower power usage and greater read/write endurance, at the cost of currently small storage densities and a higher price. They're currently (to my knowledge) limited to embedded projects/products, but in the future you might see magnetic SSDs if they develop the technology further, though I doubt it. Where it will most likely see use is the program memory of microcontrollers, as TI has discovered that it only requires 2 extra masks in the manufacturing process to embed FRAM into the silicon, compared to something like 9 for flash.

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz

14 minutes ago, leadeater said:

Traditional RAID and even ZFS have this shortcoming of being disk centric rather than data centric for hardware reliability; you don't want anything like that when dealing with very large disks and large quantities of data.

Interesting. I'm not into server stuff, but data-centric reliability, rebuilding data and scalability just reminded me of torrents.

 

Couldn't it be used to replicate data blocks across a multitude of disks? It could also be used to maintain a distributed map of the blocks, so there's no single point of failure.

 

Resilvering would simply be asking to download a set of specific blocks (which doesn't necessarily need to be tracked, as the number of peers with the blocks can also be used as a heuristic for how many additional copies are required to maintain a certain level of confidence that the data won't be lost).

1 minute ago, gabrielcarvfer said:

Couldn't it be used to replicate data blocks across a multitude of disks? It could also be used to maintain a distributed map of the blocks, so there's no single point of failure.

Ceph already does this, look up CRUSH Map. GlusterFS also does data placement. There are a lot of competing software-defined storage solutions on the market; it's where the industry is moving, and basically has to, because of the flaw of tying everything down to physical hardware.


To point out the obvious… this shouldn't be used as a single disk to dump all your stuff on and call it a day. That would be asking for trouble. But in a proper redundant setup, or as a rotated-out backup disk of a set, this could be a great option. One would never use groups of these for performance, but rather for capacity, which generally cares less about IOPS unless it is something DB-driven… and that'd be one hell of a DB.


Add me to the list of people who will pick these up at some future point to add to my NAS (UNRAID based, for the curious; not entirely sure where that falls on the data vs hardware thing @leadeater is talking about). NASes have a "Big Drives go BRRRRRRR!" attitude.

16 minutes ago, leadeater said:

Ceph already does this, look up CRUSH Map.

Oh, that's pretty cool.

 

Looked into the map thing and it seems like it is explicitly mapped into hardware (not necessarily into individual drives though).

 

I guess that's for efficiency in data locality + network traffic, but it kind of defeats the purpose of what I'd imagined.

 

Using torrents you could have an almost random distribution of sparsely used data copies, plus local copies of frequently used data, instead of specifying where to replicate them.

 

It also would depend on how many copies you do to ensure there won't be any data loss. I'm not into it, so I have no idea how many copies are required and how that scales with the amount of data being kept.

14 minutes ago, gabrielcarvfer said:

Looked into the map thing and it seems like it is explicitly mapped into hardware (not necessarily into individual drives though).

Correct, individual disks are known as OSDs and these get grouped into Placement Groups (PGs). The CRUSH rules and pool configuration allow you to define how data is distributed across the cluster; ours is done at the host level, so any data copies or chunks of parity are spread across servers.

 

Data objects are placed into PGs and the PG chooses the OSD with the least data on it (typically).

 

The cluster builds a map of the hardware, and then you can create logical partitions of it, multiple ones in fact, since you can have different pools with different settings using different CRUSH rules; data objects are then stored based on these rules and configurations. If hardware fails, the cluster starts moving data around to ensure data resiliency is retained; there is no waiting for hardware to be replaced. This does mean you cannot actually fill the entire cluster, as you require empty space for those data management operations.
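As a toy illustration of that host-level failure domain idea (my own sketch, not Ceph's actual CRUSH algorithm; the cluster layout, PG count and hashing below are all made up):

```python
import hashlib

# Place each object on three OSDs, no two of which share a host, so losing an
# entire server never takes out more than one copy.
CLUSTER = {                       # hypothetical host -> OSD layout
    "host-a": ["osd.0", "osd.1"],
    "host-b": ["osd.2", "osd.3"],
    "host-c": ["osd.4", "osd.5"],
    "host-d": ["osd.6", "osd.7"],
}
NUM_PGS = 64                      # number of placement groups (assumed)
REPLICAS = 3                      # copies of each object

def stable_hash(text):
    return int(hashlib.md5(text.encode()).hexdigest(), 16)

def place(obj_name):
    """Map an object to REPLICAS OSDs, each on a different host."""
    pg = stable_hash(obj_name) % NUM_PGS                  # object -> PG
    # Rank hosts deterministically per PG and take one OSD from each of the
    # first REPLICAS hosts, so every copy lands in a different failure domain.
    hosts = sorted(CLUSTER, key=lambda h: stable_hash(f"{pg}:{h}"))
    return [CLUSTER[h][pg % len(CLUSTER[h])] for h in hosts[:REPLICAS]]

print(place("backups/photos-2020.tar"))   # three OSDs, all on different hosts
```

The real thing adds weighting, rebalancing and much smarter pseudo-random placement, but the principle is the same: the rules know about the hardware topology, not about specific disk pairs.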

18 hours ago, Yebi said:

I'm definitely with Linus on this one (referring to that old WAN show clip about 20TB drives). The imbalance between capacity and speed is getting too big for HDDs this large to be a good choice. You don't want to spend two days rebuilding a failed drive

 

I look at this as a positive, as it will greatly lower the price of 16TB drives.

CPU i7 4960x Ivy Bridge Extreme | 64GB Quad DDR-3 RAM | MBD Asus x79-Deluxe | RTX 2080 ti FE 11GB |
Thermaltake 850w PWS | ASUS ROG 27" IPS 1440p | | Win 7 pro x64 |

5 hours ago, Doobeedoo said:

It's a niche thing, which is the reason SSHDs are gone and no new ones are being made. Kinda no point. I had one of those, and they were actually quite good, before I eventually got an SSD. But yeah, I'd rather go full SSD in time, once it becomes even more affordable to stack TBs of it.

It's not a niche thing when you want a fast, high-capacity drive. 2TB SSDs are still very expensive for the majority of people, whereas SSHDs could deliver 5x the capacity at near-SSD speeds for basically the price of a slightly more expensive HDD. It's still just incomparable. And with a larger SSD cache on board they'd actually perform, because with the garbage 8GB cache they had, it was terrible when they were new. At that time I was running a hybrid setup via a software solution and a 32GB SSD cache, which I upgraded to 256GB some time later. I had the OS, all the daily-used apps and even massive games (several of them!) cached in it. Good luck ever doing that with an 8GB cache without swapping data in and out and hurting performance. I don't believe that SSHDs are pointless. If only they weren't designed by morons and never fixed accordingly later. They released a dumb 8GB cache design, it didn't work all that well, people weren't convinced, and they just scrapped them entirely. Hell, even a 128GB cache would have cost just an additional 50€ at worst and would perform so much better than the absolutely worthless 8GB cache...

AMD Ryzen 7 5800X | ASUS Strix X570-E | G.Skill 32GB 3600MHz CL16 | PALIT RTX 3080 10GB GamingPro | Samsung 850 Pro 2TB | Seagate Barracuda 8TB | Sound Blaster AE-9 MUSES

11 minutes ago, RejZoR said:

It's not a niche thing when you want a fast, high-capacity drive. 2TB SSDs are still very expensive for the majority of people, whereas SSHDs could deliver 5x the capacity at near-SSD speeds for basically the price of a slightly more expensive HDD.

Something being niche is defined by how many people have it, the deployment scale, not by how useful or beneficial it could be to people. SSD caching and SSHDs (these are different things) are certainly niche; very few people actually use them.

