
Moar storage - Seagate 24TB hard drives with HAMR technology set to arrive in 2021.

AndreiArgeanu

Summary

 

Seagate has revealed that it will be using HAMR technology in its upcoming hard drives, allowing capacities of about 24TB for now.

 

Quotes

Quote

Seagate has just announced that it will be using Heat Assisted Magnetic Recording (HAMR) technology in its upcoming hard drives. This new tech greatly reduces grain size with better signal noise but will be priced higher than its Perpendicular Magnetic Recording (PMR) counterparts due to the required laser heating diode.

With this new HAMR technology, the hard drive makers intend to put out a 24TB storage drive for next year. It is also interesting to point out that while Seagate is betting on HAMR tech, its main competitor, Western Digital is banking on Microwave Assisted Magnetic Recording (MAMR) as its heads do not require the use of laser heating elements yet can still push storage capacity past the 20TB range.

Seagate CEO Dave Mosley commented on the company’s choice for using HAMR tech: “We know MAMR really well. It’s a viable technology, but it’s, again, a small turn of the crank. What we believe is that HAMR, largely because of the media technology, the ability to store in much, much smaller grain sizes with better signal noise, with much more permanence, is the right path.”

HAMR

My thoughts

Well, hard drive technology is still advancing, surprisingly. I believe it was last December that the WD 20TB and 18TB Ultrastar HDDs were announced and shipped to OEMs. Frankly, I'm more than curious to see how these MAMR and HAMR hard drives will perform, especially compared to your average hard drive.

 

Sources

TechRadar, Hypebeast


Did they say how fast it would go though?

Specs: Motherboard: Asus X470-PLUS TUF Gaming (yes, I know it's poor, but I wasn't informed) | RAM: Corsair Vengeance LPX DDR4 3200MHz CL16-18-18-36 2x8GB | CPU: Ryzen 9 5900X | Case: Antec P8 | PSU: Corsair RM850x | Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM | Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 | GPU: EVGA RTX 2080 Ti Black Edition


I'm definitely with Linus on this one (referring to that old WAN Show clip about 20TB drives). The imbalance between capacity and speed is getting too big for HDDs this large to be a good choice. You don't want to spend two days rebuilding a failed drive.


1 hour ago, williamcll said:

Did they say how fast it would go though?

 

Quote

The Exos 20+ series will be compatible as drop-in drives for 3.5-inch bays. In terms of power consumption, you can count on the HDDs to require less than 12W. They will feature a 7200RPM spindle speed with a read/write speed higher than 261 MB/s. However, little is known currently about the Exos 20+'s performance in term of random read and write performance.

https://au.pcmag.com/hard-drives/68905/seagate-confirms-worlds-largest-hard-disk-drive-on-track
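For a rough sense of scale (my own back-of-the-envelope maths, assuming the ~261 MB/s sequential figure held across the whole drive, which it won't on the inner tracks): 24 TB ÷ 261 MB/s ≈ 92,000 seconds, or roughly 25-26 hours for one full sequential pass. So filling one of these, or rebuilding onto one, is at best a day-plus job.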


I mean, cool for very high capacity, and these especially would need multi-actuators and larger caches. You have to increase speed and IOPS somehow with much larger capacity.

Though on the consumer side, a much smaller capacity would probably be the last one of use for as much mass data storage as a consumer could need, before SSDs can finally be found much closer to HDD prices, with multi-TB models being way cheaper.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Corsair K63 Cherry MX red | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


1 hour ago, Doobeedoo said:

I mean, cool for very high capacity, and these especially would need multi-actuators and larger caches. You have to increase speed and IOPS somehow with much larger capacity.

Though on the consumer side, a much smaller capacity would probably be the last one of use for as much mass data storage as a consumer could need, before SSDs can finally be found much closer to HDD prices, with multi-TB models being way cheaper.

Well, imagine pairing this single 24TB HDD with a 2TB or 4TB SSD which acts as a cache for it. 24TB total capacity, with 2 or 4TB of commonly accessed stuff cached and accessible at SSD speeds. I've seen it at a smaller scale using a 2TB HDD and a 256GB M.2 cache and it was spectacular. But people shockingly don't even consider it after everyone has seen shitty SSHD drives with a useless 8GB SSD cache on board. With an 8GB cache you can basically cache just the OS and a few apps. With 256GB or 512GB you can cache the entire OS, all the apps you use daily and also the games you're currently playing. I was using it for a few years and I was getting a 75-80% cache hit ratio. That means 75-80% of all reads were served from the SSD. And you can sense that in speed as well as noise. On the first system boot or game launch you could hear the grinding of the HDD. After a while, everything quiets down and becomes fully quiet on the next launch. With some caching systems it happens on the fly while you're using the apps or games; you don't even have to restart them to really bring out the performance benefits. It just sucks they never moved SSHDs to something with larger caches. I still believe SSHDs are an excellent option, but for some dumb reason everyone making them cocked them up from day one and never fixed their dumb design. I just can't fathom how WD or Seagate couldn't figure it out. Or were unwilling.
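If you're wondering why a relatively small cache gets such a high hit ratio, here's a toy Python sketch of an LRU read cache sitting in front of a big, slow drive. The cache size, block granularity and access pattern are all made-up assumptions for illustration, not any vendor's actual caching logic:

# Toy simulation: LRU read cache in front of a large HDD.
# All numbers are illustrative assumptions, not measurements.
from collections import OrderedDict
import random

CACHE_BLOCKS = 256                    # pretend each block is 1 GiB -> a ~256 GiB cache
HOT_SET = list(range(250))            # "OS + daily apps + current games" working set
COLD_SET = list(range(250, 24_000))   # the rest of a ~24 TB drive

cache = OrderedDict()
hits = misses = 0

for _ in range(100_000):
    # assume 90% of reads hit the hot working set, 10% go to cold data
    block = random.choice(HOT_SET if random.random() < 0.9 else COLD_SET)
    if block in cache:
        hits += 1
        cache.move_to_end(block)       # refresh LRU position
    else:
        misses += 1
        cache[block] = True
        if len(cache) > CACHE_BLOCKS:  # evict the least-recently-used block
            cache.popitem(last=False)

print(f"hit ratio: {hits / (hits + misses):.1%}")

With those made-up numbers it lands somewhere in the 70-80% neighbourhood, the same ballpark as the hit ratio quoted above; the point is just that most reads concentrate on a small working set, so a cache far smaller than the drive still absorbs the bulk of them.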


Cool but how long will it take to fill that thing with data?


But why though? Is it significantly cheaper or more reliable than a 24TB SSD?

AMD blackout rig

 

cpu: ryzen 5 3600 @4.4ghz @1.35v

gpu: rx5700xt 2200mhz

ram: vengeance lpx c15 3200mhz

mobo: gigabyte b550 aorus pro

psu: cooler master mwe 650w

case: masterbox mbx520

fans: Noctua Industrial 3000rpm x6

 

 


4 hours ago, RejZoR said:

Well, imagine pairing this single 24TB HDD with a 2TB or 4TB SSD which acts as a cache for it. 24TB total capacity, with 2 or 4TB of commonly accessed stuff cached and accessible at SSD speeds. I've seen it at a smaller scale using a 2TB HDD and a 256GB M.2 cache and it was spectacular. But people shockingly don't even consider it after everyone has seen shitty SSHD drives with a useless 8GB SSD cache on board. With an 8GB cache you can basically cache just the OS and a few apps. With 256GB or 512GB you can cache the entire OS, all the apps you use daily and also the games you're currently playing. I was using it for a few years and I was getting a 75-80% cache hit ratio. That means 75-80% of all reads were served from the SSD. And you can sense that in speed as well as noise. On the first system boot or game launch you could hear the grinding of the HDD. After a while, everything quiets down and becomes fully quiet on the next launch. With some caching systems it happens on the fly while you're using the apps or games; you don't even have to restart them to really bring out the performance benefits. It just sucks they never moved SSHDs to something with larger caches. I still believe SSHDs are an excellent option, but for some dumb reason everyone making them cocked them up from day one and never fixed their dumb design. I just can't fathom how WD or Seagate couldn't figure it out. Or were unwilling.

I think part of the reason people don't choose a hard drive plus SSD solution is that SSD prices are so cheap now that you really aren't saving much money until you get to large-capacity drives. I mean, you can get a 1TB SSD for super cheap, and it's so much easier to just spend a little more on the SSD and never have to worry about setting up a cache solution.


Imagine losing 24 terabytes of data from a single disk failure...

 

No but seriously though, I could see myself using these drives in my NAS if they weren't going to be horribly expensive. 

 

Sure they might take ages to fill, and if one breaks, rebuilding the RAID is going to be slow as balls, but I design and make purchasing decisions based on how something will perform in the everyday use that's occurring 99% of the time.

I fill my NAS very slowly, maybe a gig or so of data on average every day, so write speeds aren't an issue.

I use dual parity as well, so even when one drive dies I'm not in a super hurry to rebuild my array. Rebuilding the array will be slow, but it will very rarely happen (hopefully), and when it does it isn't a big deal.


I'd love to get my hands on a few of these. I imagine the drives will be eye-wateringly expensive for a while, but I just expanded my current 4x6TB array with another 5x10TB array. That should hopefully last me long enough for prices to drop.

 

Though in five or so years I'd hope we come up with a better connector than SATA for data drives, or a better solution than PCIe eating up so many lanes on consumer CPUs for data bandwidth.

Primary:

Intel i5 4670K (3.8 GHz) | ASRock Extreme 4 Z87 | 16GB Crucial Ballistix Tactical LP 2x8GB | Gigabyte GTX980ti | Mushkin Enhanced Chronos 240GB | Corsair RM 850W | Nanoxia Deep Silence 1| Ducky Shine 3 | Corsair m95 | 2x Monoprice 1440p IPS Displays | Altec Lansing VS2321 | Sennheiser HD558 | Antlion ModMic

HTPC:

Intel NUC i5 D54250WYK | 4GB Kingston 1600MHz DDR3L | 256GB Crucial M4 mSATA SSD | Logitech K400

NAS:

Thecus n4800 | WD White Label 8tb x4 in raid 5

Phones:

OnePlus 6T (Mint), Nexus 5X 8.1.0 (wifi only), Nexus 4 (wifi only)


5 hours ago, RejZoR said:

Well, imagine pairing this single 24TB HDD with a 2TB or 4TB SSD which acts as a cache for it. 24TB total capacity, 2 or 4TB of commonly accessed stuff cached and accessible with SSD speeds. I've seen it at smaller scale using 2TB HDD and 256MB M.2 cache and it was spectacular. But people shockingly don't even consider it after everyone seen shitty SSHD drives with useless 8GB SSD cache on board. With 8GB cache you can basically cache just OS and few apps. With 256GB or 512GB you can cache entire OS, all the apps you use daily and also games that you play currently. I was using it for few years and I was getting 75-80% cache hit ratio. That's 75-80% of all reads were done from SSD. And you can sense that in speed as well as noise. First system boot or game boot and you could hear the grinding of HDD. After a while, everything quiets down and also becomes fully quiet on next launch. With some caching systems, it happens on the fly while using the apps or games, you don't even have to do restart of them to really bring out performance benefits. It just sucks they never moved SSHD's to something with larger caches. I still believe SSHD's are excellent option, but for some dumb reason, everyone making them cocked them up from day one and never fixed their dumb design. I just can't fathom how WD or Seagate couldn't figure it out. Or were unwilling.

It's a niche thing, which is the reason SSHDs are gone and no new ones are being made. Kind of no point. I had one of those, and they were actually quite good, before I eventually got an SSD. But yeah, I'd rather go full SSD in time, when it becomes even more affordable to stack TBs of it.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Corsair K63 Cherry MX red | Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


4 hours ago, LAwLz said:

 

I fill my NAS very slowly, maybe a gig or so of data on average every day, so write speeds aren't an issue.

I use dual parity as well, so even when one drive dies I'm not in a super hurry to rebuild my array. Rebuilding the array will be slow, but it will very rarely happen (hopefully), and when it does it isn't a big deal.

You use dual parity (RAID 6) to protect against a cascade failure and loss of the volume.

 

The more spindles you have in the array, the greater the chance of a second drive loss while you're in the middle of rebuilding a previously failed drive's content from parity. Even more so if all of the drives are old and of the same age: the proverbial straw that breaks the camel's back, with the extra stress from rebuilding.

 

So yeah, if you're running a RAID5 array, you're more vulnerable the longer it takes for the array to rebuild. Under that scenario, time isn't on your side.

 

I'm at the point where I'm recommending RAID10.


1 hour ago, StDragon said:

So yeah, if you're running a RAID5 array, you're more vulnerable the longer it takes for the array to rebuild. Under that scenario, time isn't on your side.

 

I'm at the point where I'm recommending RAID10.

Please no. RAID 10 on larger disk-count arrays has a higher chance of array failure than RAID 6 does. With RAID 10 you can lose everything with just two disks failing, both disks in a single mirror, and that is the most likely thing to happen during a rebuild, since the only other disk that is read and stressed during a rebuild is the partner disk of the one that has failed. When we are talking about very large disks with extended rebuild times, you are asking for problems.

 

Use RAID 10 only when you need the performance, and that is really only for random write IOPS anyway. Not only does RAID 6 get safer than RAID 10 as the number of disks increases; with a good LSI RAID card the performance in both read and write will surpass that of RAID 10. Neither takes that many disks in total before the RAID card's cache itself becomes the bottleneck anyway, but we're still in the high single to multiple GB/s territory.

 

And the math to support the above:

(Spoiler: attached screenshots of the probability calculations comparing RAID 6 and RAID 10 array failure rates.)

RAID 6 is many orders of magnitude safer than RAID 10 is.
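Since the attached screenshots don't reproduce here, here's a rough Python sketch of the kind of comparison being made. All the failure-rate and rebuild-time numbers below are made-up assumptions, and drives are modelled as failing independently, which is optimistic, so treat it as shape rather than gospel:

# Back-of-the-envelope: chance of losing the array during a rebuild,
# RAID 6 vs RAID 10. Illustrative assumptions only, not measurements.
from math import comb

N = 12                 # disks in the array (assumed)
AFR = 0.02             # assumed annual failure rate per drive (2%)
REBUILD_DAYS = 2.0     # assumed rebuild time for one failed 24TB drive

# probability that any one surviving drive fails within the rebuild window
p = AFR * (REBUILD_DAYS / 365.0)

# RAID 6: after the first failure, data is lost only if two or more of the
# remaining N-1 drives also fail before the rebuild completes.
raid6_loss = sum(comb(N - 1, k) * p**k * (1 - p)**(N - 1 - k) for k in range(2, N))

# RAID 10: after the first failure, data is lost if that drive's single
# mirror partner fails during the rebuild.
raid10_loss = p

print(f"per-rebuild loss probability, RAID 6 : {raid6_loss:.2e}")
print(f"per-rebuild loss probability, RAID 10: {raid10_loss:.2e}")

With those assumed numbers RAID 6 comes out a couple of orders of magnitude less likely to lose the array per rebuild, which matches the shape of the claim above; bigger, slower-rebuilding drives stretch the rebuild window and make the gap matter even more.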

 

The better solution is to use software that works on the data layer, not the disk layer, so it doesn't do any management tied to disks or pairs of disks and only treats the disks as storage buckets, i.e. Ceph. If you want to use data copies rather than parity, use something where you can control and define the placement of those copies over multiple disks, so every disk participates evenly in both normal operation (reads and writes) and rebuild operations. Traditional RAID and even ZFS have this shortcoming of being disk-centric rather than data-centric for hardware reliability; you don't want anything like that when dealing with very large disks and large quantities of data. ZFS underneath, say, GlusterFS is or was common.


6 hours ago, gabrielcarvfer said:

Are they seriously going to continue this madness? God, there must be a way to do magnetic recording/reading without requiring moving parts close to the disk, and preferably one that could be done in parallel.

There's a relatively new technology in the works that is designed to do magnetic recording in a cell-based approach with no moving parts. It's called FRAM, or ferroelectric RAM, a destructive-read, non-volatile random access memory. It's positioned as a contender to flash primarily due to faster write speeds (~1000x), lower power usage and greater read/write endurance, at the cost of currently small storage densities and higher price. To my knowledge it's currently limited to embedded projects/products, but if the technology is developed further you might see magnetic SSDs in the future, though I doubt it. Where it will most likely see use is as the program memory of microcontrollers, as TI has found it only requires 2 extra masks in the manufacturing process to embed FRAM into the silicon, compared to something like 9 for flash.

CPU: Intel i7 - 5820k @ 4.5GHz, Cooler: Corsair H80i, Motherboard: MSI X99S Gaming 7, RAM: Corsair Vengeance LPX 32GB DDR4 2666MHz CL16,

GPU: ASUS GTX 980 Strix, Case: Corsair 900D, PSU: Corsair AX860i 860W, Keyboard: Logitech G19, Mouse: Corsair M95, Storage: Intel 730 Series 480GB SSD, WD 1.5TB Black

Display: BenQ XL2730Z 2560x1440 144Hz


1 minute ago, gabrielcarvfer said:

Couldn't it be used to replicate data blocks on a multitude of disks? Also could be used to maintain a distributed map of the blocks so no single point of failure.

Ceph already does this, look up CRUSH maps. GlusterFS also does data placement. There are a lot of competing software-defined storage solutions on the market; it's where the industry is moving, and basically has to, because of the flaw of tying everything down to physical hardware.


To point out the obvious: this shouldn't be used as a single disk to dump all your stuff on and call it a day. That would be asking for trouble. But in a proper redundant setup, or as a rotated-out backup disk in a set, this could be a great option. One would never use groups of these for performance, but rather for capacity, which generally cares less about IOPS unless it is something DB-driven… and that'd be one hell of a DB.


Add me to the list; these will get added to my NAS at some future point (UNRAID-based, for the curious; not entirely sure where that falls on the data vs hardware thing @leadeater is talking about). NASes have a "Big Drives go BRRRRRRR!" attitude.


14 minutes ago, gabrielcarvfer said:

Looked into the map thing and it seems like it is explicitly mapped into hardware (not necessarily into individual drives though).

Correct. Individual disks are known as OSDs, and these get grouped into Placement Groups (PGs). The CRUSH rules and pool configuration allow you to define how data is distributed across the cluster; ours is done at the host level, so any data copies or chunks of parity are spread across servers.

 

Data objects are placed into PGs, and the PG chooses the OSD with the least data on it (typically).

 

The cluster builds a map of the hardware and then you can create logical partitions of it, multiple in fact, as you can have different pools with different settings using different CRUSH rules; data objects are then stored based on these rules and configurations. If hardware fails, the cluster starts moving data around to ensure data resiliency is retained; there is no waiting for hardware to be replaced. This does mean you cannot actually fill the entire cluster, as you need empty space for those data management operations.
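To make the "map plus rules" idea a bit more concrete, here's a toy Python sketch of deterministic, host-aware placement in the spirit of CRUSH. It uses rendezvous hashing, which is not Ceph's actual algorithm or API, and all the host/OSD names are hypothetical:

# Toy CRUSH-flavoured placement: pick 3 OSDs per object, at most one per host,
# so replicas always land on different servers. Not Ceph's real implementation.
import hashlib

CLUSTER = {                       # hypothetical host -> OSDs layout
    "host-a": ["osd.0", "osd.1", "osd.2"],
    "host-b": ["osd.3", "osd.4", "osd.5"],
    "host-c": ["osd.6", "osd.7", "osd.8"],
    "host-d": ["osd.9", "osd.10", "osd.11"],
}

def score(obj, item):
    # stable pseudo-random weight for (object, item): same input -> same output,
    # so any client can compute placement without asking a central server
    return int.from_bytes(hashlib.sha256(f"{obj}/{item}".encode()).digest()[:8], "big")

def place(obj, replicas=3):
    # rank hosts for this object, then pick the best-scoring OSD on each chosen host
    hosts = sorted(CLUSTER, key=lambda h: score(obj, h), reverse=True)[:replicas]
    return [max(CLUSTER[h], key=lambda o: score(obj, o)) for h in hosts]

print(place("object-abc"))   # a stable 3-OSD set, one OSD per host
print(place("object-def"))   # a different object maps to a different (but stable) set

If a host disappears, re-running place() without it changes the locations of only the objects that used that host, and the cluster can backfill those onto the survivors, which is roughly the "starts moving data around" behaviour described above.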


18 hours ago, Yebi said:

I'm definitely with Linus on this one (referring to that old WAN Show clip about 20TB drives). The imbalance between capacity and speed is getting too big for HDDs this large to be a good choice. You don't want to spend two days rebuilding a failed drive.

 

I look at this as a positive; it will greatly lower the price of 16TB drives.

CPU i7 4960x Ivy Bridge Extreme | 64GB Quad DDR-3 RAM | MBD Asus x79-Deluxe | RTX 2080 ti FE 11GB |
Thermaltake 850w PWS | ASUS ROG 27" IPS 1440p | | Win 7 pro x64 |


5 hours ago, Doobeedoo said:

It's a niche thing, which is the reason SSHDs are gone and no new ones are being made. Kind of no point. I had one of those, and they were actually quite good, before I eventually got an SSD. But yeah, I'd rather go full SSD in time, when it becomes even more affordable to stack TBs of it.

It's not a niche thing when you want a fast, high-capacity drive. 2TB SSDs are still very expensive for the majority of people, whereas SSHDs could deliver 5x the capacity at near-SSD speeds for basically the price of a slightly more expensive HDD. It's still just incomparable. And with a larger SSD cache on board they'd actually perform, because with the garbage 8GB cache they had, it was terrible even when they were new. At that time I was running a hybrid setup via a software solution and a 32GB SSD cache, which I upgraded to 256GB some time later. I had the OS, all my daily-used apps and even massive games (several of them!) cached in it. Good luck ever doing that with an 8GB cache without constantly swapping data in and out and hurting performance. I don't believe SSHDs are pointless. If only they weren't designed by morons and never fixed accordingly later. They released the dumb 8GB cache design, it didn't work all that well, people weren't convinced, and they just scrapped the whole thing. Hell, even a 128GB cache would have cost an additional 50€ at worst and would have performed so much better than the absolutely worthless 8GB cache...


11 minutes ago, RejZoR said:

It's not a niche thing when you want a fast, high-capacity drive. 2TB SSDs are still very expensive for the majority of people, whereas SSHDs could deliver 5x the capacity at near-SSD speeds for basically the price of a slightly more expensive HDD.

Something being niche is defined by how many people have it, the deployment scale, not how useful or beneficial it could be to people. SSD caching and SSHDs (these are different things) are certainly niche; very few people actually do it.


1 minute ago, leadeater said:

Something being niche is defined by how many people have it, the deployment scale, not how useful or beneficial it could be to people. SSD caching and SSHDs (these are different things) are certainly niche; very few people actually do it.

It's a niche thing because it was designed by a moron, delivered by an idiot and never improved by anyone. Of course no one bought that hot garbage and it never became popular. Between the absolutely useless SSD boot drives everyone was running and people shuffling games and apps back and forth like idiots, SSHDs would do all of this automatically while providing massive capacity at a minimal premium. Instead people kept using the dumb SSD boot-drive method, and some still do today. It's just beyond baffling.

 

SSHDs could easily have been the next big thing, yet the vendors themselves fucked it up so hard it just never took off. And they would still be relevant TODAY if they had done it right and people had gotten used to them. Instead, because of the dumb execution of basically everything around SSHDs, they flopped so hard no one even remembers them anymore. You can't just say "it's a niche thing" when it never even had a chance to take off. Not because the idea itself is bad, but because the execution was just straight-up moronic. I mean, an 8GB cache, for god's sake. Back when SSHDs with that crappy cache were somewhat relevant, I had a 2TB HDD paired with a 32GB SSD cache, with software doing the caching. I was mostly playing Natural Selection 2 back then, which is a 10GB+ game. Garbage SSHDs with their pathetic 8GB SSD cache couldn't even cache that one game, yet my 32GB cache could easily hold SEVERAL games as well as all the apps I used and the OS. Boot to desktop was as fast as it is now that I'm on a pure 2TB SSD. Apps launched just as fast as now. Game launch times were the same as now. It was so good I'm seriously thinking of doing it again, as I bought an 8TB HDD for data hoarding some time ago and it's hardly being used, while the 2TB SSD is sort of running out of space as I'm holding it back a bit. The 8TB HDD paired with the 2TB SSD would give me an insane 8TB of space with a 2TB SSD cache for everything I'd ever need. Good luck buying an 8TB SSD without selling a kidney... Niche thing my bottom.


22 minutes ago, RejZoR said:

It's a niche thing because it was designed by a moron, delivered by an idiot and never improved by anyone. Of course no one bought that hot garbage and it never became popular. Between the absolutely useless SSD boot drives everyone was running and people shuffling games and apps back and forth like idiots, SSHDs would do all of this automatically while providing massive capacity at a minimal premium. Instead people kept using the dumb SSD boot-drive method, and some still do today. It's just beyond baffling.

 

SSHDs could easily have been the next big thing, yet the vendors themselves fucked it up so hard it just never took off. And they would still be relevant TODAY if they had done it right and people had gotten used to them. Instead, because of the dumb execution of basically everything around SSHDs, they flopped so hard no one even remembers them anymore. You can't just say "it's a niche thing" when it never even had a chance to take off. Not because the idea itself is bad, but because the execution was just straight-up moronic. I mean, an 8GB cache, for god's sake. Back when SSHDs with that crappy cache were somewhat relevant, I had a 2TB HDD paired with a 32GB SSD cache, with software doing the caching. I was mostly playing Natural Selection 2 back then, which is a 10GB+ game. Garbage SSHDs with their pathetic 8GB SSD cache couldn't even cache that one game, yet my 32GB cache could easily hold SEVERAL games as well as all the apps I used and the OS. Boot to desktop was as fast as it is now that I'm on a pure 2TB SSD. Apps launched just as fast as now. Game launch times were the same as now. It was so good I'm seriously thinking of doing it again, as I bought an 8TB HDD for data hoarding some time ago and it's hardly being used, while the 2TB SSD is sort of running out of space as I'm holding it back a bit. The 8TB HDD paired with the 2TB SSD would give me an insane 8TB of space with a 2TB SSD cache for everything I'd ever need. Good luck buying an 8TB SSD without selling a kidney... Niche thing my bottom.

No matter what you say, it's still niche; that's exactly what the word means. Even though software to do it has existed for ages, very few people use it. Also, FYI, it's much easier to do this in software at the OS layer than at the SSHD controller layer, which is why it was dropped and never developed further. SSDs are getting cheaper at a rate faster than SSHDs will ever become truly useful to the masses, and very few people actually need more than 2TB, so there is no reason to develop a solution for the 0.01% when they can use already existing options like PrimoCache.


Eh, forget it. You're still not getting it. It would not be a niche thing if the moronic SSD boot-drive method weren't so popular for some dumb reason and if they had done SSHDs right with a larger cache. Massive storage with near-SSD speeds in real-world use at a fraction of the cost of a full-blown SSD: I can assure you SSHDs would have taken off and become mainstream. Instead they were sort of alright in benchmarks, sort of managed to make the system boot faster, and that was about it, because they just didn't have the SSD cache capacity to be useful for anything else. And with the SSHD being a single unit with no need for software or any fiddling, just plug and play, it would have worked brilliantly. PrimoCache is great and offers better control for me as an advanced user, but it can be fiddly and overwhelming for casual users. And it's casual users who would benefit from SSHDs the most. Just stick it in and you have big capacity with excellent day-to-day speeds.

 

Besides, I don't expect anyone who hasn't used a properly functioning hybrid storage setup to understand any of it. I've seen it with my own eyes, used it for years, and it was amazing. I only went with a full SSD because of noise, as I sleep in the same room as my PC and it's on 24/7. And because I could (mind you, that Samsung 850 Pro 2TB SSD was over 800€ when I bought it; a hybrid setup would have cost a quarter of that and probably performed very close). I'm actually again thinking of pairing my 8TB HDD with the 2TB SSD... just because I have both at hand and that 2TB SSD is getting a bit crowded...

