
Seagate has created an HDD with transfer speeds to rival SATA-based solid state drives

17 minutes ago, Kilrah said:

I don't think that would work, mechanical damage sends debris flying off that would likely destroy the other set of heads within seconds/minutes.

Oh, not even with something like an internal Mini- that ... when it happens, cleans up the dust, then continues with the process of not allowing the user to access the drive until they have another one to copy the data to? (And it would still not allow you to access the damaged head / platter ... unless the head is fine and only part of the platter is damaged, in which case it might allow you to access the undamaged parts, but not the damaged parts and a buffer zone around them.)
Another thing I thought of: if one (or more) of the heads is badly mangled enough, it would bring them off the platter into the parking area, and basically "snap off" the broken head and drop it into some kind of internal mini-wastebin, or something like that, then try to do the recovery / copy with the remaining good ones, at least as much as possible.
And if enough heads failed that you couldn't recover all the data (but not all of them had failed), it would act like UnRAID, not normal RAID ... in that it wouldn't take down the ENTIRE drive; you could still recover the data from the platters / heads that had not failed.


30 minutes ago, PianoPlayer88Key said:

that ... when it happens, cleans up the dust

Even microscopic dust is enough to cause damage...

 

30 minutes ago, PianoPlayer88Key said:

if one (or more) of the heads is badly mangled enough, it would bring them off the platter into the parking area, and basically "snap off" the broken head and drop it into some kind of internal mini-wastebin

Talk about over-engineering...

 

Even if it worked, your single "built-in RAID" drive would cost 5 times what 2 usual drives do and still not offer the same level of data safety, so nobody would care about it 🙂


33 minutes ago, Kilrah said:

Even if it worked, your single "built-in RAID" drive would cost 5 times what 2 usual drives do and still not offer the same level of data safety, so nobody would care about it 🙂

I was briefly thinking something along the lines of having RAID 1 in the physical space of just one drive ... but then I had another storage / space-related thought ...
So, AFAIK, you can get 1TB microSD cards, with 2TB apparently on the way ... and the actual flash chip is smaller than the µSD card. (Also, I wouldn't be surprised if the flash chips in some storage devices are even denser...)

 

With density like that... I wonder how much capacity you could fit in the physical size of...

  • 2.5" SSD (or HDD, primarily thinking the currently-common 7mm height, although maybe up to 15mm enterprise HDDs)
  • 3.5" HDD
  • 5.25" Half or Full-Height HDD
  • 8" FDD
  • 14" HDD from the 1970s or whenever... (I'll stop there lol)

That doesn't take into account the cost of the flash, of course; I'm just thinking about capacity.
And if you wanted to get even crazier with storage density ... I hear that DNA storage can supposedly do 215 petabytes per gram. (I wonder how that would convert to cubic mm ... and how much you could compress it: could you exceed the ~19+ g/cm^3 of, say, gold or tungsten, or the 22.56 g/cm^3 of osmium, etc.? Okay, as wild an imagination as I may have, maybe the density of a black hole at the center would be a bit much lol)
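For fun, here's a rough back-of-the-envelope version of that in Python. The form-factor dimensions and the 1TB-per-microSD density are my own approximations, and the DNA line assumes dry DNA at roughly 1.7 g/cm^3, so treat every output as order-of-magnitude only:

```python
# Back-of-the-envelope capacity math for the form factors listed above.
# All dimensions and densities are rough assumptions, not vendor specs.

MICROSD_VOLUME_MM3 = 15 * 11 * 1.0   # microSD card is roughly 15 x 11 x 1 mm
MICROSD_CAPACITY_TB = 1.0            # assume a 1TB card

FORM_FACTORS_MM = {                  # approximate external dimensions (W x D x H)
    '2.5" 7mm drive':    (69.85, 100.0, 7.0),
    '3.5" drive':        (101.6, 147.0, 26.1),
    '5.25" full-height': (146.0, 203.0, 82.0),
}

tb_per_mm3 = MICROSD_CAPACITY_TB / MICROSD_VOLUME_MM3

for name, (w, d, h) in FORM_FACTORS_MM.items():
    volume_mm3 = w * d * h
    # Naive scaling: pretends the whole enclosure can be packed at microSD
    # density, ignoring controllers, cooling, connectors and packaging.
    print(f"{name}: ~{volume_mm3 * tb_per_mm3:,.0f} TB")

# DNA storage: the 215 PB/gram figure, assuming dry DNA at ~1.7 g/cm^3
print(f"DNA: ~{215 * 1.7:,.0f} PB per cm^3")
```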


7 hours ago, PianoPlayer88Key said:

I haven't generally looked at the specific DWPD number

DWPD really is the only useful and proper endurance rating; it's the industry standard on the enterprise side. Consumer drives use TBW only because it's much easier to hide behind and it prevents proper comparisons. You can typically calculate DWPD by taking the TBW figure and dividing it by the drive capacity and the warranty length in days.
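A minimal sketch of that conversion (the 600 TBW / 1TB / 5-year consumer drive in the first example is hypothetical; the S3700 numbers match the example further down this post):

```python
def dwpd_from_tbw(tbw: float, capacity_tb: float, warranty_years: float) -> float:
    """Drive Writes Per Day implied by a TBW rating over the warranty period."""
    return tbw / (capacity_tb * warranty_years * 365)

def lifetime_writes_tb(dwpd: float, capacity_tb: float, warranty_years: float) -> float:
    """Total writes (TB) a DWPD rating allows over the warranty period."""
    return dwpd * capacity_tb * warranty_years * 365

# Hypothetical 1TB consumer SSD rated 600 TBW with a 5-year warranty:
print(f"{dwpd_from_tbw(600, 1, 5):.2f} DWPD")             # ~0.33 DWPD

# 400GB S3700 at 10 DWPD over 5 years (the example further down):
print(f"{lifetime_writes_tb(10, 0.4, 5) / 1000:.1f} PB")  # ~7.3 PB
```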

 

The reason DWPD is a superior endurance rating is that it does not change with drive capacity: 0.5 DWPD means the same thing on a 512GB SSD as it does on a 4TB SSD. The total amount of data you can write is different, but the DWPD figure immediately tells you how durable the actual NAND implementation is, so you don't waste your time looking at SSDs that are fundamentally low endurance.

 

The other thing to watch out for is SSDs with a 3-year warranty rather than 5 years. Two SSDs with the same TBW, one warrantied for 3 years and one for 5, can look equivalent on the face of it (the 3-year drive even maths down to a higher DWPD), but they are not equivalent; buy the 5-year one.

 

Also remember that used capacity directly affects SSD wear and wear-leveling ability. DWPD and TBW generally take into account the SSD being at very high capacity utilization. Samsung had either 512GB 840 Pro or 850 Pro SSDs in their endurance test labs with 8PB written to them without any reallocated or bad sectors; this is feasible in the real world if you don't leave your SSD 90% full its entire life.

 

Old Write Intensive enterprise SSDs, just as an FYI, have a DWPD of ~10, so a 400GB S3700 would be able to write its entire capacity 10 times every day for 5 years without exceeding its expected drive endurance; that's 7.3PB. A modern ~6TB-~8TB Write Intensive SSD with a DWPD of 10, well, good luck actually wearing that out. There's a good reason the industry has moved away from SLC and eMLC in a lot of situations: modern TLC has been able to match the endurance of older MLC, and nobody really needs large-capacity SSDs with 30, 40 or 50 DWPD endurance. Most would rather have what was already more than acceptable endurance at a cheaper price.


13 hours ago, leadeater said:

Also remember that used capacity directly affects SSD wear and wear-leveling ability. DWPD and TBW generally take into account the SSD being at very high capacity utilization. Samsung had either 512GB 840 Pro or 850 Pro SSDs in their endurance test labs with 8PB written to them without any reallocated or bad sectors; this is feasible in the real world if you don't leave your SSD 90% full its entire life.

This. On the machines where I plot Chia on the main system drive, I made sure to free up as much space as possible first so it's not always reusing the same small area and has room to spread the writes out.
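A toy illustration of why that helps (this is not how any real controller works; the block count, the cold-data assumption and the perfectly even leveling are all simplifications): the fewer free blocks there are, the more the same erase cycles pile up on them.

```python
import math

def worst_block_erases(total_blocks: int, fullness: float, writes: int) -> int:
    """Toy model: blocks holding cold data are never rewritten, and the
    controller spreads every new write perfectly evenly over the blocks
    that remain free. Returns the erase count of the most-worn block."""
    free_blocks = round(total_blocks * (1 - fullness))
    return math.ceil(writes / free_blocks)

WRITES = 1_000_000
for fullness in (0.5, 0.9):
    worst = worst_block_erases(total_blocks=10_000, fullness=fullness, writes=WRITES)
    print(f"drive {int(fullness * 100)}% full: most-worn block erased ~{worst} times")
```

Same total writes, but at 90% full the free blocks wear roughly five times faster in this model, which is the same intuition behind freeing up space before plotting.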


So to use this I'd need an Epyc or, umm, an Intel motherboard?

Any suggestions on which kind of CPU/mobo supports this and is OK for gaming?

 

Or is this just too expensive?

I honestly never liked SATA ... lol (and I actually prefer spinning hard drives; they seem more robust to me, ironically)

 

TL;DR: What's the cheapest modern SAS-capable motherboard + CPU that's also suitable for gaming?

 

 


On 5/22/2021 at 4:37 AM, RejZoR said:

The whole Chia "it's not proof of work" principle is a big fat dumb lie. They say it's not proof of work, yet it ALL depends on how much work you put into plotting that shit. And you'll be plotting space non-stop in massive quantities. And since doing it on HDDs is impossibly slow, miners have this dumb idea of plotting on faster SSDs, which wears them down in weeks if it's cheap TLC or months if it's higher-end MLC. All this mining is a goddamn cancer to the entire computer industry and ecology, and the sooner it dies, the better.

Chia is even worse than GPU mining IMO, since NAND degrades and wears out significantly, compared to something like a GPU, which might not be in the best condition after mining but should still be usable. The space plotting literally generates e-waste on a weekly basis.


47 minutes ago, Mark Kaine said:

So to use this I'd need an Epyc or, umm, an Intel motherboard?

No, you just add a SAS controller PCIe card.


34 minutes ago, Kilrah said:

No, you just add a SAS controller PCIe card.

Oh, cool, I guess the speed shouldn't be an issue for PCIe?

 

I might do this when I build my second PC, as that will have a bigger case; I couldn't really use any card in my current PC without choking the GPU, I think.

 


2 hours ago, Mark Kaine said:

Oh, cool, I guess the speed shouldn't be an issue for PCIe?

 

I might do this when I build my second PC, as that will have a bigger case; I couldn't really use any card in my current PC without choking the GPU, I think.

 

Just remember the important caveat: if you were to do this, Windows would just see two 7TB HDDs, and each one of those would be no faster. You'd have to RAID the two logical 7TB HDDs together to get any kind of speed-up.


20 hours ago, leadeater said:

Just remember the important caveat: if you were to do this, Windows would just see two 7TB HDDs, and each one of those would be no faster. You'd have to RAID the two logical 7TB HDDs together to get any kind of speed-up.

Such as with Storage Spaces?
I wonder if this is a possible way to get a drive formatted with ReFS on Windows 10 Home/Pro without creating virtual disks?


4 hours ago, comander said:

The cost delta up front isn't that big and power consumption becomes a concern. 

Not really; power is not that much of a problem at all. It's the rack space that everyone cares about, as co-location is becoming an ever bigger thing now and you're either paying per U or per rack.

 

4 hours ago, comander said:

HDDs in the sense of serving many multiples of users are struggling. The number of use cases is dwindling as TCO of SSDs is improving faster than HDD. 

Well, that very heavily depends. All our network shares, home drives and critical NFS mounts are on SATA/NL-SAS with no performance problems at all, and we can sustain many tens of thousands of IOPS on them too. HDD arrays really are not that limited in performance; you need to be doing something much more demanding than just serving files to thousands of users to really need SSD. SSD's biggest benefit is small-block I/O latency and IOPS, which is where HDDs are worst, so databases and the like.

 

At some point HDDs won't make sense, but dual actuator is more about fixing the capacity-to-performance discrepancy than anything else. SSD storage is still many times the price of HDD. Unlike for the hyperscalers and service hosters, our capacity growth hasn't actually accelerated that much; until recently it was actually less than the capacity increases in HDDs. There are places that have to buy more, smaller HDDs rather than fewer large HDDs to keep up performance, so they are unable to benefit from the better price per capacity of larger HDDs.

 

4 hours ago, comander said:

For archival/backup storage HDDs still win. The issue is that streaming photos or videos is getting to be a less viable use case. At least at the scale of Youtube's most popular videos. The bulk of "basically never viewed content" is likely on HDD - that's fine. At some point though, you'll go from top 1% on NAND to top 10% on NAND to top 30% on NAND to... 

Anyone operating a video streaming or CDN network that has a proper plan for this would be storing all the data in an object store that is primarily HDD, with metadata on SSD, and then using separate caching nodes to serve frequent content and to pre-seed new content you expect to be in high demand at release time.

 

YouTube, Google, Netflix, Amazon Prime etc. all utilize cache servers. Primary storage of the actual content is very unlikely to be on SSD, as doing so makes no sense cost-wise.

 

The actual number of entities in the world at the scales you mentioned is extremely, extremely small though.


15 minutes ago, comander said:

Always be wary of marketing fluff.

SSDs are, and will for a long time be, well away from HDDs in TCO, except for customers that have to pay out big for rack space. For anyone that already has their own rack space this is essentially a non-factor, so there is no TCO gain moving from HDD to SSD.

 

To compete with HDDs, SSDs essentially need to get cheaper on the order of 6 times.

 

That story about 50TB and 100TB SAS archival SSDs looked more appealing than most; they were "slow" but huge and actually looked price-competitive with HDDs. However, I've not seen them as an option in anything just yet, so I haven't been able to real-world price a solution using them.

 

15 minutes ago, comander said:

2000 people sharing a network drive to read word files (read: probably 20-200 people doing sustained requests at any given moment) is a different workload than 1000x as many people going crazy for resources

We have between 35k and 40k users, with active concurrent users in the high thousands. Off these same HDDs we have NFS mounts supporting Linux application infrastructure like Moodle. We also have people video editing on this. You name it, it's happening on these disks. We have Adobe Connect and Media Site live lecture recordings and playback going to it.

 

Peak real-demand IOPS against these HDD aggregates was 112K, peak throughput 2.7GB/s, and latency stays below 3ms. HDDs simply are not as slow as the flash/NAND vendors try to make out.

 

Sample of one of the four controller heads that front one of the HDD aggregates (6 in total).



Just now, comander said:

I'm still going to speculate that, from an IO demands perspective, the demands per drive (especially at peak moments) in such an environment are orders of magnitude lower (10x-10,000x) than what hyperscalers...

They are, but the world is far bigger than just the hyperscalers 😉


18 minutes ago, comander said:

Who do you think the dual actuator drives are for?

Ah, everyone... including me.

 

The Seagate Exos product range is one of the OEM options for all the enterprise storage vendors; we also deploy our own object storage on commodity servers that could use these drives just as much as any other drive.

 

Why would NetApp or EMC not want to use these disks?

 

18 minutes ago, comander said:

buying 2x as many smaller drives is legitimately a thing. 

I know; I literally said this as well. Dual actuator exists for the sole purpose of solving the capacity-to-performance ratio issue.

 

Edit:

The problem is saying that SSDs are becoming cost-competitive with HDDs when they are absolutely not even close yet. Trying to prove it with a use case that does actually make sense but only applies to about half the shipping volume of both HDDs and SSDs doesn't make any logical sense.

 

Sure, we use SSD-only storage for our VM platform and databases, but it's highly, highly unaffordable to put our bulk data storage on SSD, no matter how high the performance demand is, because a half-decent storage array with a good number of disks gives so much performance that it's legitimately hard to exceed its capabilities even on a large-scale enterprise network.

 

And the hyperscalers still buy more capacity in HDD than they do in SSD.

 

Like I said about 2 or 3 posts ago, store the data on HDD and cache it on cache node servers, exactly like YouTube does. YouTube's CDN nodes are all SSD, but they only hold cache copies and can live and die. You could legitimately service this same workload with HDDs, but it just doesn't make any sense to, as the capacity-to-performance requirement actually does favor SSD by a long, long way.

 

Just remember, don't live and die on a single use-case or use-case example.

