
Seagate has created an HDD with transfer speeds to rival SATA-based solid state drives

PwnyTheTiger
13 hours ago, comander said:

Why do you need direct PCIe gen 4 for spinning rust? I'd rather dedicate precious PCIe lanes to an HBA that powers 20 drives instead of whatever it is you're thinking you're doing. A single PCIe 4.0 lane does ~30x the bandwidth of this drive.

 

Spinning rust just needs to be adequately performant for a given use case and store a bunch of stuff. "Can this get the frame in the video faster than the user's monitor refreshes? y/n?"

 

In a storage context, PCIe 4.0 is only really useful for large NAND arrays, and even then only questionably so (you end up with 8 channel DDR4 DRAM bottlenecking systems that have something like 64 lanes of PCIe 4.0 hitting the system concurrently). This doesn't even touch on CPU bottlenecking or tradeoffs between whether you're trying to encrypt/compress data or not. 

God, you really don't get sarcasm, do you? Now I need to add the /s to my post.


9 minutes ago, RejZoR said:

Which wears them down in weeks if it's cheap TLC or months if it's higher end MLC

It's not that bad; even the worst SSDs now have 0.2 DWPD endurance for either a 3-year or 5-year warranty. Most are around 0.5 to 0.8 DWPD. So unless you are buying the cheapest, worst trash SSDs, you'll be able to write to them in full every day for a few years without any wear issues.
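For anyone who wants to sanity-check those DWPD figures, here's a minimal sketch of the arithmetic; the capacities and ratings are illustrative examples, not any particular model's spec sheet:

```python
# Rough DWPD -> total-terabytes-written sketch (illustrative numbers only).
def total_writes_tb(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    """Terabytes you could write over the warranty period at the rated DWPD."""
    return capacity_tb * dwpd * 365 * warranty_years

# Hypothetical 1 TB drives at a few common endurance ratings.
for dwpd, years in [(0.2, 3), (0.5, 5), (0.8, 5)]:
    print(f"1 TB @ {dwpd} DWPD over {years} years -> ~{total_writes_tb(1.0, dwpd, years):.0f} TB written")
```

Even the 0.2 DWPD / 3-year case works out to roughly 219 TB of writes on a 1 TB drive.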


3 minutes ago, RejZoR said:

Check Der8auer's video:

 

??? I think maybe you're replying in the wrong thread? This isn't talking about Chia here at all. This is about Seagate's new fast HDDs.

 


5 minutes ago, Chris Pratt said:

??? I think maybe you're replying in the wrong thread? This isn't talking about Chia here at all. This is about Seagate's new fast HDDs.

 

Yes we are, read a bit further back. I did quote the wrong person though...


48 minutes ago, RejZoR said:

They say it's not proof of work, yet it ALL depends on how much work you put into plotting that shit. And you'll be plotting space non-stop in massive quantities.

There is a one-time computational cost (per unit of capacity), but after that point the farming is near enough just the drive power plus a minimal system around it. If you're plotting continuously then either you're doing it wrong or you're continuously adding more capacity.

 

42 minutes ago, Chris Pratt said:

It would depend, I suppose, somewhat on how the drive reacts to partial failure. If one arm breaks, does the other keep working, with the drive now at half speed (or basically normal HDD speed)? Something tells me no.

 

If not, two separate, slower drives are still better from a failover perspective: one fails, the other still works.

My view as an outsider to the enterprise world is that a device either works or it doesn't. You don't have any use for a device that kinda works, sometimes, perhaps. Any fault at all and it gets taken out and replaced. So in that sense you'd have to consider each of the new drives as a single unit. Someone better at statistics than me will have to crunch the numbers, but I think one unit with a slightly higher probability of failure still comes out ahead of two units each with a lower probability.
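To put rough numbers on that intuition, here's a minimal sketch; the failure probabilities are made-up illustrative values, not measured AFR data:

```python
# Illustrative comparison: one dual-actuator unit vs two separate drives.
def p_at_least_one_fails(p_single: float, n: int) -> float:
    """Probability that at least one of n independent drives fails."""
    return 1 - (1 - p_single) ** n

p_single = 0.02  # assumed annual failure probability of a conventional drive
p_dual = 0.03    # assumed (somewhat higher) failure probability of one dual-actuator unit

print(f"Two separate drives, at least one fails: {p_at_least_one_fails(p_single, 2):.4f}")  # ~0.0396
print(f"One dual-actuator unit fails:            {p_dual:.4f}")
```

With these assumptions the single unit gets replaced less often than "one of two drives" fails, though a single failure also takes out twice the capacity, so the real comparison depends on how the array above it handles a lost unit.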

 

39 minutes ago, leadeater said:

It's not that bad; even the worst SSDs now have 0.2 DWPD endurance for either a 3-year or 5-year warranty. Most are around 0.5 to 0.8 DWPD. So unless you are buying the cheapest, worst trash SSDs, you'll be able to write to them in full every day for a few years without any wear issues.

I'm going to have to sit down and work it out for some example SSDs now...


It is neat: as capacity increases at the same spindle speeds, IOPS needs to keep up, so multi-actuator tech is very good. I'm sure we won't see it, but it would be awesome to see them make something like a 20K RPM multi-actuator drive with each platter getting its own actuator, along with a solid-state hybrid. Tech-wise it would be interesting to see.

 

Then again, on the consumer side we're already seeing more SSDs shipped than HDDs, and those who need a bit more storage for cheap can get a multi-TB HDD, which could well be their last one.


9 minutes ago, porina said:

There is a one-time computational cost (per unit of capacity), but after that point the farming is near enough just the drive power plus a minimal system around it.

That was the original idea, but it doesn't work in practice. Like most if not all coins there's a "difficulty": the amount of coin generated per day from farming is fixed, so having more space gives you a bigger slice of the pie. Obviously some people invest tons to get as much plotted as possible, making it harder for anyone to compete with a fixed amount of space.

 

9 minutes ago, porina said:

If you're plotting continuously then either you're doing it wrong or you're continuously adding more capacity.

Since the network continuously grows so much you have no choice but to continue adding capacity or your gain chances quickly fall to nothing. Right now you have to plot 3.5TB per day just to keep up with the big players and newcomers.
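As a rough illustration of why standing still means falling behind (the netspace and growth figures below are made-up assumptions, not current network stats), your expected share of rewards is simply your plotted space divided by total netspace:

```python
# Toy model: farming odds are proportional to your share of total netspace.
# All figures are assumptions for illustration, not real network statistics.
my_space_tb = 100.0      # assumed farm size, held constant
net_space_pb = 5.0       # assumed starting netspace in PB
daily_growth_pb = 0.35   # assumed netspace growth per day in PB

for day in range(0, 31, 10):
    net_tb = (net_space_pb + daily_growth_pb * day) * 1000
    print(f"day {day:2d}: expected share = {my_space_tb / net_tb:.4%}")
```

To merely hold a constant share you would have to add your share of the daily netspace growth in new plots every day; at roughly 101.4 GiB per k=32 plot, 3.5 TB/day is about 35 plots.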


5 minutes ago, Kilrah said:

Since the network continuously grows so much you have no choice but to continue adding capacity or your gain chances quickly fall to nothing. Right now you have to plot 3.5TB per day just to keep up with the big players and newcomers.

I noticed... I think at my best I was estimating one hit a year, and that's now dropped to a hit in x years and that number is growing. Maybe it'll be more interesting for small players once they get pools working. 

 

Anyway, I was looking at it more from a small farmer perspective where there will be a practically finite capacity. Large scale farmers will have different considerations.


And when pools come you'll have to replot everything, so the energy spent making the first plots will have been wasted...


12 minutes ago, Kilrah said:

Since the network continuously grows so much you have no choice but to continue adding capacity or your gain chances quickly fall to nothing. Right now you have to plot 3.5TB per day just to keep up with the big players and newcomers.

So does that mean you have to also add 3.5TB of capacity, hardware-wise, or is this trashing existing plots and writing new ones on the same hardware?


You need to add 35 more plots per day, so you need 3.5TB worth of extra capacity (and the processing power to make them that fast...)


Just now, Kilrah said:

You need to add 35 more plots per day, so you need 3.5TB worth of extra capacity.

Hmm, keep meaning to give it a try. Sounds like I need to do it sooner than later lol


I would think it makes no sense starting now unless you have MASSIVE plotting capacity (i.e. >100 plots/day) and about 100TB of space; otherwise just wait for pools.


5 minutes ago, Kilrah said:

I would think it makes no sense starting now unless you have MASSIVE plotting capacity (i.e. >100 plots/day) and about 100TB of space; otherwise just wait for pools.

Oh it's fine, I have more than 100TB 👍


Yeah the big thing is plotting capacity, you'd really need those 100TB done now 🙂

 

The right moment to start was 1 month ago or earlier 😞


1 hour ago, leadeater said:

It's not that bad; even the worst SSDs now have 0.2 DWPD endurance for either a 3-year or 5-year warranty. Most are around 0.5 to 0.8 DWPD. So unless you are buying the cheapest, worst trash SSDs, you'll be able to write to them in full every day for a few years without any wear issues.

Using the numbers given at https://www.chia.net/2021/02/22/plotting-basics.html I worked out the plotting life of a WD Green as an example of a low-end SSD, and a 980 Pro as an example of a high-end SSD. This assumes SSDs work for exactly their rated endurance and then die; in practice they may last much longer.

 

Normalised per TB of SSD space:

WD Green will do 189.5 plot files or about 20.6 TB of plot files.

980 Pro will do 341.1 plot files or about 37.1 TB of plot files.

 

I don't know what the sustained write speed of a WD Green is, but assuming you can saturate SATA you'll wear out a 480GB model in just under 4 days. If it survives longer than that, it's only because low performance is delaying the inevitable. Repeating the same exercise with a 1TB 980 Pro, which has TLC write speeds of 2000 MB/s, that'll wear out in 3.5 days.
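For anyone wanting to reproduce that style of calculation, here's a minimal sketch; the per-plot write volume, endurance ratings and sustained speeds are assumptions plugged in for illustration, so substitute the chia.net figures and the real spec sheets:

```python
# Sketch of the "how many plots before the temp SSD wears out" calculation.
# The constants are assumptions for illustration, not official spec-sheet values.
TEMP_WRITE_PER_PLOT_TB = 1.8   # assumed temp-drive writes per k=32 plot
PLOT_FILE_TB = 0.1014          # approximate final k=32 plot size (~101.4 GiB)

def wear_summary(name: str, endurance_tbw: float, write_speed_mb_s: float) -> None:
    plots = endurance_tbw / TEMP_WRITE_PER_PLOT_TB          # plots until the rated TBW is reached
    days = endurance_tbw * 1e6 / write_speed_mb_s / 86400   # days of saturated writing to hit TBW
    print(f"{name}: ~{plots:.0f} plots (~{plots * PLOT_FILE_TB:.1f} TB of plot files), "
          f"worn out after ~{days:.1f} days of continuous writes")

wear_summary("hypothetical 1TB TLC NVMe (600 TBW)", 600, 2000)    # ~3.5 days
wear_summary("hypothetical 1TB budget SATA (330 TBW)", 330, 500)  # ~7.6 days
```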

 

Personally, if I were to farm this on a large scale, I'd skip SSDs and just use a LOT of HDDs in parallel. HDDs, in comparison, won't wear out, and you just make the plotting operations more parallel to offset their lower performance.

 

13 minutes ago, leadeater said:

Hmm, keep meaning to give it a try. Sounds like I need to do it sooner than later lol

Good luck. The Windows software is a pain and unstable IMO. Haven't looked at other options. Might pass on this one for now.


2 minutes ago, porina said:

Good luck. The Windows software is a pain and unstable IMO. Haven't looked at other options. Might pass on this one for now.

I'd probably use Linux, I don't know. Ubuntu desktop maybe, just for the GUI lazy factor. The reason I haven't started is that I'm planning on using hardware RAID with write-back cache and optimizing the plotting for the DRAM cache size on the card. Since I have access to these higher-end RAID cards and the increased performance (800MB/s+ writes), I'm thinking huge RAID 5 volumes are my best path.

 

24 HDDs per system, 3TB per HDD, maybe 3-4 systems in total, plus a few other servers with 2.5" 10K SAS 900GB disks.


Use RAID0 (or JBOD) instead of RAID5; redundancy is just a waste of space for that use case.


2 minutes ago, leadeater said:

optimizing the plotting for the DRAM cache size on the card

Actually, that gives me an idea. I don't know what hardware you have, but the temporary files during plotting apparently total 239 GiB. If you have a 256 GB RAM system you could use a ramdrive. You could only plot one at a time, but it would push the limit to the CPU in that case. A 512 GB system should allow two in parallel, and so on.
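A quick back-of-the-envelope on the ramdisk idea, using the 239 GiB temp-space figure above and treating a "256 GB" system as having 256 GiB of RAM (and ignoring OS overhead, so treat it as a ceiling):

```python
# How many plots fit on a RAM-backed temp drive at once, assuming ~239 GiB of
# temporary space per k=32 plot (the figure quoted above). OS overhead is ignored.
TEMP_PER_PLOT_GIB = 239

for ram_gib in (256, 384, 512, 1024):
    print(f"{ram_gib} GiB RAM -> up to {ram_gib // TEMP_PER_PLOT_GIB} plot(s) in parallel on a ramdisk")
```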

 

I love the (mis)use of GB vs GiB, so even though I write using the more commonly used units, I have taken the difference into account for my calculations in this thread.


12 minutes ago, Kilrah said:

Use RAID0 (or JBOD) instead of RAID5; redundancy is just a waste of space for that use case.

Yeah, but I wouldn't want to replot either, and it's just one 3TB HDD anyway.


Every GB counts, and you can replot 3TB in less than a day or so in case of failure... but yes, please do RAID5 and help us with the network size 😄


2 minutes ago, Kilrah said:

Every GB counts, and you can replot 3TB in a day or so in case of failure... but yes, please do RAID5 and help us with the network size 😄

These are all old HDDs that have been running for, I think, 8 years, 5 easily. They were also very heavily utilized. If they were new then sure, RAID0 all the way, but I'm expecting failures, as we were having one fail each week before all of them were replaced / I took them home lol


Just turn on compression. CPUs are a lot faster than disks; they can compress the data and write fewer bytes. While you're at it, just make all writes sequential with a lookup table. Sequential writes are a lot faster than random writes. Reads are always quicker than writes.



20 hours ago, SupaKomputa said:

How did they not think of this before? Love to see HDDs being competitive.

 

$300 for 14TB is not bad at all.

 

That's a little less than what I paid last August for each of two Toshiba MG07ACA14TE drives.  (I'm not seeing any in stock right now though, even close to $500, at least of that model, unless you count 3rd-party sellers on Amazon.  I don't count them, at least for products that are normally supposed to be available through normal channels.)


20 hours ago, wat3rmelon_man2 said:

They made that in ‘00…

Although it would be impractical, it would be cool to see a refresh of tiny HDDs…

Well, there was a little problem with those, though....

I've never seen that happen with an SSD / flash card ... although I did make the mistake a few years ago of killing a 256GB SD card 😞 by trying to pull it out of my laptop at some weird angle, without taking my laptop out of the case first, and I think I bent the card & broke something on it. 😕 


19 hours ago, comander said:

This is essentially internal RAID0. 

 

.....

 

Also useful for cases where you need to rebuild a RAID array relatively quickly - 2x the transfer speed means half the risk of catastrophic failure during rebuilds (assuming that the internal reliability of such a drive is still solid - we don't have historical failure rates on these yet). 

 

5 hours ago, Chris Pratt said:

It would depend, I suppose, somewhat on how the drive reacts to partial failure. If one arm breaks, does the other keep working, with the drive now at half speed (or basically normal HDD speed)? Something tells me no.

 

If not, two separate, slower drives are still better from a failover perspective: one fails, the other still works.

A little while ago, I was thinking of an internal RAID1 use case for hard drives...  (Although my idea wouldn't have necessarily used dual actuators.)

A user of one of those Internal-RAID1 drives would be using it normally, until, one day...

**WARNING!**  Your hard drive (Brand, Model, Serial) has suffered a catastrophic (physical / logical / whatever) failure!
Because your drive runs RAID1 internally, your existing / already-saved data is safe, for now.  However, we cannot in good faith allow you to continue writing to the drive, and must limit your ability to read from it as well.

How it goes from there would depend in part on how the drive is being used.  Two scenarios I've thought of are as a primary boot drive (less likely nowadays), and as a secondary storage drive.  (It could also be used as part of a RAID array, and for situations like that there should be the ability to disable the internal RAID.)

For a primary boot drive...
You have ___ GB of unsaved data.  Please insert a USB storage device with at least that much free space, so we can save then reboot your system....
Then, once the data is saved and it's ready to reboot ... If you have another device you can boot from it'd give you the option to do that, but if not...
(A lightweight Linux / Ubuntu environment comes up.) 
We have rebooted your system into a Linux distribution that's stored in flash / firmware on your hard drive, which is failing.  You can use this to shop online for a new hard drive or SSD, and to get some light work done online while you're waiting for the drive to arrive, however you will not be able to save data to the failing drive.  (You can also go to a local store and buy a drive if you prefer that.)  Once you have a new drive, connect it to the PC, and we will begin the data transfer.  (This may require a reboot, also you can use another PC if necessary.)

 

For a secondary storage drive, it wouldn't halt / force boot your system, but it would still not allow you to access the drive until you had plugged in another drive of equal or greater capacity, in which case you could continue as before and have it copy all the data to the new drive.
Also, in case you had a computer that literally only had room for one internal storage device (like a Dell D830 laptop my dad used for quite a few years), the drive would have the smarts to allow you to plug it, and a new drive, into an entirely different PC that could support multiple drives, and do the data transfer there.

 

Of course if ALL the heads crashed into all the platters simultaneously, this wouldn't recover from that.
Also, there would be an option for enthusiasts / experimenters / adventurous souls to bypass the warnings, and continue using the drive, like for testing, "killing in the name of science", etc... but any possible warranty / data recovery contract (like the one some Seagate IronWolf Pro drives come with, IIRC) would be void, and there would be multiple confirmations, authentications, etc to really confirm you wanted to do that. (It would not be as simple as answering yes to like 5 consecutive prompts.)


18 hours ago, comander said:

Spinning rust is an industry term. It's people joking about how slow hard drives are by modern standards. 

 

I've seen it used at at least 3 different Fortune 500 companies that I've worked at, and a fair bit on places like STH.

Well, there's one way that I think even modern NVMe Gen4 SSDs can't touch some really old hard drives - and that is ... the time it takes to write the entire capacity of the drive.

 

There's a Tom's Hardware article from, IIRC, 2006 (something about 15 years of hard drive history: capacity outran performance - would link it but I'm getting a 503 error on that and some other articles for some reason) ... and from what I remember, it took about 40 or so seconds to write the capacity of one 26MB platter on the 40MB drive.  (So I'm guessing it would have taken a little over a minute or so to write the entire drive.)

 

I'd like to see someone test with some old drives, like 5-20MB MFM drives or up to 80MB IDE drives.  I wonder if, for example, a 5MB MFM drive (with 5 Mbit/sec interface speed) would fill the drive in like 8 or 10 seconds ... (although that doesn't also factor in the actual data transfer rate possibly being slower ... I do think that the ST-506 interface was far more limiting than the SATA interface or even PATA, although back then I was too young to do hardly any technical stuff with them.)


The oldest working drive I have is an 8.4GB IBM Deskstar DTTA-350840.  (I have 2 of those drives - the other one is in a video below another quote in this post.)  IIRC, it took about 11 minutes or so to write the entire capacity of that drive.  Also I think I tested a 1TB Samsung 970 Evo (before I put stuff on it) and IIRC that took upwards of 15 minutes or more, but I don't remember for sure.

I'd guess that even the fastest modern high-capacity (8TB+) NVMe SSDs wouldn't come remotely close to touching the "speed" (time to write the entire drive) of, say, a 20MB PATA drive, or a 5MB MFM drive.

Also ... for a future storage interface (or a future generation of PCIe / NVMe) ... I'd like to see performance be based on --- not raw GB/s transfer speeds, but based on time to write the entire capacity / have it be able to scale with capacity.  That way, for example ... even if when it first comes out, the largest drive you could get is 16TB, and it writes its entire capacity in 8 seconds ... several years later when you can get, say, a 256TB or even a 1PB drive, it would still only take 8 seconds to write the entire drive.  (And for an SSD, it wouldn't matter if it's sequential vs random.)
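Putting some rough numbers on that fill-time comparison (the capacities and sustained write speeds below are ballpark assumptions for illustration, not benchmark results):

```python
# Time to write a drive's full capacity at an assumed sustained sequential write speed.
MB, GB, TB = 1e6, 1e9, 1e12

examples = [
    ("5 MB MFM drive", 5 * MB, 0.5 * MB),              # assumed ~0.5 MB/s effective
    ("40 MB IDE drive", 40 * MB, 0.7 * MB),            # assumed ~0.7 MB/s
    ("8.4 GB late-90s IDE drive", 8.4 * GB, 13 * MB),  # assumed ~13 MB/s
    ("1 TB TLC NVMe SSD", 1 * TB, 1200 * MB),          # assumed ~1.2 GB/s sustained after cache
    ("14 TB 7200 rpm HDD", 14 * TB, 250 * MB),         # assumed ~250 MB/s average
]

for name, capacity, speed in examples:
    seconds = capacity / speed
    print(f"{name}: ~{seconds:,.0f} s (~{seconds / 60:.1f} min) to write the whole drive")
```

The old drives fill in seconds to minutes simply because there is so little to write, which is the point being made above.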


5 hours ago, leadeater said:

It's not that bad; even the worst SSDs now have 0.2 DWPD endurance for either a 3-year or 5-year warranty. Most are around 0.5 to 0.8 DWPD. So unless you are buying the cheapest, worst trash SSDs, you'll be able to write to them in full every day for a few years without any wear issues.

When I've looked at buying SSDs, I haven't generally looked at the specific DWPD number, but ... one thing I generally like to get is at least 1 PB of endurance per TB of capacity, when possible, or more.  I think I remember a few MLC SSDs a few years ago or so that basically worked out to about 1 DWPD over 5 years, which would be more than a PB / TB ratio, but I don't remember for sure.
Also IIRC I've heard of some enterprise SSDs a few years ago (and/or maybe Optane) that had tens of PB of endurance even with well under a TB of capacity. I wonder what the "endurance" of an HDD would be though ... how much could you write to it until it fails like this one has?
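For reference, the "1 PB of endurance per TB of capacity" rule of thumb converts to DWPD like this (the warranty lengths are just assumed examples):

```python
# Convert an endurance target of "PB written per TB of capacity" into DWPD.
def pb_per_tb_to_dwpd(pb_per_tb: float, warranty_years: float) -> float:
    full_drive_writes = pb_per_tb * 1000   # 1 PB per TB of capacity = 1000 full drive writes
    return full_drive_writes / (365 * warranty_years)

for years in (3, 5):
    print(f"1 PB/TB over {years} years ≈ {pb_per_tb_to_dwpd(1, years):.2f} DWPD")
```

So 1 PB/TB over 5 years is about 0.55 DWPD, while 1 DWPD over 5 years works out to roughly 1.8 PB/TB, which is why 1 DWPD is described above as more than a PB-per-TB ratio.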

Also there was a TechReport series of articles on an SSD Endurance Experiment several years ago ... I'd like to see that repeated with modern SSDs (from QLC DRAMless cheap knockoffs, all the way to MLC/SLC(?) enterprise SSDs), and also throw in a few hard drives and see how many TB / PB of writes THOSE can take before they fail like that IBM.

 

Also, speaking of those enterprise SSDs vs consumer SSDs ... what is it that gives them many times higher endurance? Are they using different types of flash, or is there something else going on?


4 hours ago, leadeater said:

Hmm, keep meaning to give it a try. Sounds like I need to do it sooner than later lol

 

3 hours ago, Kilrah said:

Yeah the big thing is plotting capacity, you'd really need those 100TB done now 🙂

 

The right moment to start was 1 month ago or earlier 😞

 

Yeah ... I was briefly thinking maybe of doing a little Chia farming with a few drives I already have sitting around (not 100TB though, and if you count available capacity it's only 2x 14TB + 8TB + a few more TB on a couple other drives), but these 2 posts and a few others tell me that nah, I may as well not bother.
And it reminded me of another thing ... For me, it's usually already too late by the time I find out about something like this, like Chia farming or Bitcoin/Ethereum mining, to have any benefit from it.  (And on top of that, I often don't like jumping onto something right away that's unproven / being an "early adopter of gen1 stuff" / etc.)


3 minutes ago, PianoPlayer88Key said:

A user of one of those Internal-RAID1 drives would be using it normally, until, one day...

**WARNING!**  Your hard drive (Brand, Model, Serial) has suffered a catastrophic (physical / logical / whatever) failure!
Because your drive runs RAID1 internally, your existing / already-saved data is safe, for now

I don't think that would work; mechanical damage sends debris flying that would likely destroy the other set of heads within seconds or minutes.

