
(edit: CES) Two arms are better than one - Seagate releases dual-actuator HDD for Microsoft

williamcll
4 hours ago, Mira Yurizaki said:

The increase in failure depends on how the system uses it. Look at airplanes, they duplicate a lot of critical features for the sole purpose of having a higher chance of survival.

There's only one actuator motor per LUN, so if one fails then that LUN fails with it. Having two non-redundant components instead of one always carries a higher risk of a failure than having just one; it's only a matter of the impact of that failure, and for this type of disk the impact is worse than with two separate disks.

 

I've dealt with basically exactly this already. NetApp has disk shelves with two disks per tray, so if one disk fails you have to prepare the other disk for ejection, which means moving all its data off first; that can take hours, and only then can you pull the tray out and replace the failed disk. This isn't any different, worse actually: if one LUN fails you have to take both offline and replace the disk, which means replacing both LUNs in the storage system. You're not just going to leave the system degraded with one less LUN than you had before.


On 12/5/2019 at 8:27 AM, williamcll said:


Multi-actuator designs are one of the ways hard disks are trying to catch up to SSD performance. Seagate demonstrated the technology a year back, and now it is finally in production for enterprise use.

Source: https://www.storagereview.com/seagate_exos_2x14_doubles_iops_for_microsoft/
https://www.cxotoday.com/press-release/microsoft-nearly-doubles-iops-using-seagate-exos-with-mach-2-dual-actuator-technology/
https://mp.weixin.qq.com/s/tO0zt_bWs99-IOtvFcVMHA
Thoughts: It will be many more months before this technology is available in the consumer market, and if it isn't as cheap as QLC/PLC SSDs I doubt it will be competitive. It would be nice to hear from the Seagate forum affiliate about this.

 

Hmm, this is interesting, but it's not really any different from having two mechanical hard drives stacked on top of each other in a RAID 0 configuration.

 

Why not put a second set of actuators in the drive, with the platters in the middle, so you could double the read/write throughput? It would then operate like a RAID 1 configuration, even though it lacks the redundancy RAID 1 would have.


On 12/5/2019 at 11:00 AM, porina said:

You wouldn't need the actuators to move independently to "raid 0" it though. You can simply read from multiple heads at the same time. Why not read/write to all heads at the same time?

 

This is more in line with my thinking, but this would only offer a marginal latency improvement.

No, you can't, because the heads are all linked together and the data you need isn't in the same position on all the platters of the hard drive. Multiple actuators allow the heads to seek to different positions on the platters independently.


4 hours ago, Kisai said:

 

Hmm, this is interesting, but it's not really any different from having two mechanical hard drives stacked on top of each other in a RAID 0 configuration.

 

Why not put a second set of actuators in the drive, with the platters in the middle, so you could double the read/write throughput? It would then operate like a RAID 1 configuration, even though it lacks the redundancy RAID 1 would have.

What? You just said it's like raid 0 (which doubles read and write speed) and then said it should be like raid 1, which doesn't. 


54 minutes ago, bobhays said:

What? You just said it's like raid 0 (which doubles read and write speed) and then said it should be like raid 1, which doesn't. 

Two arms on the same side of the drive that move independently treat the top platters and bottom platters as effectively two drives that don't read the same bits; that's RAID 0. It can read two sectors at a time, but only from different halves of the drive. Now, if the drive hardware/firmware were somehow set up to use it as RAID 1, then write performance would not increase, only read performance.

 

Two arms on opposite sides of the drive that read and write to the same platters would double read/write performance, because the drive could deal with two different sectors at the same time.
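The RAID 0 vs RAID 1 distinction above can be sketched with a toy throughput model; the per-actuator rate and the function names are illustrative only, not Seagate's actual figures:

```python
# Toy model of aggregate throughput for two independent actuators, each
# sustaining `single` MB/s on its own. Purely illustrative numbers.

def raid0_throughput(single: float) -> dict:
    # Striping: both reads and writes are split across the two actuators.
    return {"read": 2 * single, "write": 2 * single}

def raid1_throughput(single: float) -> dict:
    # Mirroring: reads can come from either copy, but every write must
    # land on both, so writes stay at a single actuator's rate.
    return {"read": 2 * single, "write": single}
```

At, say, 250 MB/s per actuator, striping doubles both directions while mirroring only doubles reads, which is the asymmetry being argued about here.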

 


9 hours ago, Kisai said:

Hmm, this is interesting, but it's not really any different from having two mechanical hard drives stacked on top of each other in a RAID 0 configuration.

It takes up half the space and about half the power. Both of those matter a lot in datacenters; getting close to the same performance in less volume and with less heat output than two drives is exactly what this is meant for.

F@H
Desktop: i9-13900K, ASUS Z790-E, 64GB DDR5-6000 CL36, RTX3080, 2TB MP600 Pro XT, 2TB SX8200Pro, 2x16TB Ironwolf RAID0, Corsair HX1200, Antec Vortex 360 AIO, Thermaltake Versa H25 TG, Samsung 4K curved 49" TV, 23" secondary, Mountain Everest Max

Mobile SFF rig: i9-9900K, Noctua NH-L9i, Asrock Z390 Phantom ITX-AC, 32GB, GTX1070, 2x1TB SX8200Pro RAID0, 2x5TB 2.5" HDD RAID0, Athena 500W Flex (Noctua fan), Custom 4.7l 3D printed case

 

Asus Zenbook UM325UA, Ryzen 7 5700u, 16GB, 1TB, OLED

 

GPD Win 2


7 hours ago, bobhays said:

No, you can't, because the heads are all linked together and the data you need isn't in the same position on all the platters of the hard drive. Multiple actuators allow the heads to seek to different positions on the platters independently.

 

Which raises the question: instead of reading/writing data through one platter and head at a time, why don't they split it amongst all the heads and write to multiple platters at the same time?


7 hours ago, bobhays said:

No, you can't, because the heads are all linked together and the data you need isn't in the same position on all the platters of the hard drive. Multiple actuators allow the heads to seek to different positions on the platters independently.

The "raid 0" like comment was essentially striping across platters. The presumption is the disk logic would already be coded for this. This is trivial to implement.

 

If we have independent heads for each group of platters, this could only offer a minor improvement to random access, up to 2x in the absolute best case if the OS or application level is able to balance the load between them. Maybe there is some niche out there with data that fits this scenario. It won't help large transfers in one or both groups, and in the worst case it might increase latency if an access operation requires data from both parts.

 

My proposal of two groups of heads accessing all platters would improve both throughput and latency, at the cost of more hardware. That would be more interesting.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


6 minutes ago, CarlBar said:

Which raises the question: instead of reading/writing data through one platter and head at a time, why don't they split it amongst all the heads and write to multiple platters at the same time?

It would only work for a very short time, then not at all really. Filesystem allocation pretty much makes this impossible without doing what SSDs do with background GC and data placement. If you never deleted or modified a file it would work; otherwise sector locality on each platter will no longer be aligned. Filesystems aren't yet built to consider this, and disk controllers have no upper-layer understanding of the filesystem, so they can't easily optimize data placement when they don't know which sectors actually relate to each other at the filesystem level. This is how hybrid disks should have worked: x GB of NAND flash plus metadata supplied by the filesystem to assist optimal data placement on the platters, so you could read and write from every head with everything perfectly aligned across platters and along sectors.


2 minutes ago, leadeater said:

It would only work for a very short time, then not at all really. Filesystem allocation pretty much makes this impossible without doing what SSDs do with background GC and data placement. If you never deleted or modified a file it would work; otherwise sector locality on each platter will no longer be aligned. Filesystems aren't yet built to consider this, and disk controllers have no upper-layer understanding of the filesystem, so they can't easily optimize data placement when they don't know which sectors actually relate to each other at the filesystem level. This is how hybrid disks should have worked: x GB of NAND flash plus metadata supplied by the filesystem to assist optimal data placement on the platters, so you could read and write from every head with everything perfectly aligned across platters and along sectors.

 

Huh, so basically the OS has close to direct control of the heads and it doesn't know how to do this. I assumed HDDs worked through some kind of exchange like this:

 

"OS: get ready to write some data HDD.

 

HDD: Ok ready.

 

OS: Ok here's the data write it and associate that with File Allocation Table Address X.

 

HDD: Runs off and writes it to the next free section of disk."

 

Guess not.

 

 


Windows knows what blocks are free and says "write this to block Y", but it has no idea what the drive will physically do to achieve that. But that's no issue; it's no different from RAID.

 

"RAIDing" the platters/platter sides would be possible but probably not as easily as that. If you go read some literature you'll see that platters are typically not aligned with one another, even if only due to manufacturing tolerances.



28 minutes ago, porina said:

Maybe there is some niche out there with data that fits this scenario. It won't help large transfers in one or both groups, and in the worst case it might increase latency if an access operation requires data from both parts.

A lot of object-based filesystems can handle this, as can Windows Storage Spaces. These storage subsystems treat data as chunks and use placement rules to place those chunks on storage devices, so it's very easy to take a 10GB file, split it into 256KB chunks, and place them on disks. This is why I suspect Microsoft wanted the disk to appear as two LUNs: so they could be treated as two storage devices and have chunks placed on them. This would also work for Ceph, Swift, etc., while not so well for hardware/software RAID or ZFS.

 

A single LUN best fits systems where there is direct interaction with and reliance on each disk in the system, RAID-like setups. Multiple LUNs work well when the disks are actually independent storage containers. It's amusing how RAID has "Independent" in it but really isn't, down at this level.

 

A 4-platter, 4-actuator, 4-LUN disk would be awesome for Ceph, where you can set the resiliency domain per host or per rack so striping, mirror, and parity chunks are always spread across servers/racks and never land on the co-dependent platters within the same disk. That's actually four times the IOPS and throughput.
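The chunk-placement idea described above can be sketched in a few lines; the round-robin policy and the 256 KiB chunk size are illustrative assumptions (real placement rules in Storage Spaces or Ceph's CRUSH are far more involved):

```python
# Cut a file into fixed-size chunks and spread them round-robin across
# independent LUNs. CHUNK and the LUN count are illustrative assumptions.

CHUNK = 256 * 1024  # 256 KiB

def place_chunks(file_size: int, luns: int) -> dict:
    """Map chunk index -> LUN index for a file of `file_size` bytes."""
    n_chunks = (file_size + CHUNK - 1) // CHUNK  # ceiling division
    return {i: i % luns for i in range(n_chunks)}

placement = place_chunks(10 * 1024**3, luns=2)   # a 10 GiB file, 2 LUNs
per_lun = [sum(1 for l in placement.values() if l == lun) for lun in (0, 1)]
# Both LUNs end up with an equal share of the 40960 chunks.
```

The point is that once each actuator shows up as its own LUN, this kind of chunk spreading needs no knowledge of the drive's internal geometry at all.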


8 minutes ago, CarlBar said:

Huh, so basically the OS has close to direct control of the heads and it doesn't know how to do this. I assumed HDDs worked through some kind of exchange like this:

 

"OS: get ready to write some data HDD.

 

HDD: Ok ready.

 

OS: Ok here's the data write it and associate that with File Allocation Table Address X.

 

HDD: Runs off and writes it to the next free section of disk."

 

Guess not.

It does, but the problem starts as soon as you delete data. The filesystem just picks unallocated sectors without considering platters, so if platter 1 has 20 free sectors on track 5 and platter 2 has 10 free sectors on track 1, and the OS writes a file requiring 30 sectors, the data placement is no longer horizontally aligned. Without independent actuators you have to move to track 5, read 20 sectors, then move to track 1 and read 10 sectors.
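That 20-sector/10-sector example can be put into a toy cost model; the extent list and the "cost = number of serialized seeks" accounting are illustrative only:

```python
# Toy cost model for the misaligned-allocation example above: the file
# lives in two extents on different tracks. With one shared actuator the
# seeks serialize; with independent actuator groups they can overlap.

def serialized_seeks(extents):
    """One shared actuator: one seek per extent, back to back."""
    return len(extents)

def overlapped_seeks(extents, groups=2):
    """Independent actuator groups: seeks proceed concurrently, so the
    critical path is ceil(n_extents / groups) seeks deep."""
    return -(-len(extents) // groups)  # ceiling division

extents = [(5, 20), (1, 10)]  # (track, sectors): 20 on track 5, 10 on track 1
```

With one actuator the two seeks happen back to back; with two independent actuators they overlap, which is exactly the win the dual-actuator design is after.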


1 minute ago, leadeater said:

It's amusing how RAID has "Independent" in it but really isn't, down at this level.

I thought the "I" in RAID was for Inexpensive, although looking it up it looks like Independent is an alternative.

 

Thanks for the info. This kind of stuff is beyond me... as an enthusiast I just want to brute-force it in hardware. Tinkering around in software is just a necessary evil in order to operate the hardware. I wish there was a modern-day version of the battery-backed DRAM storage cards... RAM pricing is low enough to make it viable now.



  

7 minutes ago, leadeater said:

It does, but the problem starts as soon as you delete data. The filesystem just picks unallocated sectors without considering platters, so if platter 1 has 20 free sectors on track 5 and platter 2 has 10 free sectors on track 1, and the OS writes a file requiring 30 sectors, the data placement is no longer horizontally aligned. Without independent actuators you have to move to track 5, read 20 sectors, then move to track 1 and read 10 sectors.

 

But that's irrelevant in the considered scenario of RAIDing the platters. The OS sees one sector, it writes to that sector, and the drive physically writes 1/nth of the data to each of the n platter sides under the heads at the same time. Just like RAID 0 with multiple drives.
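That striping idea can be sketched as cutting a logical sector into equal slices, one per platter side; the slice geometry is an illustrative assumption (a real drive would also need per-slice ECC, which this ignores):

```python
# Split one logical sector into n equal slices, one per platter side,
# and reassemble on read. Geometry here is an illustrative assumption.

def stripe_sector(data: bytes, sides: int) -> list:
    assert len(data) % sides == 0, "sector must divide evenly across sides"
    n = len(data) // sides
    return [data[i * n:(i + 1) * n] for i in range(sides)]

def reassemble(slices) -> bytes:
    return b"".join(slices)

sector = bytes(512)                      # one 512-byte logical sector
slices = stripe_sector(sector, sides=4)  # four 128-byte slices
```

Because the split and reassembly happen entirely inside the drive, the OS would still see ordinary sectors, which is the "just like RAID 0" claim above.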



1 minute ago, porina said:

I thought the "I" in RAID was for Inexpensive, although looking it up it looks like Independent is an alternative.

It was originally Inexpensive, then changed to Independent. The 10k and 15k RPM disks used later in PC history weren't at all inexpensive, so the name didn't fit anymore.


7 minutes ago, leadeater said:

It does, but the problem starts as soon as you delete data. The filesystem just picks unallocated sectors without considering platters, so if platter 1 has 20 free sectors on track 5 and platter 2 has 10 free sectors on track 1, and the OS writes a file requiring 30 sectors, the data placement is no longer horizontally aligned. Without independent actuators you have to move to track 5, read 20 sectors, then move to track 1 and read 10 sectors.

 

What I meant was I assumed that which piece of the physical drive gets written to is up to the drive: it only decides which part of the disk to write to, and assigns an allocation table address, when it actually writes the data.

 

In that scenario, whenever a write, delete, or read request comes in, the drive will be writing/deleting/reading equally from all platters, so no misalignment can take place. The only real catch is you'd probably need a power-of-two number of platters (so 2, 4, 8, 16, etc.).


27 minutes ago, Kilrah said:

But that's irrelevant in the considered scenario of RAIDing the platters. The OS sees one sector, it writes to that sector, and the drive physically writes 1/nth of the data to each of the n platter sides under the heads at the same time. Just like RAID 0 with multiple drives.

I was just talking about disks as they are now, not RAIDing platters. You can do that right now with this dual-actuator disk: attach it to a SAS RAID card and it's two disks; just create a RAID 0 array and away you go. I know that's not what is being meant, but to RAID horizontally down the platters as suggested, you need to actually change the way filesystems allocate data to fit better with how HDDs operate. The concept being talked about (not really what I was at the time) already exists and has for a long time.

 


https://sabercomlogica.com/en/ebook/hdd-physical-organization-chs/

 

CHS is a way of addressing sectors that lets you write to all platters at once; the problem comes when you don't fill an entire cylinder, and again when data is deleted. You'll have situations where you're reading and writing to a varying number of platters, so performance would fluctuate. I mean, I wouldn't care, but at the same time I would: I would not be amused at having to diagnose database performance issues that ultimately traced back to times when only half the platters had active I/O. It's much easier to add disks/spindles to get the performance you need than to deal with the internal data-placement specifics of platters and heads. It's already a problem with RAID as it is, with files smaller than the stripe size, but you do need to be mindful about making it a worse or more common problem.
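For reference, the classic LBA-to-CHS arithmetic behind that addressing scheme looks like this; the geometry constants are illustrative logical values, not this drive's real geometry:

```python
# Classic LBA <-> CHS conversion. HEADS and SECTORS are illustrative
# logical-geometry constants, not any particular drive's real geometry.

HEADS, SECTORS = 16, 63  # sectors are numbered from 1 in CHS

def lba_to_chs(lba: int) -> tuple:
    cylinder = lba // (HEADS * SECTORS)
    head = (lba // SECTORS) % HEADS
    sector = lba % SECTORS + 1
    return cylinder, head, sector

def chs_to_lba(c: int, h: int, s: int) -> int:
    return (c * HEADS + h) * SECTORS + (s - 1)
```

Consecutive LBAs fill a whole cylinder (every head) before the arm moves to the next track, which is exactly why partially filled cylinders break the all-platters-at-once property discussed above.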


Don't see 2 actuators there... just a single-actuator 3-platter drive.



Back in the '70s(?), mainframes used fixed-head DASD that were faster than the single-actuator drives. I always wondered why that architecture never became prevalent. I would think rotational speed would be the biggest performance penalty, and we still have that anyway. Anyone have any ideas why this didn't come to pass?

 

 

The computer isn't the "Thing".....the computer is the "Thing" that gets you to the "Thing".  - excerpt from "Halt and Catch Fire".

 


2 hours ago, middleclasspoor said:

Back in the '70s(?), mainframes used fixed-head DASD that were faster than the single-actuator drives. I always wondered why that architecture never became prevalent. I would think rotational speed would be the biggest performance penalty, and we still have that anyway. Anyone have any ideas why this didn't come to pass?

DASD itself just means HDDs, along with CDs and other storage devices that aren't sequential-access like tapes. Do you happen to know a more specific name for what you're thinking of?


11 minutes ago, leadeater said:

DASD itself just means HDDs, along with CDs and other storage devices that aren't sequential-access like tapes. Do you happen to know a more specific name for what you're thinking of?

*waits to see if drum memory will come back*
IIRC the problem with what he is talking about is size limitations. Fixed heads worked because the tracks were wider and you could have a head for each track. Movable heads were a wild innovation because you could have one head that was large but read a tiny, tiny track. That's how one gets those terabyte platters.

 

One could split the difference and have multiple heads that move together. The problem, IIRC, was weight: it turned out a single head was faster because it was lighter.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.

