
Western Digital Readies 28TB HDD

darwin006

Summary

Western Digital is set to begin sampling its 28TB nearline hard drive (HDD), designed for hyperscale datacenters, in the coming weeks, the company revealed during its earnings call.

 

Quotes

Quote

 The HDD will employ the company's energy-assisted perpendicular magnetic recording (ePMR) and UltraSMR (shingled magnetic recording) technologies to deliver the unprecedented capacity.

 

"This cutting-edge product is built upon the success of our ePMR and UltraSMR technologies with features and reliability trusted by our customers worldwide. We are staging this product for quick qualification and ramp as demand improves."

 

An intriguing aspect of Western Digital's 28TB HDD is that it relies on the company's 2nd-generation ePMR technology, with refined write heads that enable higher areal density and thinner tracks. Since Western Digital's UltraSMR technology boosts the capacity of conventional magnetic recording (CMR) media by about 20%, the company needs a 24TB CMR design to build a 28TB UltraSMR drive.
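The stated capacities can be sanity-checked against that ~20% figure. A minimal sketch (assuming, per the article, that the 28TB UltraSMR drive is built on a 24TB CMR platform):

```python
# Capacity figures from the article (TB); the ~20% UltraSMR gain is the claim to check.
cmr_tb = 24          # underlying CMR platform
ultrasmr_tb = 28     # resulting UltraSMR capacity

implied_gain = ultrasmr_tb / cmr_tb - 1
print(f"implied UltraSMR gain: {implied_gain:.1%}")   # ~16.7%, in the ballpark of "about 20%"
```

For comparison, the shipping 26TB UltraSMR drive against a 22TB CMR base gives 26/22 ≈ 18.2%, so "about 20%" is a reasonable round number.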

 

Western Digital is currently shipping its 26TB UltraSMR hard drives, introduced over a year ago, to select clients among hyperscalers. The qualification process for these UltraSMR drives took considerable time: hyperscalers needed to understand the new technology's behavior and performance, since UltraSMR relies on a host of hardware, firmware, and software innovations.

 

Western Digital's 28TB hard drives will contend with Seagate's 32TB hard drives, which use heat-assisted magnetic recording (HAMR) technology. Seagate's product, currently being evaluated, is expected to ramp in early 2024, promising higher capacity and performance, particularly in write operations.

 

Western Digital's HAMR-based hard drives are at least 1.5 years away.

 

 

My thoughts

SSD manufacturers passed HDD manufacturers in drive capacity a while ago, but large-capacity SSDs still cost around 10x as much per TB.

HDD manufacturers continue to find ways to increase drive capacity and decrease the cost of bulk data storage.

HDDs may not be something you have in your PC anymore, but for bulk storage in a NAS or datacenter they don't seem to be going anywhere anytime soon.

 

Sources

https://www.tomshardware.com/news/western-digital-readies-28tb-hdd


How fast does this go?

Specs: Motherboard: ASUS TUF Gaming X470-Plus (yes, I know it's poor, but I wasn't informed) | RAM: Corsair Vengeance LPX DDR4 3200MHz CL16-18-18-36 2x8GB | CPU: Ryzen 9 5900X | Case: Antec P8 | PSU: Corsair RM850x | Cooler: Antec K240 with two Noctua Industrial PPC 3000 PWM | Drives: Samsung 970 EVO Plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 | GPU: EVGA RTX 2080 Ti Black Edition


1 hour ago, williamcll said:

How fast does this go?

If it's anything like the existing Red/Red Pros, the drives often exhaust their cache and sound like they're power-cycling frequently.


1 hour ago, TrigrH said:

The speed is not the real question:

Does it take longer to write the entire disk? Yes.

 

I'd love to just imagine a large business using these drives and then 10 years down the road having to shred all of the drives for security reasons like holy crap

never overclock your underwear


2 hours ago, williamcll said:

How fast does this go?

3 hours ago, darwin006 said:

UltraSMR

more like seconds per MB am i right fellas?

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


5 hours ago, williamcll said:

How fast does this go?

For this market, that's not a useful metric.

Even if it wrote 1000 times slower than another drive, the reason someone buys these is that they deliver [x] TB per [y] cubic centimeters.


5 hours ago, TrigrH said:

The speed is not the real question:

Does it take longer to write the entire disk? Yes.

 

With a 100 Mbit connection and nonstop downloading, it would take about a month to fill it up.

 

Hardware: CPU: Ryzen 9 5900X | GPU: ASUS Strix LC RX 6800 XT | Motherboard: ASUS Crosshair VIII Formula | Memory: Corsair CMW32GX4M2Z3600C18 | SSD: Samsung 980 PRO 1TB | PSU: Corsair RM850x 850W | CPU cooler: be quiet! Pure Loop 360mm | Case: Thermaltake Core X71 | HDD: 2TB and 6TB HDD | Front I/O: LG Blu-ray drive & 3.5" card reader (through a 5.25" to 3.5" bay) | OS: Windows 10 Pro

 


2 hours ago, darknessblade said:

With a 100 Mbit connection and nonstop downloading, it would take about a month to fill it up.

 

And for those who still use a 56k modem (about 4.6 KB/s effective), it would take 193 years 😅
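The fill-time math in these two posts checks out in a few lines (28TB decimal; link speeds as stated, with ~4.6 KB/s as the assumed effective 56k throughput):

```python
def fill_time_seconds(capacity_bytes: float, rate_bytes_per_s: float) -> float:
    """Seconds of nonstop writing needed to fill a drive at a given rate."""
    return capacity_bytes / rate_bytes_per_s

CAPACITY = 28e12  # 28 TB, decimal

days = fill_time_seconds(CAPACITY, 100e6 / 8) / 86_400          # 100 Mbit/s ≈ 12.5 MB/s
years = fill_time_seconds(CAPACITY, 4_600) / 86_400 / 365.25    # 56k modem, ~4.6 KB/s effective

print(f"100 Mbit/s: {days:.0f} days")    # ≈ 26 days, "about a month"
print(f"56k modem: {years:.0f} years")   # ≈ 193 years
```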

PC #1 : Gigabyte Z170XP-SLI | i7-7700 | Cryorig C7 Cu | 32GB DDR4-2400 | LSI SAS 9211-8i | 240GB NVMe M.2 PCIe PNY CS2030 | SSD&HDDs 59.5TB total | Quantum LTO5 HH SAS drive | GC-Alpine Ridge | Corsair HX750i | Cooler Master Stacker STC-T01 | ASUS TUF Gaming VG27AQ 2560x1440 @ 60 Hz (plugged HDMI port, shared with PC #2) | Win10
PC #2 : Gigabyte MW70-3S0 | 2x E5-2689 v4 | 2x Intel BXSTS200C | 32GB DDR4-2400 ECC Reg | MSI RTX 3080 Ti Suprim X | 2x 1TB SSD SATA Samsung 870 EVO | Corsair AX1600i | Lian Li PC-A77 | ASUS TUF Gaming VG27AQ 2560x1440 @ 144 Hz (plugged DP port, shared with PC #1) | Win10
PC #3 : Mini PC Zotac 4K | Celeron N3150 | 8GB DDR3L 1600 | 250GB M.2 SATA WD Blue | Sound Blaster X-Fi Surround 5.1 Pro USB | Samsung Blu-ray writer USB | Genius SP-HF1800A | TV Panasonic TX-40DX600E UltraHD | Win10
PC #4 : ASUS P2B-F | PIII 500MHz | 512MB SDR 100 | Leadtek WinFast GeForce 256 SDR 32MB | 2x Guillemot Maxi Gamer 3D² 8MB in SLI | Creative Sound Blaster AWE64 ISA | 80GB HDD UATA | Fortron/Source FSP235-60GI | Zalman R1 | DELL E151FP 15" TFT 1024x768 | Win98SE

Laptop : Lenovo ThinkPad T460p | i7-6700HQ | 16GB DDR4 2133 | GeForce 940MX | 240GB SSD PNY CS900 | 14" IPS 1920x1080 | Win11

PC tablet : Fujitsu Point 1600 | PMMX 166MHz | 160MB EDO | 20GB HDD UATA | external floppy drive | 10.4" DSTN 800x600 touchscreen | AGFA SnapScan 1212u blue | Win98SE

Laptop collection #1 : IBM ThinkPad 340CSE | 486SLC2 66MHz | 12MB RAM | 360MB IDE | internal floppy drive | 10.4" DSTN 640x480 256 color | Win3.1 with MS-DOS 6.22

Laptop collection #2 : IBM ThinkPad 380E | PMMX 150MHz | 80MB EDO | NeoMagic MagicGraph128XD | 2.1GB IDE | internal floppy drive | internal CD-ROM drive | Intel PRO/100 Mobile PCMCIA | 12.1" FRSTN 800x600 16-bit color | Win98

Laptop collection #3 : Toshiba T2130CS | 486DX4 75MHz | 32MB EDO | 520MB IDE | internal floppy drive | 10.4" STN 640x480 256 color | Win3.1 with MS-DOS 6.22

And 6 others computers (Intel Compute Stick x5-Z8330, Giada Slim N10 WinXP, 2 Apple classic and 2 PC pocket WinCE)


Resilvering an array with one of these dead is gonna be a nail-biting few days, during which another drive will be very likely to also fail.


So HDDs are becoming the next Tape, cool.

CPU - Ryzen 7 3700X | RAM - 64 GB DDR4 3200MHz | GPU - Nvidia GTX 1660 ti | MOBO -  MSI B550 Gaming Plus


On 8/3/2023 at 2:23 AM, Beskamir said:

Resilvering an array with one of these dead is gonna be a nail biting few days during which another drive will be very likely to also fail.

This is why I've been conservative with my drive capacities within my server. It becomes a slippery slope when you're trying to recreate data on a replacement drive, since transfer rates aren't going up at the same pace as capacity.

 

Basically, the drives are too big for their transfer speed capabilities, which, during an array rebuild, results in a much higher chance of another drive failing. Now, one should have all their data backed up, but that creates another problem.
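A rough lower bound on rebuild time is just capacity divided by sustained write rate. A minimal sketch (the ~270 MB/s sustained figure is an assumption, roughly typical of current nearline CMR drives; a busy array or SMR write behavior will be far slower):

```python
def min_rebuild_hours(capacity_tb: float, sustained_mb_per_s: float) -> float:
    """Best-case rebuild time: write the whole drive at its sustained rate."""
    return capacity_tb * 1e12 / (sustained_mb_per_s * 1e6) / 3_600

# Assumed ~270 MB/s sustained throughput for a 28TB nearline drive.
print(f"{min_rebuild_hours(28, 270):.0f} h")   # ≈ 29 hours, an absolute floor
```

Real rebuilds serving foreground I/O routinely take several times that floor.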

"It pays to keep an open mind, but not so open your brain falls out." - Carl Sagan.

"I can explain it to you, but I can't understand it for you" - Edward I. Koch


On 8/2/2023 at 9:09 AM, treestain said:

I'd love to just imagine a large business using these drives and then 10 years down the road having to shred all of the drives for security reasons like holy crap

They shred them mechanically. They just throw them in the shredder and tiny bits come out the other end. Good luck recovering data from that. No one is going to do a software wipe on them, as they won't be worth anything as empty drives anyway. And if they fail beforehand, they go into the same shredder, since you can't wipe a failed drive in software but the platters still contain data.


On 8/2/2023 at 1:39 AM, TrigrH said:

The speed is not the real question:

For me it is. I'd like a larger HDD here at some point. My Steam library, for example, currently resides on a 22TB WD Gold drive (the limit of what they offer at the moment).


On 8/3/2023 at 6:23 PM, Beskamir said:

Resilvering an array with one of these dead is gonna be a nail biting few days during which another drive will be very likely to also fail.

Basically nobody is going to be using these in a RAID-like setup. They'll go into software-defined storage systems like Ceph or Gluster and be independent disks with either replicated data copies or erasure-coded data chunks.

 

Even ZFS is not the best tool for the job past many hundreds of TB of data.

 

These will be used in systems that span many servers and many hundreds to thousands of disks. Disk-level failure domains are not what is done here; the lowest commonly used is server/host level. For example, I use EC 8+5 across 15 servers, which means 5 servers can go offline and data is still readable and recoverable (each server has 24 HDDs).

 

Anyone actually wanting to use them with ZFS needs to wait for the CMR variant.
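The EC 8+5 layout described above can be summarized numerically; this is a minimal sketch of generic k+m erasure-coding accounting, not anything Ceph-specific:

```python
def ec_summary(k: int, m: int) -> tuple[float, int]:
    """Overhead and fault tolerance of a k-data + m-parity erasure-coding scheme."""
    overhead = (k + m) / k   # raw bytes stored per usable byte
    return overhead, m       # up to m chunk placements (here: servers) may be lost

overhead, tolerable = ec_summary(8, 5)
print(f"{overhead}x raw-to-usable, survives {tolerable} server failures")
# 8+5 -> 1.625x overhead; with one chunk per server, 5 of the 15 servers can go offline
```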


Basically, you're not going to be running any VMs or IOPS-sensitive operations on UltraSMR drives. They're primarily for object storage; specifically archival retention and, in many cases, immutable backups.

And to @leadeater's point, fault tolerance is handled at the node level rather than the disk level, a.k.a. RAIN (Redundant Array of Independent Nodes). But I'm old-school; I feel more comfortable working with and troubleshooting classic RAID volumes (even the ZFS and Btrfs types). That level of scale-out abstraction has many advantages, but for home/personal use I'll stick with RAID.


On 8/3/2023 at 8:23 AM, Beskamir said:

Resilvering an array with one of these dead is gonna be a nail biting few days

It's SMR, so you can safely assume several weeks of resilver time...

(There's a test on YouTube: an HDD a fraction of the size of this thing took more than a week to resilver...)


1 hour ago, Dean0919 said:

Same here. I tend to leave my games installed even if I've finished them, and since I have 400 games on Steam, I'm still using HDDs 😄

With a 1 or 2 TB SSD (could get away with 512GB or smaller, but at current prices why skimp?), something like PrimoCache would pair nicely with a Steam library on the HDD. Games would get cached intelligently depending on how often they're accessed. Old cached data would be flushed out to make room for any new content requested off the HDD once the SSD is full; the caching manages itself algorithmically.
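The self-managing behavior described (hot games stay on the SSD, cold data gets flushed to make room) is essentially least-recently-used eviction. A minimal LRU sketch, not PrimoCache's actual algorithm; the game names are hypothetical placeholders:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: hot entries stay, cold ones are evicted."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, key: str):
        if key not in self.store:
            return None                       # miss: caller falls back to the HDD
        self.store.move_to_end(key)           # mark as recently used
        return self.store[key]

    def put(self, key: str, value: bytes) -> None:
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:   # evict the least-recently-used entry
            self.store.popitem(last=False)

cache = LRUCache(2)
cache.put("game_a", b"data")
cache.put("game_b", b"data")
cache.get("game_a")             # touch A, so B is now the coldest entry
cache.put("game_c", b"data")    # evicts game_b
print(cache.get("game_b"))      # None: flushed to make room for new content
```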


On 8/8/2023 at 9:29 AM, Godlygamer23 said:

This is why I've been conservative with my drive capacities within my server. It becomes a slippery slope when you're trying to recreate data on a replacement drive since the transfer rates aren't going up at the same pace as the capacity. 

 

Basically, the drives are too big for their transfer speed capabilities, which during an array rebuild, results in a much higher chance of another drive failing. Now, one should have all their data backed up, but that creates another problem. 

I'm not particularly concerned about 3 drives failing within a few days of each other. This was being talked about when 4TB drives were coming out ("omg RAID is dead, it'll literally be impossible to rebuild") and here we are...

 

In any case, drives are so cheap I replicate the entire thing on another set of shittier drives.

Workstation:  13700k @ 5.5Ghz || Gigabyte Z790 Ultra || MSI Gaming Trio 4090 Shunt || TeamGroup DDR5-7800 @ 7000 || Corsair AX1500i@240V || whole-house loop.

LANRig/GuestGamingBox: 9900nonK || Gigabyte Z390 Master || ASUS TUF 3090 650W shunt || Corsair SF600 || CPU+GPU watercooled 280 rad pull only || whole-house loop.

Server Router (Untangle): 13600k @ Stock || ASRock Z690 ITX || All 10Gbe || 2x8GB 3200 || PicoPSU 150W 24pin + AX1200i on CPU|| whole-house loop

Server Compute/Storage: 10850K @ 5.1Ghz || Gigabyte Z490 Ultra || EVGA FTW3 3090 1000W || LSI 9280i-24 port || 4TB Samsung 860 Evo, 5x10TB Seagate Enterprise Raid 6, 4x8TB Seagate Archive Backup ||  whole-house loop.

Laptop: HP Elitebook 840 G8 (Intel 1185G7) + 3080Ti Thunderbolt Dock, Razer Blade Stealth 13" 2017 (Intel 8550U)


9 hours ago, AnonymousGuy said:

I'm not particularly concerned about 3 drives failing within a few days of each other.  This was being talked about when 4TB drives were coming out "omg raid is dead it'll literally be impossible to rebuild" and here we are...

 

In any case, drives are so cheap I replicate the entire thing on another set of shittier drives.

4TB is quite a bit different from 28TB. Quite literally, 28TB is 7x bigger than 4TB, so even if you filled only half of the drive's capacity, that's still potentially double the rebuild time, maybe more. Things get worse if you happen to store a bunch of small files, which is where archive files/folders can come into play.

 

Drives might be cheap, but data isn't. So if you back up your data with shit drives, and your main drives fail during an array rebuild, and you then try to get that data back from your 'shittier' drives, those could fail too, depending on how shitty they really are and how often you use them. I personally back up my data to a 12TB WD Red Plus, and then encrypt and upload my really important files to Dropbox.



On 8/12/2023 at 8:43 AM, Godlygamer23 said:

 Things get worse if you happen to store a bunch of small files, which is where archive files/folders can come into play. 

The rebuild has no awareness of the filesystem or its contents. It's all bits.



Rebuilding a RAID array with these in it would probably take a few weeks eh.

 

If only high-capacity SSDs were readily available for much cheaper than they are right now... Just recently there was that announcement about a 256TB SSD. Too bad it will likely cost as much as a high-end car.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


1 hour ago, AnonymousGuy said:

The rebuild has no awareness about the filesystem or contents.  It's all bits.

Not really; for good old RAID, yes, but for ZFS it does actually matter:

 

Quote

ZFS uses variable-sized blocks. Therefore, for each recordsize worth of data, which can be anywhere from 4KB to 1MB, ZFS needs to consult the block pointer tree to see how data is laid out on disks. Because block pointer trees are often fragmented and files are often fragmented, there is quite a lot of head movement involved. Rotational hard drives perform much slower with a lot of head movement, so megabyte per second speed of the rebuild is slower than that of a traditional RAID. Now, ZFS only rebuilds the part of the array which is in use and it does not rebuild free space. Therefore, on lightly used pools it may actually complete faster than a traditional RAID. However, this advantage disappears as the pool fills up.

https://blocksandfiles.com/2022/06/20/resilvering/


On 8/13/2023 at 4:55 PM, TetraSky said:

Rebuilding a RAID array with these in it would probably take a few weeks eh.

3 days.  That's 3x how long my 10TB drives take to rebuild.

On 8/13/2023 at 5:50 PM, leadeater said:

Not really, for good old RAID yes but for ZFS it does actually matter

 

https://blocksandfiles.com/2022/06/20/resilvering/

I'm a simple man running RAID 6 with NTFS on top... and SMB (but hey, at least we have dual 10Gb for SMB Multichannel...). It's dumb and basic, but it's all I really want to be bothered maintaining and setting up when I need the thing to run Plex and a file host accessible to the world, be accessible to other Windows boxes, host 2 Windows VMs, and be available 24/7/365, where I can't take it down for a week to clean-slate it. I'm in a mental state where I'm annoyed when I have to do SSL certificate renewal every year.



1 hour ago, AnonymousGuy said:

I'm a simple man running RAID 6 with NTFS on top... and SMB (but hey, at least we have dual 10Gb for SMB Multichannel...). It's dumb and basic, but it's all I really want to be bothered maintaining and setting up when I need the thing to run Plex and a file host accessible to the world, be accessible to other Windows boxes, host 2 Windows VMs, and be available 24/7/365, where I can't take it down for a week to clean-slate it. I'm in a mental state where I'm annoyed when I have to do SSL certificate renewal every year.

I still love all my hardware RAID cards and use them for various things; hard to beat them in low-HDD-count arrays.

