
Samsung Preps PCIe 4.0 and 5.0 SSDs With 176-Layer V-NAND

Summary

Samsung introduced the first mass-produced 3D NAND memory, dubbed V-NAND, in 2013, well ahead of its rivals. Samsung started with 24-layer V-NAND chips back then, and now, having gained plenty of experience with multi-layer flash memory, it is on track to introduce 176-layer V-NAND devices. But that's only the beginning: Samsung says it envisions V-NAND chips with more than 1,000 layers in the future.

Image credit: Samsung

 

Quotes

Quote

Samsung intends to begin producing consumer SSDs powered by its seventh-gen V-NAND memory that features 176 layers and, according to the company, the industry's smallest NAND memory cells. This new flash's interface boasts a 2000 MT/s data transfer rate, allowing Samsung to build ultra-fast SSDs with PCIe 4.0 and PCIe 5.0 interfaces. The drives will use an all-new controller 'optimized for multitasking huge workloads,' so expect a 980 Pro successor that demonstrates strong performance in workstation applications.

Over time, Samsung will introduce data center-grade SSDs based on its 176-layer V-NAND memory. It's logical to expect the new drives to feature enhanced performance and higher capacities. While 176-layer V-NAND chips are nearing mass production, Samsung has already built the first samples of its eighth-gen V-NAND with over 200 layers. Samsung says that it will begin producing this new memory based on market demand. Companies typically introduce new types of NAND devices every 12 to 18 months, so you could make more or less educated guesses about Samsung's planned timeline for 200+ layer V-NAND.

There are several challenges that Samsung and other NAND makers face in their pursuit of higher layer counts. Making NAND cells smaller (and layers thinner) requires new materials to store charges reliably, and etching through hundreds of layers is also challenging. Since it isn't feasible or economical to etch hundreds of layers in a single pass (e.g., to build a 1,000-layer 3D NAND stack in one go), manufacturers use techniques like string stacking, which is itself quite difficult to do in high volume.

Finally, flash makers need to ensure that their 3D NAND stacks are thin enough to fit into smartphones and PCs. As a result, they can't simply increase the number of layers forever, but Samsung believes that 1,000+ layer chips are feasible.

Image credit: AnandTech

My thoughts

Well, this is really interesting; we're already at 176 layers. To be honest, I have barely been in the tech space for a year, yet I can still see that this is a pretty big advancement. The article also mentions Samsung working toward 1,000+ layer chips, but I'd say it'll be quite a while before we see that; either way, this is still huge. Then again, earlier this year SK hynix said that it envisioned 3D NAND with over 600 layers, so Samsung is certainly not alone with its big plans for 3D NAND.
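As a quick sanity check on the 2000 MT/s interface figure quoted above, here's some back-of-the-envelope arithmetic (my own numbers, not from the article; I'm assuming an 8-bit-wide NAND channel and the usual x4 drive links):

```python
# My assumptions: 8-bit NAND channel (1 byte per transfer), x4 PCIe links,
# ~1.97 GB/s of usable bandwidth per PCIe 4.0 lane.
mt_per_s = 2000                          # quoted V-NAND interface speed
channel_gb_s = mt_per_s / 1000           # 1 byte/transfer -> 2.0 GB/s
pcie4_x4 = 4 * 1.97                      # ~7.9 GB/s usable, PCIe 4.0 x4
pcie5_x4 = 2 * pcie4_x4                  # PCIe 5.0 doubles the lane rate

print(f"per NAND channel: {channel_gb_s:.1f} GB/s")
print(f"channels to fill PCIe 4.0 x4: ~{pcie4_x4 / channel_gb_s:.0f}")  # ~4
print(f"channels to fill PCIe 5.0 x4: ~{pcie5_x4 / channel_gb_s:.0f}")  # ~8
```

So a typical 8-channel controller paired with this flash would, on paper, have enough raw NAND bandwidth to saturate even a PCIe 5.0 x4 link, which would explain Samsung mentioning both interfaces.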

 

Sources

https://www.tomshardware.com/news/samsung-envisions-1000-layer-v-nand

https://www.anandtech.com/show/16491/flash-memory-at-isscc-2021

50 minutes ago, J-from-Nucleon said:

Samsung says it envisions V-NAND chips with more than 1,000 layers in the future.

Might be, might not be. Just because they can fabricate something doesn't mean it will work. AMD used technology like this on their newest chip, but they used 3 layers, and those layers were memory, which doesn't produce a lot of heat. Thermals could be a major factor in a multilayer device.


As far as I know, they're making it by stacking two 96-layer dies and making connections between them. That saves silicon; there's less waste if a flaw turns up while making the layers.

 

Anandtech had an article recently with updates/news from ISSCC 2021: https://www.anandtech.com/show/16491/flash-memory-at-isscc-2021

 

 

Quote

Samsung, SK hynix, and Kioxia/WD presented information about their upcoming generations of 3D TLC. Not shown here is Micron's 176L TLC, because they haven't released most of this data for their latest generation of 3D NAND.

 

Unsurprisingly, it looks likely that Samsung will again be in the lead for performance, with the lowest read latency and fastest write speeds. However, their bit density is still clearly lagging even though they're claiming a 70% jump with this generation. In the past, their lagging density hasn't been as much of a downside as it might appear at first glance, because Samsung has been able to avoid using string stacking and can manufacture a stack of 128 layers as a single deck while their competitors have all had to split their stack into two decks, increasing the number of fab steps required.

 

This might be the generation that brings Samsung's inevitable adoption of string stacking, but if that's the case then their lingering density disadvantage is rather disappointing. On the other hand, if they've managed to put off that transition for one more generation and achieved this kind of density increase only using a combination of other techniques (most notably a CMOS under Array layout), then it's a very impressive advance and it would be safe to say that Samsung is years ahead of the competition when it comes to the high aspect ratio etching of the vertical channels that is the most critical fab step in scaling 3D NAND. We'll know more once Samsung discloses the actual layer count, but they're keeping that secret for now—which hints that they don't expect to have the highest layer count to brag about.

 

The TLC parts described by SK hynix and Kioxia/WD look fairly similar, save for the big difference that SK hynix is talking about a 512Gb die and Kioxia is talking about a 1Tb die. Both designs look to have similar performance and density, though Kioxia is touting a higher NAND interface speed. Kioxia and Western Digital have put out a press release announcing 162-layer 3D NAND, so they're a bit behind SK hynix and Micron for total layer count. 

 

In general, Intel has been more focused on QLC NAND than any of its competitors. This 144L QLC is the first generation of 3D NAND Intel hasn't co-developed with Micron, and it is unique in several respects. Intel taking its 3D NAND technology in a different direction from the rest of the industry will have interesting ramifications for their agreement to sell the NAND flash business to SK hynix, but in the short term it seems like Intel is getting the NAND they want to be selling. With only 144 layers, Intel is almost certainly now in last place for total layer count. Compared to 9x-layer QLC, Intel has much better performance and density—but QLC versions of the new TLC described by SK hynix and Kioxia should have comparable density. Intel has backed off from the frankly astronomical erase block size their 96L QLC used, but the 48MB block size of their new 144L QLC still seems a bit high.

 

There's A LOT more useful information in the article I linked to above.

 

[Comparison tables from the AnandTech article]

26 minutes ago, mariushm said:

Anandtech had an article recently with updates/news from ISSCC 2021: https://www.anandtech.com/show/16491/flash-memory-at-isscc-2021

Thanks, I'll add it to the sources

 

1 hour ago, Bombastinator said:

Might be, might not be. Just because they can fabricate something doesn't mean it will work. AMD used technology like this on their newest chip, but they used 3 layers, and those layers were memory, which doesn't produce a lot of heat. Thermals could be a major factor in a multilayer device.

Don't confuse in-silicon layers, which is what Samsung is talking about, with layers of silicon chips. In this old Intel slide they say they may use over 20 layers. Numbers today for both AMD and Intel may differ, but they will be of that general magnitude.

https://download.intel.com/pressroom/kits/chipmaking/Making_of_a_Chip.pdf

 

AMD's V-Cache is only two layers of silicon: the base chip we're used to plus the new additional cache chip, with thermal coupling over the cores on either side.

 

Flash isn't exactly high-heat either. I'd have to guess on average it runs even cooler than SRAM cache, mainly because it is less intensely used. Look at M.2 drives: while some might have a heatsink, it is more for the benefit of the controller. The flash itself doesn't really need cooling in most cases. Just don't cook them by parking a 24/7-loaded GPU on top like I did.


C:\ = TLC

D:\ and above = QLC

 

🙂

 

Link to post
Share on other sites
5 hours ago, porina said:

The flash itself doesn't really need cooling in most cases

NAND doesn't like being too cool either, so it's actually best to thermally connect only the controller to any heatsinks and let the flash sit at around 30-40°C.

 

Quote

NAND is subject to two competing factors relative to temperature. At high temperature, programming and erasing a NAND cell is relatively less stressful to its structure, but data retention of a NAND cell suffers. At low temperature, data retention of the NAND cell is enhanced but the relative stress to the cell structure due to program and erase operations increases.

https://www.eeweb.com/industrial-temperature-and-nand-flash-in-ssd-products/
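For rough numbers on that tradeoff, the usual first-order model is Arrhenius temperature scaling. Here's a quick sketch (my own illustration, not from the linked article; the 1.1 eV activation energy is a commonly cited ballpark for NAND retention, so treat it as an assumption):

```python
import math

# Arrhenius acceleration factor between two temperatures: charge loss that
# takes time t at the cooler temperature takes roughly t / AF at the hotter
# one. Ea and the temperatures below are illustrative assumptions.
K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_cool_c: float, t_hot_c: float, ea_ev: float = 1.1) -> float:
    t_cool_k = t_cool_c + 273.15
    t_hot_k = t_hot_c + 273.15
    return math.exp((ea_ev / K_BOLTZMANN_EV) * (1 / t_cool_k - 1 / t_hot_k))

# Retention degrades roughly this many times faster at 70°C than at 40°C:
print(f"~{acceleration_factor(40, 70):.0f}x")  # ~35x
```

Which is why letting the flash idle around 30-40°C, as above, is a reasonable target.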


The benefit of V-NAND was that it didn't sacrifice durability for cost and space while packing in more cells. The approach supposedly uses thicker NAND gates that take longer to wear out from writes, so they last much longer. Stuffing in more cells increases density, but when the cells are thinner/smaller they have less material and wear out faster as a result.

4 hours ago, leadeater said:

NAND doesn't like being too cool either, so it's actually best to thermally connect only the controller to any heatsinks and let the flash sit at around 30-40°C.

 

https://www.eeweb.com/industrial-temperature-and-nand-flash-in-ssd-products/

I have some questions about data retention with an active drive in use:

 

My M.2 SSDs hover anywhere from 52°C to nearly 70°C depending on where on the motherboard they sit and whether they're under a GPU during extended gaming. As referenced, heat is less stressful on NAND, but at the cost of data retention. So here's where things get unclear for me.

  • Does the data retention metric apply to a drive in offline storage (unplugged or turned off)?
  • If it also applies to active SSDs in use, does the firmware force a periodic re-read in the background (aka data patrolling) to recharge the cells?
  • Does a refresh only occur when the firmware picks up the cell data and re-writes it to another block during a wear-leveling operation?
  • Will performing a periodic full backup suffice, since that forces reading all memory cells of interest that contain data, thus increasing retention?

 


Neat, excited to see next gen SSDs from them.

I'd like to see them release their Z-SSD for consumers too.

3 hours ago, StDragon said:

Does the data retention metric apply to a drive in offline storage (unplugged or turned off)?

Mostly, but it also applies to bit flipping; I think the time frame is long enough that it isn't much of a problem unless the drive runs very hot.

 

3 hours ago, StDragon said:

If it also applies to active SSDs in use, does the firmware force a periodic re-read in the background (aka data patrolling) to recharge the cells?

Not like DRAM, I don't think. The controller can correct errors, but it can't correct too many: if only a few bits are wrong it'll be able to correct them, but there is a limit to how well it can do that.
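To make that limit concrete, here's a toy sketch (my own illustration; real drives use BCH/LDPC codes and the actual numbers vary by vendor, so the constant below is hypothetical):

```python
# Toy model of per-codeword ECC: up to T_CORRECTABLE flipped bits are
# recoverable; one more and the read comes back uncorrectable.
T_CORRECTABLE = 40  # hypothetical correction capability, in bits

def read_codeword(raw_bit_errors: int) -> str:
    """Model the controller's decision: correct silently, or fail the read."""
    if raw_bit_errors <= T_CORRECTABLE:
        return f"corrected {raw_bit_errors} bit error(s)"
    return "uncorrectable read error (data loss without other redundancy)"

# As cells leak charge over time, the raw error count drifts upward.
for errors in (2, 25, 41):
    print(f"{errors:>3} raw errors -> {read_codeword(errors)}")
```

This is also why rewriting data before it degrades matters: a rewrite resets the raw error count to near zero instead of letting it creep toward the limit.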

 

3 hours ago, StDragon said:

Does a refresh only occur when the firmware picks up the cell data and re-writes it to another block during a wear-leveling operation?

 

3 hours ago, StDragon said:

Will performing a periodic full backup suffice, since that forces reading all memory cells of interest that contain data, thus increasing retention?

20 minutes ago, leadeater said:

Not like DRAM, I don't think. The controller can correct errors, but it can't correct too many: if only a few bits are wrong it'll be able to correct them, but there is a limit to how well it can do that.

I guess my vernacular is a bit out of date; substitute "scrubbing" for "patrolling". Anyway, I ran across a similar response on Reddit by wtallis:

 

Quote

"Power them on and read your data. Flash memory cannot be refreshed simply by supplying power to the chip. If you want to forestall data degradation, you need to actively read each byte you care about, so that the SSD controller can notice correctable errors and re-write that data before it gets bad enough to cause uncorrectable errors. Many SSDs will do at least some background data scrubbing, but if you're using a SSD for long-term data storage, you really shouldn't rely on that unobservable, undocumented process to maintain data integrity." -wtallis

 

Fascinating. I really wish the vendors were a bit more transparent about this process, but I can imagine the firmware algorithms being deemed intellectual property and not shared within the industry. 😞
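The practical takeaway from that quote is that you can at least force the controller to look at your data by reading all of it. A minimal sketch (my own; the device path is an assumption, it needs root, and whether a read actually triggers rewrites is still up to the firmware, per the quote above):

```python
import sys

# Read an entire block device sequentially so the controller sees every
# stored byte and gets the chance to fix correctable errors it finds.
CHUNK = 4 * 1024 * 1024  # 4 MiB reads keep syscall overhead low

def read_entire_device(path: str) -> int:
    total = 0
    with open(path, "rb", buffering=0) as dev:
        while True:
            chunk = dev.read(CHUNK)
            if not chunk:          # end of device
                return total
            total += len(chunk)

if __name__ == "__main__":
    device = sys.argv[1] if len(sys.argv) > 1 else "/dev/nvme0n1"
    print(f"read {read_entire_device(device)} bytes from {device}")
```

A periodic full backup, as asked above, does essentially the same thing for the blocks that actually hold your files.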


Another thing worth mentioning is the new NVMe specification that brings ZNS (Zoned Namespaces), where the OS gets more direct access to the NAND, rather than strictly handing data over to the SSD controller and leaving it to do all the data management. Apparently this means new NVMe SSDs could technically come as literally an M.2 stick with NAND on it and nothing else: a controller-less, DRAM-less SSD where the CPU and OS do all the data management, and the missing DRAM is compensated for by borrowing a slice of the main RAM pool, which is already an NVMe feature (Host Memory Buffer). What this means is we could see super dense M.2 sticks thanks to the extra space you gain by removing the controller and DRAM, and prices could drop dramatically. I'm just unsure how much of an impact this would have on the system's CPU and overall performance. I wonder if there is any info on whether Samsung is planning to support these new NVMe features. Could be a really exciting time for this, assuming shitty Chia doesn't gain massive traction and fuck up the entire SSD market *thanks cryptominers very much*

12 minutes ago, RejZoR said:

Apparently this means new NVMe SSDs could technically come as literally an M.2 stick with NAND on it and nothing else: a controller-less, DRAM-less SSD where the CPU and OS do all the data management, and the missing DRAM is compensated for by borrowing a slice of the main RAM pool, which is already an NVMe feature (Host Memory Buffer).

It can't be controller-less, as you still need an NVMe interface and firmware. At best it'll become a raw PCIe device with no storage-specific logic other than a PCIe device descriptor of "Storage Device" so the system knows it's storage and not, say, a NIC or a GPU or whatever.

22 minutes ago, leadeater said:

It can't be controller-less, as you still need an NVMe interface and firmware. At best it'll become a raw PCIe device with no storage-specific logic other than a PCIe device descriptor of "Storage Device" so the system knows it's storage and not, say, a NIC or a GPU or whatever.

I don't know the specifics, but I guess the new NVMe specification allows the sort of things that weren't possible before.

24 minutes ago, RejZoR said:

I don't know the specifics, but I guess the new NVMe specification allows the sort of things that weren't possible before.

True, it's just that something actually has to implement the NVMe and PCIe protocols, as the NAND flash chips are just flash memory and don't do anything beyond that as yet. Even if NAND got on-die PCIe and NVMe, you'd still need a PCIe switch chip between the chips and the CPU for multi-chip SSDs. Things could change for sure, but something has to speak both PCIe and NVMe in the first place.

 

We've seen AI cores being proposed as part of HBM chips, so more feature-rich NAND chips aren't out of the question going off that.

57 minutes ago, RejZoR said:

Another thing worth mentioning is the new NVMe specification that brings ZNS (Zoned Namespaces), where the OS gets more direct access to the NAND, rather than strictly handing data over to the SSD controller and leaving it to do all the data management.

Had to look it up, and my interpretation of what it is differs from that somewhat. It seems to be a different controller, not no controller. The zones can be mapped to specific applications, which are responsible for writing to them in a particular way to take advantage of it. I can't see it being useful for regular users, since applications need to be aware of it to benefit. It looks like a niche solution for some specific tasks.

7 minutes ago, porina said:

Had to look it up, and my interpretation of what it is differs from that somewhat. It seems to be a different controller, not no controller. The zones can be mapped to specific applications, which are responsible for writing to them in a particular way to take advantage of it. I can't see it being useful for regular users, since applications need to be aware of it to benefit. It looks like a niche solution for some specific tasks.

You're right, it is mostly targeted at enterprise and NVMe-oF (storage fabrics), but there is a use case in the spec for zones within the storage device itself, to better handle the SLC caching that is done today. The birthplace of this was actually SMR HDDs: they have a CMR/PMR zone that writes get staged into before the disk controller decides where to place the data, reads out any data that needs to be overwritten, then writes the block of data to the SMR zone.

 

NVMe zones allow for a more standardized way, within the NVMe spec, to handle SLC caching.

 

Check the last page of https://www.anandtech.com/show/15959/nvme-zoned-namespaces-explained for a "where it's at today" overview. No surprise, it's already supported in the latest Linux kernel, and there are ways to partition out the zones for filesystems that are not zone-aware as well.

 

Basically, OSes and their filesystems need to become zone-aware and handle this transparently, which I can see happening at least on the Linux side of things.
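For anyone wondering what "zone-aware" means mechanically, here's a toy model of the semantics (my own sketch, not the real NVMe command set): each zone is written strictly sequentially at a write pointer, can be read freely, and is reclaimed only by resetting the whole zone, which maps neatly onto how NAND erase blocks behave.

```python
# Toy model of a ZNS zone: sequential-only writes at a write pointer,
# unrestricted reads, and whole-zone resets for space reclamation.
class Zone:
    def __init__(self, size_blocks: int):
        self.size = size_blocks
        self.wp = 0                      # write pointer, in blocks
        self.data = []                   # one entry per written block

    def append(self, block: bytes) -> None:
        if self.wp >= self.size:
            raise IOError("zone full: reset it before reusing")
        self.data.append(block)          # always lands at the write pointer
        self.wp += 1

    def read(self, lba: int) -> bytes:
        return self.data[lba]            # random reads are fine

    def reset(self) -> None:
        self.wp, self.data = 0, []       # whole-zone erase, like a NAND block

zone = Zone(size_blocks=4)
for i in range(4):
    zone.append(f"record-{i}".encode())
print(zone.read(2))   # b'record-2'
zone.reset()          # the host, not the drive, decides when to reclaim
```

The point is that the host takes over the placement decisions a conventional controller's flash translation layer makes internally.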

10 minutes ago, leadeater said:

Basically, OSes and their filesystems need to become zone-aware and handle this transparently, which I can see happening at least on the Linux side of things.

That article put a different view on it, and I'm more confused now. If you view it as a specific solution, it makes sense. Does it also work as a general solution? I suppose the question is, what happens if you have a zoned SSD with OS support but without application support? Can the OS take over and use it pretty much like current device-managed SSDs? I'm really questioning whether this is suitable for consumer devices any time soon.

 

Part of me is also concerned: do I want Windows to manage things at this level at all? My gut feeling is that any OS wobbles are far more likely to cause data corruption than if the drive was doing its own thing. I guess you could argue that the OS could already send bad data to a drive, but the data within the drive would still be logically consistent. I'm not sure that would remain the case if the OS takes more control.

2 minutes ago, porina said:

Part of me is also concerned: do I want Windows to manage things at this level at all? My gut feeling is that any OS wobbles are far more likely to cause data corruption than if the drive was doing its own thing.

Well, that isn't really any different from how it is today; NTFS is still fundamentally an OS thing, after all. Unlike SMR HDDs, a zoned NVMe SSD wouldn't be overlapping data, so readability is much more straightforward.

 

You could do it fully device-managed, with OS/filesystem-exposed metadata to allow for tuning and optimization, or you could do it device-assisted, where the device moves data between zones but only at the request of the filesystem, much like a TRIM command.

 

9 minutes ago, porina said:

I suppose the question is, what happens if you have a zoned SSD with OS support but without application support?

Shouldn't matter, as it's no different from a RAID logical volume, ZFS, a Storage Spaces virtual disk, or some other type of abstraction: as long as there is a filesystem on top of it, any application can write data to it. Direct application support is more for when you want to bypass the filesystem entirely and write directly to the device; Ceph would be an example of that.

2 minutes ago, leadeater said:

Well, that isn't really any different from how it is today; NTFS is still fundamentally an OS thing, after all. Unlike SMR HDDs, a zoned NVMe SSD wouldn't be overlapping data, so readability is much more straightforward.

I was thinking that right now, filesystem-level corruption does not lead to device-level data corruption. Data on the device is still logically consistent (even if the filesystem data is wrong). Basically, the lower the level the OS operates at, the lower the level any potential corruption could reach.

 

2 minutes ago, leadeater said:

Direct application support is more for when you want to bypass the filesystem entirely and write directly to the device

I think that might have been the point I didn't get; I was seeing it as more all-or-nothing. Still, I'm not sure I'm sold on this, but my preferred solution would be to move away from flash, and a lot of these current problems disappear. Affordable Optane for everyone, please.

6 minutes ago, porina said:

I was thinking that right now, filesystem-level corruption does not lead to device-level data corruption. Data on the device is still logically consistent (even if the filesystem data is wrong). Basically, the lower the level the OS operates at, the lower the level any potential corruption could reach.

Well, I would expect the main data zone to be readable as if no zoning were happening at all, since the SLC cache zone is just acting as a write cache. So the only risk is data not yet moved to the main zone; the main data zone and the filesystem as a whole would be largely intact and readable. Personally, I'd also have it this way for legacy fallback: if you have an old version of the filesystem that is not zone-aware, you'll still see all the data, but the SSD will just be much slower to write to.

 

Edit:

Maybe even a dual mode, where the device operates fully device-managed if the OS doesn't report zone support to it. Device-managed is the easier, safer option for wide-scale consumer use, though.

8 minutes ago, porina said:

I think that might have been the point I didn't get; I was seeing it as more all-or-nothing. Still, I'm not sure I'm sold on this, but my preferred solution would be to move away from flash, and a lot of these current problems disappear. Affordable Optane for everyone, please.

Why not an NVMe drive that has an Optane zone and a NAND zone 🙂

6 minutes ago, leadeater said:

Why not an NVMe drive that has an Optane zone and a NAND zone 🙂

I think Intel already makes an M.2 SSD that has both on it, separately addressable. Since we don't have zones, it requires software-level solutions to manage the cache and store portions.

 

Personally, I'm still not big on this. It is complexity, which to me is risk. I'm OK with built-in caches like SSDs and SSHDs have, but the more exposed it is to the OS or even the user level, the less I want to know about it.

2 hours ago, porina said:

I think Intel already makes an M.2 SSD that has both on it, separately addressable. Since we don't have zones, it requires software-level solutions to manage the cache and store portions.

Personally, I'm still not big on this. It is complexity, which to me is risk. I'm OK with built-in caches like SSDs and SSHDs have, but the more exposed it is to the OS or even the user level, the less I want to know about it.

This whole zone thing is really for tiered storage in a pool of NAND-based storage. Each storage unit in the pool isn't aware of the others' utilization metrics and performance characteristics, so the OS manages where the data is physically placed. Some of that might be local, while other data might be placed in another box linked with NVMe-oF in another physical cabinet.

 

 
