Straight out of the NVMe onto the GPU - Microsoft releases their DirectStorage API

igormp

Summary

Microsoft finally released their DirectStorage API to the public, making it possible for any dev to try it out with their games and programs.

 

Quotes

Quote

Starting today, Windows games can ship with DirectStorage. This public SDK release begins a new era of fast load times and detailed worlds in PC games by allowing developers to more fully utilize the speed of the latest storage devices. In September 2020, we announced DirectStorage would be coming to Windows, and after collecting feedback throughout our developer preview, we are making this API available to all of our partners to ship with their games. Check out the announcement blog for an in-depth exploration of the inspiration for DirectStorage and how it will benefit Windows games.

 

My thoughts

It's nice to see PCIe P2P DMA finally accessible through an easy-to-use abstraction layer; let's see how people make use of it. I wonder if it'll be possible to use this API for non-DirectX workloads, such as databases or anything else that wants to transfer data between an NVMe drive and something like a NIC (something that's already possible on Linux).
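For the curious, using the API looks roughly like this — a minimal sketch along the lines of the SDK's HelloDirectStorage sample (error handling omitted; `device` and `buffer` stand in for an already-created ID3D12Device and destination buffer resource):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <dstorage.h>
#include <wrl/client.h>
#include <cstdint>

using Microsoft::WRL::ComPtr;

// Enqueue a read of `size` bytes from `path` straight into a D3D12 buffer,
// then block on a fence until the request completes.
void LoadIntoBuffer(ID3D12Device* device, ID3D12Resource* buffer,
                    const wchar_t* path, uint32_t size)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(path, IID_PPV_ARGS(&file));

    // One queue per source type/priority; requests are batched until Submit().
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // Describe the transfer: file offset/size in, GPU buffer out.
    DSTORAGE_REQUEST request{};
    request.Options.SourceType      = DSTORAGE_REQUEST_SOURCE_FILE;
    request.Options.DestinationType = DSTORAGE_REQUEST_DESTINATION_BUFFER;
    request.Source.File.Source      = file.Get();
    request.Source.File.Offset      = 0;
    request.Source.File.Size        = size;
    request.Destination.Buffer.Resource = buffer;
    request.Destination.Buffer.Offset   = 0;
    request.Destination.Buffer.Size     = size;
    request.UncompressedSize        = size;
    queue->EnqueueRequest(&request);

    // Completion is signalled through an ordinary D3D12 fence.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    HANDLE event = CreateEventW(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, event);
    queue->EnqueueSignal(fence.Get(), 1);

    queue->Submit();
    WaitForSingleObject(event, INFINITE);
    CloseHandle(event);
}
```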

 

Now all that's left is to see applications making actual use of it, and how well it catches on as time goes on.

 

Sources

https://devblogs.microsoft.com/directx/directstorage-api-available-on-pc/

https://www.phoronix.com/scan.php?page=news_item&px=Microsoft-DirectStorage-API



Nice. I wonder how long it'll take before it starts to get adoption. I imagine quite some time, seeing as most people make games for the lowest common denominator, and fast storage (NVMe/PCIe) won't be widely adopted for a while yet.



8 minutes ago, dizmo said:

Nice. I wonder how long it'll take before it starts to get adoption. I imagine quite some time, seeing as most people make games for the lowest common denominator, and fast storage (NVMe/PCIe) won't be widely adopted for a while yet.

Yup, like how games used to be coded for 16 KB of RAM so you wouldn't be special if you bought a $600 64 MB expansion cartridge; it'd run the same...

 

I'm interested in when CAD software and video editors will adopt this. They already assume you have a powerful machine...

Also, I wouldn't be surprised if there's at least one obscure AIB card with an NVMe drive and a GPU on the same board, like Nvidia did with a few cards for PhysX.


A bit disappointing there's no game showcase to get things going. Game devs most likely would have had preview versions, and I assume the same tech is already on Xbox, with a close parallel on PlayStation, so this shouldn't be totally new to them.



I wonder how well this'd work on a QLC SSD, because my P1 has far more writes left in it than my SX8200 Pro.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


9 hours ago, Mel0nMan said:

 

I'm interested in when CAD software and video editors will adopt this. They already assume you have a powerful machine...

 

They probably won't. This is a feature that relies on the interaction between disk and GPU, which is something that only machine learning and procedurally generated games (e.g. Minecraft) can really take advantage of. Most other games and software only load to the GPU once; they aren't constantly swapping out textures every few seconds. That's why "loading tunnels" were a thing 12 years ago in games. To make the game seamless, you'd have the game do this:

 

Area A (e.g. combat zone) -> Area B (enemies despawn as you enter) -> Area C (city with no combat)

 

Because the entire zone's textures and geometry had to exist on the GPU, what most games did (and how Unity and Unreal 4.x games are designed) is make the game world a series of big and small cubes with specific exits. You cannot make a "Minecraft" inside Unity or Unreal, because that's not how those engines are designed to be operated, and you'd be swimming upstream re-implementing things.

 

No, MMORPG/MMOFPS-type open worlds are going to be the primary beneficiary of this feature, because it removes the obstacle of needing to have "zones" at all; they can move to Minecraft-ish procedurally generated worlds where the data on disk is simply a cache of what you've seen. It'll still be synchronized against a server, but it could now be done without everyone needing TBs of disk space for the entire game world if they never visit areas they don't play in. All those fixed reusable assets? They can stay on disk.

 

Imagine an MMO where you no longer need to download GBs of patches of static assets. Instead it just streams things as you get near them, and the GPU can just use what it needs when you get near the location that needs it. This is in fact how games made for early-90s consoles did things. As soon as you moved more than X tiles in any direction, the game would actually move the pointers for where to "start" drawing, but the tiles were still in video memory. The character sprites were still in the center of the screen; they hadn't actually moved, the world did.
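Roughly, the trick looks like this (a toy sketch, not from any particular game; `DrawTile` is a hypothetical renderer hook):

```cpp
#include <cstdint>

// Hypothetical renderer hook: draw one tile at screen cell (sx, sy).
void DrawTile(int sx, int sy, uint8_t tile);

constexpr int MAP_W = 256, MAP_H = 256;  // the whole world, always resident
constexpr int VIEW_W = 20, VIEW_H = 15;  // tiles visible on screen

uint8_t tileMap[MAP_H][MAP_W];           // never leaves (video) memory

// "Scrolling" just changes where we start reading the resident map;
// the player sprite stays drawn at the center of the screen.
void DrawView(int camX, int camY)        // camera position, in tiles
{
    for (int y = 0; y < VIEW_H; ++y)
        for (int x = 0; x < VIEW_W; ++x)
            DrawTile(x, y, tileMap[camY + y][camX + x]);
}
```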

 

Another thing this might open the door for, provided internet bandwidth actually gets that good, is streaming avatar data. Instead of sending the actual model and texture data for the GPU to render, it would instead send a 2D or 3D "texture" pre-baked on the client side. Think primitive holodeck.

 


28 minutes ago, Kisai said:

They probably won't. This is a feature that relies on the interaction between disk and GPU, which is something that only machine learning and procedurally generated games (e.g. Minecraft) can really take advantage of. Most other games and software only load to the GPU once; they aren't constantly swapping out textures every few seconds. That's why "loading tunnels" were a thing 12 years ago in games. To make the game seamless, you'd have the game do this:

I think you should re-watch the DirectX DirectStorage announcement and the PS5 Unreal Engine technical demo. The entire point is to not do this pre-loading but to stream directly to the GPU as and when needed. It's a fundamental change; how things are done now has no relation to DirectStorage, other than to show how it was done vs how it can now be done.

 

https://www.pcgamer.com/au/fast-ssd-storage-is-key-to-the-unreal-engine-5-demos-super-detailed-scenes/

 

https://devblogs.microsoft.com/directx/directstorage-is-coming-to-pc/


6 minutes ago, leadeater said:

The entire point is to not do this pre-loading but to stream directly to the GPU as and when needed.

I think you need to read the entire post before making quick judgements.

 

I know what the point of DirectStorage is. The question was "can it be used for something other than games?", to which the answer is "probably not".

 

AutoCAD, 3D modeling software, and the like sample what they want to show in the viewport. When you're working on a CAD blueprint for a factory with a hundred other people, segments of the blueprint are updated asynchronously over the network, but the part YOU are working on is local to your PC until you hit save. If two people work on the same part, then some conflict resolution has to happen. CAD and modeling software pretty much isn't going to take advantage of it.

 

Unity and Unreal (4.x and earlier) were designed for "box-shaped" levels/stages/zones. They cannot be retroactively changed to take advantage of DirectStorage, because even assuming you could, all that would happen is a reduction in the initial load time of the stage; nothing else would change.

 

Machine learning possibly could take advantage of it, but that would depend on the characteristics of the machine learning.

 

https://github.com/microsoft/DirectStorage

 

Quote

DirectStorage only supports read operations.

 

It also doesn't do decompression in hardware; that's done on the CPU. So that remains a bottleneck.

 


16 minutes ago, Kisai said:

I know what the point of DirectStorage is. The question was "can it be used for something other than games?", to which the answer is "probably not".

Well, you did a lot of talking about games for a response about CAD 🤷‍♂️

 

But it was your comments about games that were also not correct.

 

16 minutes ago, Kisai said:

Unity and Unreal (4.x and earlier) were designed for "box-shaped" levels/stages/zones. They cannot be retroactively changed to take advantage of DirectStorage, because even assuming you could, all that would happen is a reduction in the initial load time of the stage; nothing else would change.

Excuse me? You didn't read my sources then; Unreal has literally been updated to support this model.

 

16 minutes ago, Kisai said:

It also doesn't do decompression in hardware; that's done on the CPU. So that remains a bottleneck.

Wrong; again, actually read my sources.

 

[screenshot attachment]


26 minutes ago, leadeater said:

 

 

Excuse me? You didn't read my sources then; Unreal has literally been updated to support this model.

What part of "Unreal 4.x" are you incapable of reading? I know 5 exists; you're just never going to get anyone to upgrade to it from a 4.x project. You may as well start from scratch.

 

26 minutes ago, leadeater said:

Wrong; again, actually read my sources.

 

[screenshot attachment]

Again, you're not reading anything linked.

 

https://github.com/microsoft/DirectStorage/blob/main/Samples/MiniEngine/DirectStorage/README.md

 

Quote

The sample also supports zlib decompression, using DirectStorage’s custom decompression queue. Zlib decompression isn’t particularly fast, so this sample uses multiple threads to decompress data in parallel.

https://github.com/microsoft/DirectStorage/blob/main/Samples/MiniEngine/DirectStorage/ModelViewerDecompressionDiagram.png

Quote

Of note here, the destination buffer provided by DirectStorage is likely to be in an upload heap. This means that it is write combined memory, which should only be written to sequentially, and not read from. It turns out that zlib’s uncompress function reads from the buffer it is writing to. For this reason, we decompress to a temporary buffer that we then memcpy to the destination buffer. If there is an error then this is recorded in the ErrorCode atomic.

That is literally describing a bottleneck.
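In code, the workaround the README describes looks roughly like this (a sketch assuming zlib; `wcDest` stands in for the DirectStorage-provided upload-heap destination):

```cpp
#include <zlib.h>
#include <cstdint>
#include <cstring>
#include <vector>

// zlib's uncompress() reads back from the buffer it writes to, which
// write-combined (upload heap) memory is terrible at, so inflate into a
// scratch buffer in ordinary system memory first, then copy the result
// to the destination with sequential writes only.
bool DecompressToUploadHeap(uint8_t* wcDest, size_t destSize,
                            const uint8_t* src, size_t srcSize)
{
    std::vector<uint8_t> scratch(destSize);
    uLongf outLen = static_cast<uLongf>(destSize);
    if (uncompress(scratch.data(), &outLen, src,
                   static_cast<uLong>(srcSize)) != Z_OK)
        return false;                              // the sample records this in ErrorCode
    std::memcpy(wcDest, scratch.data(), outLen);   // sequential write to WC memory
    return true;
}
```

The extra scratch allocation and memcpy per request is exactly the kind of CPU-side overhead being pointed at here.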

 

You can lossy-compress textures to gain a read/decompression parallelization advantage. You cannot do that with geometry, which will be useless if you make it lossy.

 

The way games are designed has to fundamentally change to take advantage of DirectStorage. You cannot simply change the filesystem abstraction and expect performance changes. You'd have to make the active decision to use parallelized compression routines, where each chunk is indexed. A lossless blob cannot be compressed in parallel as a single stream; it has to be cut up, which costs compression efficiency by losing the compression dictionaries between chunks, which in turn means the compressible blocks have to be big enough that the output doesn't end up bigger than the input.
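A sketch of that chunked scheme with zlib, one thread per chunk (chunk size and threading strategy are illustrative, not from the SDK):

```cpp
#include <zlib.h>
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Cut the blob into fixed-size chunks and compress each with its own zlib
// stream. Every chunk starts with a fresh dictionary (worse ratio), but in
// exchange each one can be (de)compressed on its own thread and addressed
// by index.
std::vector<std::vector<uint8_t>> CompressChunked(
    const std::vector<uint8_t>& blob, size_t chunkSize)
{
    const size_t count = (blob.size() + chunkSize - 1) / chunkSize;
    std::vector<std::vector<uint8_t>> chunks(count);
    std::vector<std::thread> workers;

    for (size_t i = 0; i < count; ++i)
        workers.emplace_back([&, i] {
            const size_t off = i * chunkSize;
            const uLong inLen =
                static_cast<uLong>(std::min(chunkSize, blob.size() - off));
            uLongf outLen = compressBound(inLen);
            chunks[i].resize(outLen);
            compress2(chunks[i].data(), &outLen, blob.data() + off,
                      inLen, Z_BEST_SPEED);
            chunks[i].resize(outLen);  // tiny chunks can end up larger than the input
        });

    for (auto& w : workers) w.join();
    return chunks;  // ship alongside an index of chunk offsets/sizes
}
```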

 

MMO games already do this, because otherwise you'd be downloading patches bigger than the content already on disk. But games that ship on disc? They've been doing the opposite, arranging files on the filesystem so that the files most frequently accessed together sit next to each other.

 


4 hours ago, Dabombinable said:

I wonder how well this'd work on a QLC SSD, because my P1 has far more writes left in it than my SX8200 Pro.

QLC or cacheless shouldn't be a problem. At the point where you're playing the game, you're mostly just reading, not writing.

 

It'll be interesting to see if this actually does anything and is not just marketing mumbo-jumbo used to sell a new console generation. If you're on a SATA SSD, loading times are already insignificant imo.



14 minutes ago, Kisai said:

Again, you're not reading anything linked.

I know, because that's an example project. I'm linking you to the official guy from Microsoft talking about it, who literally states multiple times that decompression is done on the GPU. So it's done on the GPU 🤦‍♂️

 

It being done another way in a sample project is not the DirectStorage specification, nor what it supports and can do.

 

Your sources do not override the official statements from Microsoft and the people working on this project.

 

[screenshot attachment]

 

How many times do you need me to link direct, first-party statements from the developers of DirectStorage that decompression is done on the GPU?

 

 

14 minutes ago, Kisai said:

What part of "Unreal 4.x" are you incapable of reading? I know 5 exists; you're just never going to get anyone to upgrade to it from a 4.x project. You may as well start from scratch.

That's a pretty wild statement when all the console development is done on UE5 rather than UE4 (major devs, of course) and basically every AAA game is console-first, so nope, plenty have already moved to UE5.

 

Eh, if you want to keep being wrong, your choice. Spreading misinformation or incorrect information, however, is not cool.


55 minutes ago, leadeater said:

 

 

Eh, if you want to keep being wrong, your choice. Spreading misinformation or incorrect information, however, is not cool.

Mr. Leadeater, sir, you come into these threads sometimes as the aggressive nerd nitpicking typos as grievous injuries to the English language. That is not what's happening here, and your insistence on gaslighting people into believing something they didn't say is just insulting.

 

The objective fact is that nobody is going to "port" a UE4 game to UE5 just to take advantage of DirectStorage. You're going to see new games that started on UE5 back when UE5 was still in development two years ago. It's the same for anything else like Unity or Godot. You pick the version of the engine you're going to use, and you are stuck with it f-o-r-e-v-e-r. Because API feature creep is a thing, and software developers have this asinine obsession with breaking things just to take advantage of the latest C++ feature we didn't need.

 

Go ahead, try to recompile a 12+ year old game using today's DirectX 12 SDK. I can guarantee you it will not work. UE4's development dates back to 2003. It's an established, nearly 20-year-old production pipeline that Epic has maintained and kept most of the feature creep out of.

 


6 hours ago, Dabombinable said:

I wonder how well this'd work on a QLC SSD, because my P1 has far more writes left in it than my SX8200 Pro.

As long as it has sufficient read speeds it should be fine. Outside of installing or patching, write performance doesn't really matter; games don't generally do many writes. Maybe a little for logging, state information or saves.

 

2 hours ago, Stahlmann said:

It'll be interesting to see if this actually does anything and is not just marketing mumbo-jumbo used to sell a new console generation. If you're on a SATA SSD, loading times are already insignificant imo.

The problem we have on PC is that we're not making the most of the technology, due to the double-edged sword of wide compatibility, including scaling down to much older hardware.

 

Perhaps not the best example, but on my PS4 I swapped the included HDD for an SSD. The difference was underwhelming. Because the software expected to be on a slow HDD, it was heavily compressed; with the storage bottleneck largely removed, it immediately hit a processing bottleneck. On the PS5, where a fast SSD is the norm and not the exception, it is a different experience. I've played Gran Turismo through the years; load times were quite meh on PS4. Now that I've got the current PS5 version, load times are insignificant.

 

Another test, this time on PC, is FFXIV loading times. Could I reduce them? HDD to SATA SSD was a big jump. SATA SSD to flash NVMe was a small jump. Flash to Optane was a tiny jump, with hardly any more difference going to a ramdisk. The limitation? Again, it came down to the CPU. I don't know how possible it is, but I'd love it if game devs would offer an "SSD optimised repack" option for data files, to lower the CPU load and make more use of storage speeds, even if there might be some cost in data size. Outside of game storage, when testing Optane you could see performance change not insignificantly with CPU speed.

 

Another example: look at the disk benchmark UL (3DMark) released not that long ago. They profiled actual game loads on PC, and they're basically very low compared to peak SSD speeds, so there's not as much difference as you might expect between SATA and NVMe. We need a change in how storage is used to make use of the faster speeds.



1 hour ago, Kisai said:

The objective fact is that nobody is going to "port" a UE4 game to UE5 just to take advantage of DirectStorage.

True, that is a point, but it's also not a big factor when most will already be developing new games in UE5 right now anyway. The list of upcoming UE5 games is quite decent, and even an existing game, Fortnite, is going to be updated from UE4 to UE5. Not that I expect that to be common though, so you have a point there.

 

I hardly think existing games, and projects underway on older versions of UE or Unity but not yet finished/released, are of much relevance.

 

1 hour ago, Kisai said:

That is not what's happening here, and your insistence on gaslighting people into believing something they didn't say is just insulting.

Your original post was just full of errors, completely missing the mark and downplaying what DirectStorage is and will do. You went to great pains to say or imply that only games like Minecraft or MMOs will benefit from this, which is just not correct. This is the information I am talking about that was wrong. All I did was point you to official information about what it is and what it's going to do; I think you should take a look in the mirror at how you react to criticism. Attitudes are like springboards: they bounce back. You get only what is given to you by the way you reply and argue.

 

Saying something that is incorrect is not a problem; nobody is perfect. Arguing in the face of evidence that what you have said is wrong is just really silly.

 

4 hours ago, Kisai said:

I know what the point of DirectStorage is. The question was "can it be used for something other than games?", to which the answer is "probably not".

As to this point, which you actually talked about very little and which I don't even disagree with (which is why I didn't respond to it): I don't think CAD software is limited by the things DirectStorage tries to address. So whether it could or could not use it, I'm unclear on the benefits for such software at all.

 

The only thing I can think of is that when zooming and moving around, anything not visible is released from GPU memory and streamed back in as required, so the only benefit could be something like needing less GPU memory for projects; however, the maximum possible usage would, I think, be the same. So I can't see much benefit.


34 minutes ago, porina said:

I don't know how possible it is, but I'd love it if game devs would offer an "SSD optimised repack" option for data files, to lower the CPU load and make more use of storage speeds, even if there might be some cost in data size. Outside of game storage, when testing Optane you could see performance change not insignificantly with CPU speed.

I've seen games that include "HDD modes" such as Cyberpunk. Idk what these modes do though.



5 hours ago, Kisai said:

which is something that only machine learning

No sane person would use Windows for that, and it's already a thing on Linux. As an example, Facebook already uses it between NICs and GPUs (since they load data from storage servers) for both training and inference.
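For reference, the Linux-side NVMe-to-GPU path looks roughly like this through NVIDIA's GPUDirect Storage (cuFile) API — a sketch with error handling omitted, assuming the file sits on a supported filesystem:

```cpp
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>

// DMA a file straight from the NVMe into GPU memory, no CPU bounce buffer.
void* ReadToGpu(const char* path, size_t size)
{
    cuFileDriverOpen();

    int fd = open(path, O_RDONLY | O_DIRECT);

    CUfileDescr_t descr{};
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    descr.handle.fd = fd;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    void* devPtr = nullptr;
    cudaMalloc(&devPtr, size);
    cuFileBufRegister(devPtr, size, 0);  // pin the GPU buffer for P2P DMA

    cuFileRead(handle, devPtr, size, /*file_offset=*/0, /*devPtr_offset=*/0);

    cuFileBufDeregister(devPtr);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return devPtr;  // caller frees with cudaFree()
}
```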

 

4 hours ago, Kisai said:

Machine learning possibly could take advantage of it, but that would depend on the characteristics of the machine learning.

As I said before, this is already being done.

 

4 hours ago, Kisai said:

It also doesn't do decompression in hardware; that's done on the CPU. So that remains a bottleneck.

It's not unusual for people to hand-roll their own GPU-based decompression. RTX IO was meant to be a "standard" for that.



Finally! Now for this to gain adoption across the board.



1 hour ago, igormp said:

RTX IO was meant to be a "standard" for that.

*becomes a big difference in performance per card*


If only there was some system with a unified memory architecture, this API wouldn't be needed…

 

 

…oh hey wait a minute!


GPU decompression being late looks just about right for the pieces to fall into place circa Q3-Q4, when PCIe Gen 5 SSDs launch.


42 minutes ago, Spindel said:

If only there was some system with a unified memory architecture, this API wouldn't be needed…

 

 

…oh hey wait a minute!

Except that doesn't have anything to do with this...

 

Unified memory support exists in current Windows and Linux APIs as well. Things don't "just work" just because you put memory on-package; Metal had to be updated to actually take advantage of that too. But these are still completely different things.

 

"Announcing a new fancy screwdriver"

"XYZ already has a hammer"
eh? 🙃


3 hours ago, igormp said:

No sane person would use Windows for that, and it's already a thing on Linux. As an example, Facebook already uses it between NICs and GPUs (since they load data from storage servers) for both training and inference.

Right. Yet Microsoft offers their own ML products because they want to play in the sandbox too.

 

3 hours ago, igormp said:

As I said before, this is already being done.

 

It's not unusual for people to hand-roll their own GPU-based decompression. RTX IO was meant to be a "standard" for that.

I think until there is a standard mechanism for parallel lossless compression in GPU hardware, it's probably a liability, considering that NVIDIA will also deprecate entire classes of (mobile) hardware a year in advance of doing the same with the desktop hardware.

 

One thing that annoys the ever-loving heck out of me is how often "progress" in the computer space trades crappy lossy compression for something lossless yet requiring more powerful hardware, and vice versa. That's how we went from 2 MB lossless Flash files to 300 MB lossily compressed 1080p videos. Sometimes the thing that already existed was the best way to do that thing, but then some trendy innovation starts the process over again. 3D models with lossless textures where it matters (e.g. eyes) vs hitting everything with the same ASTC, or worse, S3TC, just to meet a GPU memory target.

 

Anyway, my point before we derailed was that one potentially "could" use it for ML, but it seems like a rather stubborn reason to. It's a pain in the butt just to load TensorFlow on a Windows PC, because support that exists on Linux is missing on Windows. PyTorch works the same on Linux and Windows, but much of the stuff written on top? Forget about it. Maybe this can be revisited if GPUs ever implement some hardware-accelerated zlib/LZO/LZW/LZMA-family decompression engine. It's not like people haven't tried before.

 


8 hours ago, leadeater said:

Unreal has literally been updated to support this model.

Ok, but does this really only work with NVMe, not with normal/SATA SSDs?

 

I mean, on paper it looks good, but I'm wary about the actual implementations. If a game is designed around this tech, normal SSD or even spinning-rust users (gasp) will suffer for it, I'm afraid. You kinda can't have it both ways: either it's optimized for streaming assets or it isn't, and streaming assets just doesn't work well on regular hardware, so far. Unless this is also optimized for slower SSDs. Hence my initial question.

