
Samsung Preps PCIe 4.0 and 5.0 SSDs With 176-Layer V-NAND

Lightwreather
2 minutes ago, leadeater said:

No, not yet, but maybe in the semi-near future, so it's likely a good idea to invest in something NVMe now if you're buying. That said, since technology moves fast, getting a good SATA SSD now and dealing with needing or benefiting from NVMe later will probably see you better off in the long run, ending up with a larger-capacity and faster NVMe drive.

Had a thought specifically about MS Flt Sim 2000; might be stupid. Has anyone tested the game with a really, really fast storage drive? The things I remember being unusual about the game are its incredibly massive map libraries and people having a lot of trouble getting usable frame rates out of it. I'm wondering if the secret to a good frame rate is getting those maps out of storage fast enough.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


10 minutes ago, Bombastinator said:

Had a thought specifically about MS Flt Sim 2000; might be stupid. Has anyone tested the game with a really, really fast storage drive? The things I remember being unusual about the game are its incredibly massive map libraries and people having a lot of trouble getting usable frame rates out of it. I'm wondering if the secret to a good frame rate is getting those maps out of storage fast enough.

I have seen tests for that game but I think that was mostly game loading time tests. What you'd be wanting to look at is much harder to cleanly isolate and test.


3 hours ago, Kisai said:

NVMe is literally "raw PCIe"; you don't get any faster than that unless you put it in a DIMM RAM slot, and even that doesn't make it faster than PCIe.

It isn't raw PCIe. It is a protocol layer on top of PCIe, with all the stuff that comes with it. PCIe may be the eventual limit, but NVMe also adds its own requirements on top of that.

 

3 hours ago, Kisai said:

6.6GB/s is roughly the same or worse performance than NVMe on PCIe 4.0 (~2GB/s per lane), in a system where the memory bandwidth is 40GB/sec.

Like leadeater said, Optane's strength isn't sequential transfers, which you can improve easily in various ways if you need to. If there were big demand it could be addressed in future product versions, as things are not set in stone. Small-transfer reads are the strength; writes can get buffered and written in bulk. As an example, the 900p 280GB I have does over 250 MB/s in 4K Q1 reads. The best flash SSD I own, a 980 Pro 2TB, does 90 MB/s. Mid-range NVMe drives of the era were around 60 MB/s from memory. I think Optane could go even faster, as CPU speed was influencing the results. This also translates to lower latency, although I don't have specific measurements for it. I wonder if anyone has a similar measurement for Optane DIMMs?
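For anyone wanting to sanity-check low queue depth numbers like these, a rough way is to time single 4 KiB reads at random offsets, which is essentially what a 4K Q1 benchmark does. A minimal Python sketch, assuming a large pre-existing test file at a hypothetical path; note the OS page cache will inflate the result unless the file is much bigger than RAM or caches are dropped first:

```python
import os, random, time

PATH = "testfile.bin"   # hypothetical: any large file on the drive under test
BLOCK = 4096            # 4 KiB reads, one outstanding request at a time (Q1)
COUNT = 20_000

size = os.path.getsize(PATH)
fd = os.open(PATH, os.O_RDONLY)

start = time.perf_counter()
for _ in range(COUNT):
    # pick a 4 KiB aligned offset and issue a single synchronous read
    offset = random.randrange(0, size - BLOCK) & ~(BLOCK - 1)
    os.pread(fd, BLOCK, offset)
elapsed = time.perf_counter() - start
os.close(fd)

print(f"4K Q1 random read: {COUNT * BLOCK / elapsed / 1e6:.1f} MB/s, "
      f"{COUNT / elapsed:.0f} IOPS")
```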

 

3 hours ago, Kisai said:

Granted, I don't think there are any current retail NVMe products that actually hit 7.8GB/sec read performance. The WD Black SN850 hits 7.0GB/sec. The Sabrent Rocket 4 Plus hits 6.6GB/sec. Both are 96-layer TLC. The Samsung 980 Pro is 6.9GB/sec and apparently MLC.

The 980 Pro 2TB I have benches around 7GB/s reads, 5GB/s writes. It'll do. We have to be careful, as the raw data rate of PCIe doesn't all go to user data. There's likely a fair amount of metadata and/or signalling going on which will consume some of it. Think of it like raw network bandwidth vs what you can actually download once packet headers and error correction are taken into account.
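As a rough illustration of that raw-vs-usable gap, here's the back-of-envelope arithmetic for a PCIe 4.0 x4 link; the line rate and encoding are the published figures, while the per-packet overhead is an assumed example, not a measured one:

```python
# PCIe 4.0 x4: line rate vs what's left for user data.
lanes = 4
gt_per_s = 16.0              # PCIe 4.0 signalling rate per lane, GT/s
encoding = 128 / 130         # 128b/130b line encoding

raw_gbps = lanes * gt_per_s * encoding / 8            # GB/s after encoding only
print(f"raw link bandwidth: {raw_gbps:.2f} GB/s")      # ~7.88 GB/s

# Each transaction layer packet also carries headers/CRC alongside its payload.
# Assumed example: 256-byte payloads with ~24 bytes of packet overhead each.
payload, overhead = 256, 24
usable_gbps = raw_gbps * payload / (payload + overhead)
print(f"usable for data:    {usable_gbps:.2f} GB/s")   # ~7.2 GB/s, before NVMe's own costs
```

Which is roughly why ~7GB/s is where the fastest PCIe 4.0 x4 drives seem to top out.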

 

1 hour ago, Bombastinator said:

Have data rates higher than SATA 6Gb/s become functionally useful yet for gaming? For anything else not enterprise level? Last I looked one could get NVMe drives that could saturate PCIe 3.0 x4, but they were expensive. Has PCIe 4.0 x4 become saturatable by NVMe drives?

RTX IO, or whatever the generic implementation is called, is probably the most likely use case for a gamer. I don't know if any games are already programmed with it. It was a selling point of the PS5, I recall.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


8 minutes ago, comander said:

As an aside - I expect cost/capacity will improve relative to NAND (just in the long run) and CXL is a thing and that will help. 

I wonder, do Intel have a source of Optane at the moment? I think their supply agreement with Micron, who own the existing factory Intel's earlier products used, has ended. Intel have the right to manufacture it themselves, but I don't recall any reports that they have.

 

8 minutes ago, comander said:

I can totally see a nearish future where it's normal for systems to have an extra optane drive for data caching and RAM spillover (page file). Software needs to get better as well.  

Not sure about that myself. Flash is still good enough and cheap enough for the masses, and most aren't burning through RAM no matter how many Chrome tabs are open. Intel's attempt at Optane Memory cache modules was targeted more at HDD-containing systems. It seems too niche to do much with unless it's something radically different. Also, can you imagine AMD buying Optane off Intel? I don't think Micron ever productised it themselves, and they're the only other possible supplier.

 

I had wondered in the past if Optane could work in a low-cost system. This might sound counter-intuitive, but imagine an OS that only had one tier covering what we currently use for RAM + storage. Optane could do both. It would be worse than RAM, but then think about not differentiating stored code and code in memory. They are the same: store and execute directly. If you worry its lower-than-RAM performance would hurt things, that could be mitigated to a fair degree by generous caches.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


11 hours ago, comander said:

I have yet to fill the 118GB cache drive on my NAS. 

If you're in a single user environment it's fine. 

 

FWIW ZFS is set up to keep large contiguous files on the main storage and smaller blocks (small files, metadata, etc.) in RAM/cache. This means that latency/IO-sensitive operations mostly land in cache and bandwidth-intensive operations hit the storage array. If you have 4x HDDs @ 200MBps and they're not IO bound, you'll often find yourself hitting ~500-800MBps read speeds. 800MBps is faster than a SATA SSD. It definitely falls apart if you're writing a bunch of small files, though - but I don't have a ZIL SLOG (loosely speaking, a write buffer) installed on my NAS since I don't care that much.

Not if you play 2 or 3 new games at the same time and you don't want cache to constantly get flushed and repopulated.


4 hours ago, RejZoR said:

Not if you play 2 or 3 new games at the same time and you don't want cache to constantly get flushed and repopulated.

Just because a game is tens or hundreds of GB in size doesn't mean all of it is needed all the time. In practice you'll only need a small subset, such as the core game code plus whatever is needed for the current part of the game. How big that is might be harder to quantify, but where games have limited demo versions and/or standalone benchmarks, those tend to be in the low single-digit GB range. That could be indicative of how much active data might be in flight at a given time.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


16 minutes ago, porina said:

Just because a game is tens or hundreds of GB in size doesn't mean all of it is needed all the time. In practice you'll only need a small subset, such as the core game code plus whatever is needed for the current part of the game. How big that is might be harder to quantify, but where games have limited demo versions and/or standalone benchmarks, those tend to be in the low single-digit GB range. That could be indicative of how much active data might be in flight at a given time.

You mean it doesn't need to load all the massive textures, the part that makes all the games so big? It's also what will take you the longest to load.


7 minutes ago, RejZoR said:

You mean it doesn't need to load all the massive textures, the part that makes all the games so big? It's also what will take you the longest to load.

There are multiple levels; some will be in the open, some in closed corridors, some buildings appear only on some maps/levels... the game loads only the stuff that's on the level you load, or within the area you will see when you spawn, and then as you get close to stuff that's not yet loaded, resources are "streamed" in.

 

Also, games will tend to store textures in the "ultra quality" or "very high" quality format, and they'll convert them down to lower-quality textures if your quality settings are set to high, medium or low... for example, a texture of a billboard on the street or the face of a building may be 4096x2048 pixels and use 5-10 MB, but if you configure medium quality, the game may read those 5-10 MB of texture from the level/textures file, resize it to 1024x512, shrinking it down to maybe 1 MB, and load only that 1 MB into the video card. This way, more textures can fit in 4 GB of VRAM.
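As a toy illustration of that downscale-on-load idea, here's a sketch using the third-party Pillow library and a made-up quality table; real engines work with GPU-friendly compressed formats (BCn/DXT) rather than PNG/JPEG, so treat this as the concept only:

```python
from PIL import Image  # pip install Pillow

# Ship one "ultra" texture on disk, derive the lower tiers at load time.
QUALITY_DIVISOR = {"ultra": 1, "high": 2, "medium": 4, "low": 8}

def load_texture(path: str, quality: str) -> Image.Image:
    tex = Image.open(path)                    # e.g. a 4096x2048 source texture
    d = QUALITY_DIVISOR[quality]
    if d > 1:
        # 4096x2048 -> 1024x512 for "medium": ~1/16th the pixels to keep in VRAM
        tex = tex.resize((tex.width // d, tex.height // d), Image.LANCZOS)
    return tex                                # what would then be uploaded to the GPU

# load_texture("billboard.png", "medium")
```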

 

It's not, really... most game developers already try to arrange the textures, sounds and all that within the game files so that they're read sequentially, to reduce the need for mechanical drives to seek for assets (textures, sound effects, etc.) within those big game files. Usually the game can seek once within a file and then sequentially read all the textures needed, with maybe a few jumps known in advance from reading an index or a header for that file.
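A minimal sketch of that "seek once to an index, then read mostly sequentially" layout; the pack format here is invented for illustration, not any engine's real format:

```python
import json, struct

# Hypothetical pack layout: [8-byte index length][JSON index][asset data in load order]
def read_level_assets(pack_path: str, wanted: list[str]) -> dict[str, bytes]:
    assets = {}
    with open(pack_path, "rb") as f:
        (index_len,) = struct.unpack("<Q", f.read(8))
        index = json.loads(f.read(index_len))   # {"name": {"offset": ..., "size": ...}}
        # Read in offset order so a mechanical drive moves mostly forward, few seeks.
        for name in sorted(wanted, key=lambda n: index[n]["offset"]):
            f.seek(8 + index_len + index[name]["offset"])
            assets[name] = f.read(index[name]["size"])
    return assets
```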

It's the conversion of textures, compiling of shaders and loading of stuff into the video card that can take time... and some of these things have to be done differently depending on the video card - NVIDIA cards like textures in some formats, AMD cards may prefer different texture formats, or some formats work better with some shaders or effects (e.g. Borderlands' cel shading may work better if the other game assets are presented in a specific way)... or some textures may need additional processing or color correction if the user disables or reduces shaders...

SSDs can get you 3-4 GB/s read speeds... a game loads maybe 2-4 GB of content during a level load (some load more)... if raw speed were the issue, the game would load in 1 second on an SSD but 20-30 seconds from a mechanical drive.

 


18 hours ago, Bombastinator said:

Have data rates higher than SATA 6Gb/s become functionally useful yet for gaming?

Ummmm. Depends. Killing Floor 2 loads way faster on an NVMe drive vs even a SATA-based SSD. That's the only benefit I have seen.

I just want to sit back and watch the world burn. 


1 hour ago, RejZoR said:

You mean it doesn't need to load all the massive textures, the part that makes all the games so big? It's also what will take you the longest to load.

Think of it as load on demand. I don't know how big a big game is these days, but say you have a 100GB game. Do you need all of it at once? Big no. If you were to read all of it, where would it go? Last time I looked, most people typically have 16GB of RAM or less in a gaming system.

 

1 hour ago, mariushm said:

It's not really... most game developers already try to arrange the textures and sounds and all that within the game files so that they're read sequentially to reduce the need for mechanical drives to seek assets (textures, sound effects etc) within those big game files. Usually the game can seek once within a file and then sequentially read all the textures needed with maybe a few jumps known in advance by reading some index or a header for that file. 

Don't know how true it is, but it was said "made for HDD" games would also duplicate assets so that a level's data can load together and not require a lot of random seeks. If games moved to "made for SSD", where the seek cost is much smaller, that could result in some space savings.

 

1 hour ago, mariushm said:

It's conversions of textures, compiling shaders, loading stuff into the video card that can take time

This is kinda annoying, as it was designed on the assumption that the HDD is slow and decompression is less slow, so spending CPU time on decompression saves HDD loading time. Problem is, remove the HDD bottleneck and you're left with the CPU choking. Again, if games are designed for SSDs, I hope that could be significantly improved. For example, use a faster compression method even if it uses more space.
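To put a rough number on that trade, here's a stdlib-only comparison of a dense-but-slow codec (LZMA) against a lighter, faster one (zlib at its lowest level) on synthetic data; the exact ratios are illustrative only:

```python
import lzma, zlib, time, random

random.seed(0)
# Synthetic "asset" data: repetitive enough to compress, big enough to time.
data = bytes(random.choice(b"ABCDEFGH") for _ in range(8_000_000))

small = lzma.compress(data)           # denser archive, slower to unpack
fast = zlib.compress(data, level=1)   # bigger archive, quicker to unpack

for name, blob, decomp in (("lzma", small, lzma.decompress),
                           ("zlib-1", fast, zlib.decompress)):
    t0 = time.perf_counter()
    decomp(blob)
    dt = time.perf_counter() - t0
    print(f"{name:7s} size={len(blob) / 1e6:5.2f} MB  decompress={dt * 1000:6.1f} ms")
```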

 

I don't know how much difference it makes in practice. If it's big enough, I'd love it if game devs offered a tool to repack the stored files depending on your storage medium. That would only be a stepping stone on the way to assuming everyone is on an SSD, though.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, porina said:

This is kinda annoying, as it was designed on the assumption that the HDD is slow and decompression is less slow, so spending CPU time on decompression saves HDD loading time. Problem is, remove the HDD bottleneck and you're left with the CPU choking.

^THIS. So this!
 

I've played several titles where I witnessed this with dual screens. On one I left Task Manager open; on the other, the game was loading. There's that initial burst of reading from the SSD, then it drops to idle while one CPU core pegs to 100% (a fraction of the overall CPU). While the CPU is pegged on that one core, the other screen is still sitting at the game's loading phase.

 

Aside from the whole question of "why can't the CPU do multi-threaded decompression", it's proof positive that many games are CPU bound on the decompression phase of loading.

I'm not a game dev, so I don't know why things are done the way they are. But I'm also aware that compressing images that are already compressed doesn't save much extra space, and only serves to burn more CPU cycles on decompression. So, maybe they already store game asset data in a hybrid mode where textures aren't "double-wrapped" in compression (image file itself, then in the BLOB file database too). Or maybe they're just lazy and ball it all into a compressed archive??


1 hour ago, porina said:

Think of it as load on demand. I don't know how big a big game is these days, but say you have a 100GB game. Do you need all of it at once? Big no. If you were to read all of it, where would it go? Last time I looked, most people typically have 16GB of RAM or less in a gaming system.

 

Don't know how true it is, but it was said "made for HDD" games would also duplicate assets so that a level's data can load together and not require a lot of random seeks. If games moved to "made for SSD", where the seek cost is much smaller, that could result in some space savings.

 

This is kinda annoying, as it was designed on the assumption that the HDD is slow and decompression is less slow, so spending CPU time on decompression saves HDD loading time. Problem is, remove the HDD bottleneck and you're left with the CPU choking. Again, if games are designed for SSDs, I hope that could be significantly improved. For example, use a faster compression method even if it uses more space.

 

I don't know how much difference it makes in practice. If it's big enough, I'd love it if game devs offered a tool to repack the stored files depending on your storage medium. That would only be a stepping stone on the way to assuming everyone is on an SSD, though.

I've been using hybrid storage since way before it was cool, before it stopped being cool, before large SSDs were cheap. I have A LOT of experience with it, and the cache can get populated very quickly with modern games. Like I said, you don't want it constantly flushing data from the cache and replacing it with new data because of capacity limits. That defeats the purpose of a cache; data should really get flushed because it's no longer being used, not because of space restrictions. Meaning it gets flushed when you're not really using those apps and games anymore. If it replaces relevant data, you'll be pretty hurt performance-wise.

 

As for loading, I disagree. The load time difference can be huge, and it'll often also eliminate hitching and stuttering as well as other issues.


1 hour ago, StDragon said:

^THIS. So this!
 

I've played several titles where I witnessed this with dual screens. On one I left Task Manager open; on the other, the game was loading. There's that initial burst of reading from the SSD, then it drops to idle while one CPU core pegs to 100% (a fraction of the overall CPU). While the CPU is pegged on that one core, the other screen is still sitting at the game's loading phase.

Yet, how much time did it spend reading? Given NVMe transfer rates, reading an entire game takes almost no time, short of some very big ones; even then, assuming a game were 100GB, that is going to be around 20 seconds if it read the entire thing.

 

Quote

Aside from the whole question of "why can't the CPU do multi-threaded decompression", it's proof positive that many games are CPU bound on the decompression phase of loading.

 

Because multi-threaded lossless compression and decompression is impossible. You either lose a share of the compression (e.g. half for 2 cores or 75% for 4 cores) if you divide the workload, and lose any compression between those workloads, or you go with a lossy compression which can be decoded by the GPU, which is often just throwing away the lowest-precision bits (which makes things muddy, rather than blurry).

 

Textures can be lossless or lossy, depending on how the user is intended to see them, but you aren't going to run lossy compression on the level or model geometry. Likewise, in the vast majority of games with voice acting, the voice acting can be as much as half the game, and many games pick a lossy audio format just because the lossy artifacts aren't really perceptible when there are other background noises in the game. Yet, in order to play those, they have to be converted back to PCM. No sound card/chip actually decodes AAC, Opus, MP3, or Ogg Vorbis.

 

What you are actually seeing is not just the decompression, but the indexing/archiving/decryption. A large game will ship with a 4GB file for the base game, additional 4GB files for later levels, and small patches that "replace" files in the base game (e.g. DLC, bug fixes, etc.), so determining which version of a file to load has to be done at load time.
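The "which version wins" part can be as simple as mounting archives in order and letting the last one that provides a name take precedence. A hypothetical sketch, with invented archive names and the listing function left as a parameter:

```python
# Archives from base game to newest patch; later entries override earlier ones.
MOUNT_ORDER = ["base.pak", "levels_2.pak", "dlc_1.pak", "patch_1_02.pak"]

def build_file_table(list_archive_contents) -> dict[str, str]:
    """Map each asset path to the archive whose copy should actually be loaded."""
    table: dict[str, str] = {}
    for archive in MOUNT_ORDER:                 # walk in mount order...
        for asset_path in list_archive_contents(archive):
            table[asset_path] = archive         # ...so later archives win
    return table

# build_file_table(my_listing_func)["textures/face_01.dds"] -> "patch_1_02.pak"
# if the patch replaced that texture, otherwise whichever archive shipped it.
```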

Quote


I'm not a game dev, so I don't know why things are done the way they are. But I'm also aware that compressing images that are already compressed doesn't save much extra space, and only serves to burn more CPU cycles on decompression. So, maybe they already store game asset data in a hybrid mode where textures aren't "double-wrapped" in compression (image file itself, then in the BLOB file database too). Or maybe they're just lazy and ball it all into a compressed archive??

Because then people would rip/strip/data-mine the game easily. If you remember how pirates operate, they basically strip protection mechanisms from the game binary and then re-pack games using more obscure, aggressive compression schemes. Back in the '90s they would go so far as to strip audio from games to reduce CD-ROM-sized games down to 1/4 of their size.

 

Today, game modders will unlock DLC without paying for it, or mod textures/models to be... less appropriate for the rating the game shipped with. Heck, there are R-18 games on Steam that have been "stripped"/censored by the developer, but the developer offers patches on their official website to turn them back into the R-18 version, which goes back to that statement I made earlier about "which version" of the data archive files to use. The shipped version might have all the R-18 content "skipped" over in the game code, but the assets themselves are only blanked out. The patch "replaces" those blank images.

 

You also have regional censorship, like in Australia, Germany or Japan, where certain rules exist regarding blood that don't exist in American content. So some modders will take the least-censored version of each part and make a patch to un-censor the other region's game.

 

But it's up to the game developer to make it so the game shipped can't be "uncensored" by simply deleting a "censor" patch archive.

 

One game I know of basically has an "index" file that goes along with the archive file, so that the game can seek directly to those points in the archive, but the archive itself is actually not encrypted or compressed.


14 minutes ago, AndreiArgeanu said:

I don't think the Xbox 360 version of a certain game was considered inferior to a comparable PS3 title on Blu-ray disc mainly because of the actual hardware. Because the PS3 was so difficult to develop for, most third-party games, especially those released in the first half of the generation, either ran better or looked better on the Xbox 360 most of the time.

Final Fantasy XIII and Portal 2 (PS3 better), vs Mass Effect 3 and GTA IV being inferior on the PS3.

 

Ultimately it likely came down to the requirement to play the games from the disc. I'm sure there are certain games which are essentially identical on every version because they aren't handicapped by load time or memory. If you have to keep loading models and textures from the disc, then it's going to result in a lot of pop-in effects like you'd see in an over-crowded MMORPG.

 

The thing is, a lot of games of that generation designed "levels" as cubes, and thus the entire level is in memory, and usually partitioned by a door somewhere to load the next area. What we're supposed to see are seamless worlds (which is something that GTA has always managed, but at the expense of actual scale.)

 

Then you have things like Minecraft, where the world is procedurally generated as you go, so the further you go from the seed point, the bigger the game (on disk) gets. Eventually you will hit a point where traversing the world becomes difficult because much of the world now exists on disk, and you run into limitations imposed by that. Assuming you could hit every corner of the Minecraft world, the server would need 409 petabytes. The client may only ever have a few dozen chunks of the world loaded, but the server needs to maintain the state of the entire game. Presumably it has to maintain a copy of everything all connected clients are seeing. I can't imagine there is a logical way to organize a Minecraft world on disk to keep it fast on a mechanical disk, but an SSD doesn't have that problem.
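For what it's worth, chunked worlds usually get grouped into region files so locating a chunk is pure arithmetic; as I understand it, Minecraft's region format keeps a 32x32 block of chunks per file. A rough sketch of just the file-naming part:

```python
def region_file_for_chunk(chunk_x: int, chunk_z: int) -> str:
    # Each region file covers a 32x32 grid of chunks; floor division handles negatives.
    return f"r.{chunk_x // 32}.{chunk_z // 32}.mca"

# A client exploring around chunk (-70, 1030) only touches this one file:
# region_file_for_chunk(-70, 1030) -> "r.-3.32.mca"
```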


33 minutes ago, Kisai said:

The thing is, a lot of games of that generation designed "levels" as cubes, and thus the entire level is in memory, and usually partitioned by a door somewhere to load the next area. What we're supposed to see are seamless worlds (which is something that GTA has always managed, but at the expense of actual scale.)

Handled by LOD. It's never been a problem for the racing games I've played, Burnout Paradise being an example. And the amount of area traversal that occurs due to the nature of the sport is impressive; lots of unique buildings, bridges, roads, landmarks, etc.


1 minute ago, comander said:

3 games is not going to get the cache flushed. Large texture files are not going to land in the cache. Information about where they're stored probably will. 

 

I want to emphasize the paradigm: IOPS-heavy stuff goes in the cache, streaming stuff goes to the array.

 

Fair warning, my context is comparing RAM hit rates on a ZFS NAS. I REALLY went out of my way to make cache hits happen.

Not what I experienced with PrimoCache. It'll cache anything that gets accessed repeatedly. What kind of magic sauce they are using is unknown, but I've seen the cache get filled pretty nicely, and I wasn't even playing the CoDs, which are massive.


2 minutes ago, comander said:

Filled cache is good.

What you SHOULD be concerned with though isn't that the cache is full. It's the cache hit rates. 

 

If you're getting a cache hit rate of 50% you'll be doing half the work on the cache and half on the disk/array. Ideally both are doing what they're good at and you'll get MORE than 2x the performance of either part on its own.

 

In ZFS land I'm usually seeing 50% on newly opened stuff (mostly metadata) and 70-100% on the stuff that I use a ton. 
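The hit-rate point is easy to quantify with the usual effective-access-time formula; the latency numbers below are illustrative placeholders, not measurements:

```python
def effective_latency(hit_rate: float, cache_us: float, backing_us: float) -> float:
    """Average access time for a given cache hit rate (times in microseconds)."""
    return hit_rate * cache_us + (1 - hit_rate) * backing_us

hdd_only = effective_latency(0.0, 100, 8000)     # ~8 ms seek-bound HDD, no cache
for h in (0.5, 0.75, 0.9):
    t = effective_latency(h, 100, 8000)          # assume ~100 us for an SSD cache hit
    print(f"hit rate {h:.0%}: {t:6.0f} us average, {hdd_only / t:.1f}x faster than HDD alone")
```

Even 50% roughly halves the average access time; the jump from 75% to 100% matters far less than getting off zero.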

I had 75-80% cache hit ratio.


19 minutes ago, RejZoR said:

I had 75-80% cache hit ratio.

Just out of curiosity, how much RAM did you allocate to PrimoCache? Whereabouts did you see diminishing returns as you allocated more (if you made such changes)?


9 minutes ago, StDragon said:

Just out of curiosity, how much RAM did you allocate to PrimoCache? Whereabouts did you see diminishing returns as you've allocated more (if applicable changes were made).

Zero. I had only an L2 SSD cache to accelerate the WD Caviar Black 2TB HDD that I had back then.


 

16 hours ago, Kisai said:

Because multi-threaded lossless compression and decompression is impossible. You either lose a share of the compression (e.g. half for 2 cores or 75% for 4 cores) if you divide the workload, and lose any compression between those workloads, or you go with a lossy compression which can be decoded by the GPU, which is often just throwing away the lowest-precision bits (which makes things muddy, rather than blurry).

I guess I don't understand the above so let's break it down.

 

Let's focus purely on decompression for now, since that's what a gamer would hit most of the time. I'm not sure, would you ever need to recompress the files? Maybe for a patch for example. Even if so, I'd argue that's less performance critical than when in game.

 

So why can't you use multiple threads for decompression? If there is something about data dependency that means it can't work across threads efficiently, then the obvious solution is that a single thread deals with a single file, and you have multiple threads, each working independently on different files. Presumably there will have to be some constraint as you hit limits elsewhere. The biggest relative gains will be at the start anyway: going from 1 thread to 2 will up to double your rate, halving the time. More threads will continue to help, but the impact each makes on the overall time gets smaller.
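A minimal sketch of that one-file-per-thread idea using stdlib zlib; in CPython, zlib releases the GIL while it works, so threads can genuinely overlap here (file names are placeholders):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def load_compressed_asset(path: Path) -> bytes:
    # One worker handles one whole file: read it, inflate it, hand back the bytes.
    return zlib.decompress(path.read_bytes())

def load_all(paths: list[Path], workers: int = 4) -> dict[Path, bytes]:
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(paths, pool.map(load_compressed_asset, paths)))

# assets = load_all([Path("textures.z"), Path("audio.z"), Path("models.z")])
```

Going from one worker to two can approach halving the decompression stage; past that you start hitting whatever else is serialised (GPU uploads, single-threaded engine init, and so on).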

 

16 hours ago, Kisai said:

What you are actually seeing is not just the decompression, but the index/archiving/decryption.

I kinda considered that bundled together since for a regular user they're not separable anyway.

 

14 hours ago, RejZoR said:

I had 75-80% cache hit ratio.

Doesn't sound bad to me. Don't let perfect get in the way of good enough. Cache is about improving performance. Sure, 100% hit rate might be perfect, but any amount above zero is giving you a benefit.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


@porina

A 75% cache hit rate is amazing. This means 75% of all requests were served from the dramatically faster SSD cache and the remaining 25% were fetched from the slower HDD, most likely first-time accesses where the data had to be read from the HDD.

 

It was also very noticeable acoustically. On the first system boot, the HDD made a lot of grinding noise and took some time to boot. After 2 restarts, the system booted almost as fast as on a pure SSD and was dead silent, no grinding at all. Need for Speed 2015 (the reboot) had an issue where HDDs loaded maps too slowly on the fly and people were falling through the ground because of it. I never had such issues with hybrid storage because it was basically as fast as an SSD. And in online games, I nearly always joined first or among the first on the server. But that was when SSDs were stupid expensive and didn't come in large capacities. A few years later I bought the Sammy 850 Pro 2TB for a then-ridiculous 850€, and while stupid expensive, it was one of my best long-term investments. It's been about 6 years since then, I still have this 2TB beast and it's still serving me great. It's lost 80% of its value now so selling it off is pointless, but it's still an SSD and 2TB is a very decent capacity. It being SATA isn't really an issue for now, so it's still good.

 

I'll go M.2 NVMe when DirectStorage actually gets used, and just use this one as an extra storage drive or something.

