
Kioxia's PCIe 5.0 SSD Hits 14,000 MBps

Alder Lake
47 minutes ago, porina said:

You could say the same for transfer rate too

heh true

 

47 minutes ago, porina said:

If access latency remains constant, a faster transfer rate would mean the transfer part completes sooner.

Not sure I understand this? If the latency stayed the same and the block size stayed the same, the transfer time would be unchanged; otherwise it's not consistent block size for block size.

 

Do you mean if the latency did not increase as the block size increased? That would be a benefit, yeah, however larger block sizes aren't where NAND/flash struggles or where Optane, PMem etc. excel.

 

  • Latency is the time taken to process a single storage transaction at a given block size.
  • IOPS is the aggregate number of transactions completed in one second for a given block size (or mix of block sizes).
  • Bandwidth is the total amount of data moved in one second, i.e. the sum of every transaction's block size within that second. Actually totaling every transaction would be prohibitively resource intensive, so in practice the total amount of data is sampled and the rate derived from the sample length.

 

Either way, latency or block size has to change for IOPS or bandwidth to change; if you want a faster storage device then you want to reduce the latency at the same block sizes.
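The relationships above can be sketched as simple arithmetic; all the latency and block size figures here are made up purely for illustration:

```python
# Illustrative model of the latency / IOPS / bandwidth relationship
# described above. All numbers are hypothetical.

def iops(latency_s: float) -> float:
    # At QD1 each transaction takes latency_s, so IOPS is its reciprocal.
    return 1.0 / latency_s

def bandwidth_mbps(latency_s: float, block_size_kb: float) -> float:
    # Bandwidth is just IOPS multiplied by the block size.
    return iops(latency_s) * block_size_kb / 1024.0

# Same 100 microsecond latency at two block sizes: only the block size
# changed, and bandwidth changed with it, exactly as stated above.
print(bandwidth_mbps(100e-6, 4))    # 4 KiB blocks   -> ~39 MB/s
print(bandwidth_mbps(100e-6, 128))  # 128 KiB blocks -> ~1250 MB/s
```

Halve the latency at the same block size and both IOPS and bandwidth double, which is why lower latency at a fixed block size is the real win.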

 

And as it stands, for home usage Optane isn't much faster than current NAND, and barely faster than Z-NAND.

[Chart: write performance by queue depth, Optane vs Samsung Z-NAND vs NAND SSDs]

 

QD1 and QD2 are pretty much all anyone should care about on their gaming PC/laptop. The Samsung Z-NAND and Toshiba NAND based SSDs are equally as good as Optane at these low queue depths; there's no reason to shell out tons of money on Optane when NAND can offer equivalent performance for the same/similar workload.

 

But that's for writes, Optane's best case, so let's look at read performance.

[Chart: read performance by queue depth, Optane vs Samsung Z-NAND vs NAND SSDs]

 

Looks like Z-NAND can give Optane a very good run for its money here. NAND struggles a bit more here because DRAM caching isn't as effective for reads as it is for writes.

 

Z-NAND is based on V-NAND but operates as SLC. Thing is, if regular SLC SSDs are too expensive, then an even more expensive variant of SLC isn't going to be any more attractive to the budget-conscious buyer.

 

What the above graphs don't show is consistency of latency over time during a workload, which is where cheap consumer SSDs do terribly. They burst very well, and for the most part that is all that is ever required. But start re-downloading your entire Steam library after a reinstall on an actually fast internet connection and the SSD will slow down, which is a tad amusing really.

 

P.S. Every time I go to use the word excel my brain BSODs and goes "that's not a word, that's an application!" Oddly annoying lol.


17 minutes ago, leadeater said:

And as it stands, for home usage Optane isn't much faster than current NAND, and barely faster than Z-NAND.

good to know we're not getting screwed as badly as I thought

░█▀▀█ ▒█░░░ ▒█▀▀▄ ▒█▀▀▀ ▒█▀▀█   ▒█░░░ ░█▀▀█ ▒█░▄▀ ▒█▀▀▀ 
▒█▄▄█ ▒█░░░ ▒█░▒█ ▒█▀▀▀ ▒█▄▄▀   ▒█░░░ ▒█▄▄█ ▒█▀▄░ ▒█▀▀▀ 
▒█░▒█ ▒█▄▄█ ▒█▄▄▀ ▒█▄▄▄ ▒█░▒█   ▒█▄▄█ ▒█░▒█ ▒█░▒█ ▒█▄▄▄


30 minutes ago, leadeater said:

Not sure I understand this? If latency stayed the same and same with the block size the transfer time would be unchanged, otherwise it's not block size for block size consistent.

The way I'm looking at this is to split the performance into two parts:

A, the time it takes to start transferring data. I don't know if there is a proper term for it, so I called it access time.

B, the time it takes to actually transfer the data when it is flowing. This is related to the max transfer rate.

 

A practical transfer would consider both A and B for a total transfer time, and presumably that is what you're thinking of. Whether latency or bandwidth helps more depends on which one is dominant. For big transfers you care more about bandwidth; for small transfers, the latency would dominate.

 

So my earlier statement was saying if A was constant, you can still gain by improving B.
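The A + B split can be written as total time = access time + size / transfer rate; a quick sketch with hypothetical numbers shows which part dominates at each end:

```python
# porina's A + B model: a fixed access time plus the streaming part.
# All figures are illustrative, not measurements of any real drive.

def transfer_time_s(size_mb: float, access_s: float, rate_mbps: float) -> float:
    # Total time = A (access) + B (size divided by transfer rate).
    return access_s + size_mb / rate_mbps

# 4 KiB read: the access time utterly dominates.
small = transfer_time_s(4 / 1024, access_s=100e-6, rate_mbps=7000)
# 1 GiB read: the transfer rate dominates instead.
big = transfer_time_s(1024, access_s=100e-6, rate_mbps=7000)
print(small, big)
```

With A held constant, raising the transfer rate shrinks the big transfer a lot and the small one barely at all, which is the point being made.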

 

30 minutes ago, leadeater said:

But that's for writes, Optanes best case, so lets look at read performance

 

Looks like Z-NAND is/can give Optane a very good run for it's money here. NAND struggles a bit more here due to DRAM not being as effective as it is for writes.

Reads are Optane's strength over flash. This is why I bought a 900p to play with. I can only speak from my own experience, but using the CrystalDiskMark 4k QD1 random read test, an average SSD of the era when Optane was introduced was 50-60MB/s. The best flash SSD I've owned was, I think, hitting around 80MB/s. An unoptimised system with Optane was easily moving over 250MB/s. I say unoptimised, because Optane is so fast the CPU becomes a significant performance factor. I don't recall the exact numbers but they'll be on a thread somewhere on this forum. I saw a significant gain from overclocking a CPU from stock 4 GHz to 5 GHz, for example, and you can see a drop in performance if you use chipset-connected PCIe lanes as opposed to CPU lanes.
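For scale, MB/s figures like those convert to 4 KiB IOPS roughly as below; the drive numbers are the ones quoted above, and the conversion assumes decimal megabytes:

```python
# Convert 4 KiB random-read throughput (MB/s) to IOPS.
# Assumes decimal MB (1e6 bytes) and 4096-byte operations.

def mbps_to_4k_iops(mbps: float) -> float:
    return mbps * 1e6 / 4096

print(mbps_to_4k_iops(60))   # average SSD of the era: ~15k IOPS
print(mbps_to_4k_iops(250))  # Optane, unoptimised:    ~61k IOPS
```

At roughly 61k QD1 operations per second the per-I/O budget is around 16 microseconds, which is why CPU clock speed and chipset-vs-CPU PCIe lanes start to show up in the results.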

 

However despite all that, I can't say I feel the difference in practice for normal consumer use. The Optane 900p is still the boot drive of my Coffee Lake gaming system which I'm looking to retire. I can feel a difference between a cheap flash SSD and a mid range flash SSD, but beyond that it is "good enough" for general use.

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


15 minutes ago, porina said:

However despite all that, I can't say I feel the difference in practice for normal consumer use. The Optane 900p is still the boot drive of my Coffee Lake gaming system which I'm looking to retire. I can feel a difference between a cheap flash SSD and a mid range flash SSD, but beyond that it is "good enough" for general use.

that's kind of how tech is, though it makes me question, why are you upgrading anyway?



1 hour ago, leadeater said:

They burst very well and for the most part this is all that is ever required. Start re-downloading your entire Steam library because you did a reinstall and you have actually fast internet, the SSD will actually slow down which is a tad amusing really.

Oh no! 🙄 /sarcasm

Be a crying shame if I could only write back at 1 to 1.5GB/s after the SLC cache fills up. What am I going to do with all that time on my hands? 🤣

 


1 hour ago, Alder Lake said:

that's kind of how tech is, though it makes me question, why are you upgrading anyway?

The system as it was I'd only consider "good" tier for forward-looking gaming. I've since moved to the newer system in my sig, and haven't got around to clearing out the old one before selling it as parts. I'm keeping the 900p as it remains the most responsive SSD I own to this day, and I may use it for experiments. I decided a 980 Pro was good enough for the new system: cheaper, and much higher capacity.



4 hours ago, Alder Lake said:

there is one thing, what if RAM replaces nvme drives instead?

 

 

at the end of this generation (DDR5), DIMMs will have as much as 128GB (I think), so with 4 slots, you can have half a terabyte of RAM in your system, and next GEN that might double (or more) again

 

it's almost like a battle of, "will RAM have enough storage to act as a storage, or will NVME drives have enough speed to act as RAM?"

By then we will see game/program sizes jump up again and we are back to square one.

3 hours ago, Alder Lake said:

watch DDR6 have a solution for this, something like "semi-random-access-memory" SRAM (unless that's already taken), where it would have 128gb per stick, and would be somewhere between the speeds of RAM and NVME, and stay even after a complete shutdown, that way things are still snappy on startup

You kinda just described Optane. SRAM is the basis for all current CPU caches, registers, and basically anything that needs to store data at the same frequency the CPU runs at.

1 hour ago, porina said:

However despite all that, I can't say I feel the difference in practice for normal consumer use.

 

Problem is, there is a good chance the reason you don't see a big difference is that nothing is coded to expect a drive that fast, so many assumptions start to fail and performance is lost; sometimes bugs even appear.

 

Imo we are going to a place where we have:

On die: L1, L2, L3 caches

Over the die: more L3

Same substrate: HBM-like memory serving as RAM, but slightly faster

DDRx: extra RAM for expansion without changing your CPU

CXL: Optane and maybe extra RAM, which could eventually be used to keep a copy of the whole system state, allowing nearly instantaneous shutdowns and startups

SSD: semi-fast longer-term storage

HDD: good old spinning mirrors (sounds fancier than spinning rust and seems more accurate), probably with two independent R/W heads, and maybe even independent arm movement per platter (independent arms stacked on top of each other, one per platter), to store all our "research", whatever it might be

 


4 hours ago, porina said:

So my earlier statement was saying if A was constant, you can still gain by improving B.

That's still the product of the latency of each transaction at the given block size. No matter what you are doing it's transactional; it's just a question of how large those transactions are. For example, MSSQL always writes data in 64KB chunks, whereas a filesystem transaction could be much larger. That's why you want to align your filesystem allocation size to your most common write size, so the data doesn't have to be re-allocated, which increases end-to-end latency. Say your writes are 1MB and your filesystem allocation size is 4KB (the NTFS default, btw): each write is cached into system memory, then the 1MB is split into 4KB allocations and actually written. The sector size of the physical media, 512B or 4KB, matters further still, and since most media nowadays is 4KB, a 4KB allocation size is a really good default; if you send 64KB allocations of data to the physical medium, its controller has to cache it to memory, split it, and actually write it.
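The splitting cost described above is simple arithmetic; the sizes here are illustrative:

```python
# How many device-level operations a single logical write turns into
# when it is split down to the allocation unit. Sizes are illustrative.
import math

def device_ops(write_kb: int, alloc_kb: int) -> int:
    # One operation per allocation unit the write spans.
    return math.ceil(write_kb / alloc_kb)

# A 1 MiB write against NTFS's default 4 KiB allocation size:
print(device_ops(1024, 4))     # 256 separate 4 KiB writes
# The same write with a 1 MiB allocation size:
print(device_ops(1024, 1024))  # 1 write
```

Each of those smaller operations carries its own per-transaction latency, which is where the end-to-end cost comes from.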

 

All these inefficiencies are usually masked by queue depths higher than 1, which is why a lot of bandwidth-focused benchmarks set the QD unrealistically high: there is never any idle time while the above happens because there is always an in-flight I/O.
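That masking is essentially Little's law: sustained IOPS is roughly queue depth divided by per-I/O latency, so benchmarks raise the QD until latency stops being visible. A sketch with made-up numbers:

```python
# Little's law applied to storage: with queue_depth requests always in
# flight, completions per second scale with queue depth until the device
# saturates. The latency figure is hypothetical.

def iops_at_qd(queue_depth: int, latency_s: float) -> float:
    return queue_depth / latency_s

lat = 100e-6  # 100 us per I/O
print(iops_at_qd(1, lat))   # QD1:  ~10,000 IOPS
print(iops_at_qd(32, lat))  # QD32: ~320,000 IOPS, if the device keeps up
```

At QD1 every microsecond of latency shows directly in the result; at QD32 the same device can post headline numbers while individual I/Os are no faster.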

 

It's actually an interesting thing to play around with and measure the differences using Iometer, though nowadays it largely doesn't matter because SSDs are so fast. It did matter back when HDD RAIDs were the go-to thing, as you also had to factor stripe size and stripe width into the equation.

 

4 hours ago, porina said:

Reads are Optane's strength over flash

You should look at the read graph again. QD1, QD2 and QD4 (less so, but rather close) are effectively the same between Z-NAND (flash) and Optane, then Optane pushes ahead at QD8 and QD16, and finally Z-NAND overtakes for anything above QD16.

 

It's actually writes where Optane dominates; anything above QD2 and it's WAY in the lead, it's not even a competition.

 

I'd like to say we'd get a decent benefit when Z-NAND comes to the consumer market, but I honestly just don't think it will; being SLC makes it highly unattractive to the buyer and to Samsung.


13 minutes ago, leadeater said:

You should look at the Read graph again. QD1, QD2 and QD4(less so but rather close) are effectively the same between Z-NAND (Flash) and Optane then Optane pushes ahead in QD8 and QD16 and then finally Z-NAND overtakes for anything above QD16.

 

It's actually writes where Optane dominates, anything above QD2 and it's WAY in the lead, it's not even a competition.

I do not recognise those results. My focus was primarily on QD1 reads. Different methodologies and scenarios will lead to differing results. My understanding is based on reviews I saw around the time the 900p was released, as well as personal testing once I got one myself, which agreed with what I had seen. I have to admit I didn't pay particular attention to my own write tests, since it was not a critical performance area for me. Due to the cached nature of writes, even flash doesn't totally suck at them: small random writes get batched together and become more sequential-like than they otherwise would be.

 

I found my old post, and unfortunately it appears I never went back to fill in the details, and the chances of me finding them now are practically zero. Still, it gives a taste of how it performed in practice at the time. There is a link to a white paper on the system configuration for testing Optane, which is where I got the idea to overclock the CPU to see how performance changes.

 

 

A source I likely would have used then was AnandTech: see how reads destroy flash in QD1 random 4k reads, whereas writes are also good but less so.

[Chart: AnandTech 4 kB burst random read results]

https://www.anandtech.com/show/12136/the-intel-optane-ssd-900p-480gb-review/5

 

13 minutes ago, leadeater said:

I'd like to say we'd get a decent benefit when Z-NAND comes to the consumer market but I honestly just don't think it will, being SLC it's highly unattractive to the buyer and for Samsung.

Z-NAND was an interesting attempt to see if flash could approach the performance of Optane, and my impression at the time was that it looked rather unremarkable in comparison. I don't know if it has improved since, or if it even still exists. Enterprise use cases where it might make sense could remain, but it doesn't seem like it will ever be a consumer technology. Pricing matters far more, and I already questioned my own sanity for buying a 980 Pro, which was getting on for 2x the price of a budget SSD of the same capacity.

 

I'm sad to say I also doubt Optane will break through to the mainstream. I'm not sure Intel even has a manufacturing source given they parted ways with Micron, who sold off the fab making it. While Intel has the rights to make their own, I don't recall it ever being stated that they actually are. Some people worked out Micron probably made a loss on the tech.



25 minutes ago, porina said:

A source I likely would have used then was Anandtech: see how reads destroy flash in QD1 random 4k reads, whereas writes are also good but less so.

Well sure, that is true; the graphs I gave show the same thing, however mine include Z-NAND as well, and when pondering whether flash is really a limiting factor, the answer is no, not really. The issue remains for both Z-NAND and Optane: they cost more than NAND and offer marginal benefit to the average consumer, so unless something comes along with Z-NAND or Optane performance at the cost of NAND, it's DOA, just like Z-NAND and Optane were.


10 hours ago, leadeater said:

Well sure that is true, the graphs I gave show the same thing however mine shows Z-NAND as well and when pondering is Flash really a limiting factor the answer is no not really.

The AT link you later posted raises the glass-half-empty-or-half-full perspective! I probably saw that in the past, and it may have led to my current thinking that Z-NAND is still not quite at Optane levels of performance: in that AT link, for 4k random reads Z-NAND is about 4.5x the best "regular" SSD, and Optane about 8x. So no small gap there. I'm disregarding other use cases and scenarios, but again that was the one I was most interested in when buying Optane.

 

10 hours ago, leadeater said:

The issue remains with both Z-NAND and Optane, they both cost more than NAND and offer marginal benefit to the average consumer so unless something comes along with Z-NAND or Optane performance at the cost of NAND then it's DOA just like Z-NAND and Optane were.

No arguments on this point. At least Optane made it to consumer offerings. I don't recall the exact numbers, but at the time Optane was around 4x the cost/capacity of a regular SSD, though I don't recall if that was versus a high end one (Samsung Pro tier) or a more average SSD (Samsung Evo tier). Keep in mind we can see over a 2x price range per capacity between a top end and entry level SSD.



2 hours ago, porina said:

At least Optane made it to consumer offerings

Very good point. Like, you could buy Z-NAND products, but also nobody actually would; that would bring sanity into question lol. Maybe used on eBay or something, but certainly not new.

 

2 hours ago, porina said:

I don't recall the exact numbers, but at the time Optane was around 4x the cost/capacity of a regular SSD, but I don't recall if that was for a high end one (Samsung Pro tier), or a more higher-average SSD (Samsung Evo tier)

I don't remember either, other than it being very unattractive capacity-wise. I can think of really good ways to utilize a small Optane device now, but I'm still unsure if I would actually go for it. I spend far too much buying PC junk and server equipment that I never really end up using, so buying even more of it at a higher cost would just make less sense.

 

What I need to do is convince my work to give me a rack, then I'd actually use it; I don't currently due to power costs.

 

As far as which is the superior technology, that would be Optane, however it's nearly double the $/GB of Z-NAND, so if you are in the extreme performance market Z-NAND does have an appeal factor. Even at work we wouldn't consider it though, or Optane, unless they came bundled with something, or unless we drastically changed our infrastructure architecture, which ain't going to happen on the corporate application support side of things. HPC maybe, but we struggle to get funding for GPUs, so splashing out on fancy and expensive SSDs is not an option 😅


10 hours ago, comander said:

I know these are best case scenario numbers but... 

DDR2-800 in dual channel reads at 6400MBps. 
DDR3-1600 reads at 12800MBps. 

In best case scenarios, this can saturate system memory from not THAT long ago. 

It's quite long ago actually. I had DDR2 back with a Core 2 Duo E4300 and DDR3 with a Core i7 920. The E4300 is 15 years old now and the i7 920 is 13 years... That's a very long period in the tech world...

 

It's nice to see we're approaching RAM speeds with non-volatile storage though. Imagine: back then we had garbage HDDs that could barely do 80MB/s sequential reads...


6 minutes ago, RejZoR said:

It's quite long ago actually. I had DDR2 back with a Core 2 Duo E4300 and DDR3 with a Core i7 920. The E4300 is 15 years old now and the i7 920 is 13 years... That's a very long period in the tech world...

I'm using quad channel DDR3 today, ok fine I'm old I get it! 😭


1 minute ago, leadeater said:

I'm using quad channel DDR3 today, ok fine I'm old I get it! 😭

Quad channel? They had quad with DDR3 already? I thought X58 was the first beyond dual and it was just triple channel.


28 minutes ago, RejZoR said:

Quad channel? They had quad with DDR3 already? I thought X58 was the first beyond dual and it was just triple channel.

X79 was still DDR3, so the 4930K is quad channel DDR3.


1 hour ago, leadeater said:

Very good point, like you could buy Z-NAND products but also nobody actually would. That would bring sanity in to question lol. Maybe used on ebay or something but certainly not new.

Used enterprise products can be interesting as an alternative performance option, but unfortunately where I live the used market isn't very big so interesting things are few and far between, and go for way over what you might see in the US for example.

 

1 hour ago, leadeater said:

I could think of really good ways to utilize a small Optane device now but I'm still unsure if I would actually go for it.

I vaguely recall the smaller consumer "Optane Memory" devices weren't so hot in some performance areas, presumably as it struggled to scale down to lower capacities.

 

1 hour ago, leadeater said:

I spend far too much on buying PC junk and server equipment that I never really end up using so buying even more and those being at a higher costs would just make less sense.

A common problem around here I feel 😄 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


22 hours ago, Forbidden Wafer said:

Mine have been below 100% for a long while, so I hope that won't be the case.

I doubt it'll happen in your case then. It makes sense that it happened to my 840 EVOs, since the one I used in my main machine had something like 62TBW and the other one is at something like 40TBW.

Main rig on profile

VAULT - File Server

Spoiler

Intel Core i5 11400 w/ Shadow Rock LP, 2x16GB SP GAMING 3200MHz CL16, ASUS PRIME Z590-A, 2x LSI 9211-8i, Fractal Define 7, 256GB Team MP33, 3x 6TB WD Red Pro (general storage), 3x 1TB Seagate Barracuda (dumping ground), 3x 8TB WD White-Label (Plex) (all 3 arrays in their respective Windows Parity storage spaces), Corsair RM750x, Windows 11 Education

Sleeper HP Pavilion A6137C

Spoiler

Intel Core i7 6700K @ 4.4GHz, 4x8GB G.SKILL Ares 1800MHz CL10, ASUS Z170M-E D3, 128GB Team MP33, 1TB Seagate Barracuda, 320GB Samsung Spinpoint (for video capture), MSI GTX 970 100ME, EVGA 650G1, Windows 10 Pro

Mac Mini (Late 2020)

Spoiler

Apple M1, 8GB RAM, 256GB, macOS Sonoma

Consoles: Softmodded 1.4 Xbox w/ 500GB HDD, Xbox 360 Elite 120GB Falcon, XB1X w/2TB MX500, Xbox Series X, PS1 1001, PS2 Slim 70000 w/ FreeMcBoot, PS4 Pro 7015B 1TB (retired), PS5 Digital, Nintendo Switch OLED, Nintendo Wii RVL-001 (black)


On 12/13/2021 at 4:58 PM, PeachGr said:

I see nvme drives to replace ram in the future on office desktops, or work as ram extensions, same way they do on the recent phones

Nah, why? RAM is faster.

CPU: Ryzen 5800X3D | Motherboard: Gigabyte B550 Elite V2 | RAM: G.Skill Aegis 2x16gb 3200 @3600mhz | PSU: EVGA SuperNova 750 G3 | Monitor: LG 27GL850-B , Samsung C27HG70 | 
GPU: Red Devil RX 7900XT | Sound: Odac + Fiio E09K | Case: Fractal Design R6 TG Blackout |Storage: MP510 960gb and 860 Evo 500gb | Cooling: CPU: Noctua NH-D15 with one fan

FS in Denmark/EU:

Asus Dual GTX 1060 3GB. Used maximum 4 months total. Looks like new. Card never opened. Give me a price. 


Just now, DoctorNick said:

Nah why? Ram is faster

essentially because that could change



People in this forum always forget that such stuff is aimed at data centers and other HPC environments, not their gaming PCs. Maybe in 10~15 years this will trickle down with an actual use case for end consumers.

 

23 hours ago, PeachGr said:

As you said, they are on a short of integrated GPU, and it came recently on some android phones, like Samsung and Xiaomi ones

Is it this thing?

https://www.techradar.com/news/explained-virtual-ram-expansion-on-smartphones

 

If so, that's the regular swap or pagefile that has existed on PCs for a long time, and whenever you fill up your RAM and need to use it, performance sucks. You won't notice that much on a phone since you don't really multitask much on it.

 

22 hours ago, porina said:

An unoptimised system with Optane was easily moving over 250MB/s. I say unoptimised, because Optane is so fast the CPU is a significant performance impactor. I don't recall the exact numbers but it will be on a thread somewhere on this forum. I saw a significant gain from overclocking a CPU from stock 4 GHz to 5 GHz for example, and you can see a drop in performance if you use chipset connected PCIe lanes as opposed to CPU lanes.

With Linux and the latest io_uring patches you can almost saturate an Optane drive:

He managed even higher IOPS with Alder Lake.

 

 

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


3 hours ago, porina said:

Used enterprise products can be interesting as an alternative performance option, but unfortunately where I live the used market isn't very big so interesting things are few and far between, and go for way over what you might see in the US for example.

Far better than me. I pay to get stuff shipped from the US, UK and China to NZ lol. Not so bad if it's a CPU, a lot worse if it's an entire chassis part assembly for the second CPU option of your IBM server. Yes, the second socket is on a separate board, sooooo happy about that.


1 minute ago, leadeater said:

Far better than me. I pay to get stuff shipped from US, UK and China to NZ lol. Not so bad if it's a CPU, a lot worse if it's an entire chassis part assembly for the second CPU option of your IBM  server. Yes the second socket is on a separate board, sooooo happy about that.

sounds like shipping costs are a pain in Australia


