
AMD Ryzen has issues with high frequency RAM! - Fix will come in 1-2 months

27 minutes ago, kladzen said:

Just saw that Biostar has memory support for 3600 MHz kits with AMD Ryzen.

 

Biostar x370 gt7 - http://www.biostar.com.tw/app/en/mb/introduction.php?S_ID=874#memorysupport

 

Search for kit:  F4-3600C17D-8GVK

Interesting, because the motherboard specs say that 2667 is the max. Perhaps it will downclock it? 

The ability to google properly is a skill of its own. 



 

Edit: I see it's been said here already, whoops.

RyzenAir : AMD R5 3600 | AsRock AB350M Pro4 | 32gb Aegis DDR4 3000 | GTX 1070 FE | Fractal Design Node 804
RyzenITX : Ryzen 7 1700 | GA-AB350N-Gaming WIFI | 16gb DDR4 2666 | GTX 1060 | Cougar QBX 

 

PSU Tier list

 


8 hours ago, kladzen said:

Just saw that Biostar has memory support for 3600 MHz kits with AMD Ryzen.

 

Biostar x370 gt7 - http://www.biostar.com.tw/app/en/mb/introduction.php?S_ID=874#memorysupport

 

Search for kit:  F4-3600C17D-8GVK

The problem is the speed limit with 4 sticks vs. 2 sticks of RAM.
Most motherboard manufacturers only list 4-DIMM kits at lower speeds (~2400-2666 MHz), not above, and that is what the ASUS representative said: with 4 DIMMs the speed is limited.


Yo, not saying we should believe this.
BUT... Corsair is showing a Ryzen build running with all the memory slots populated at 3000MHz, albeit on the MSI Titanium motherboard.
 

As always, take it with a grain of salt, reviews are just a couple of days away.

Also, Steve from Hardware Unboxed says he was having problems with ASUS motherboards (no mention of what specifically).
 

___________________________________________________________________________________________

So just putting it out there -- maybe... just maybe -- the memory issue can be pinned down to ASUS only?
Because so far, from whatever other reports I've read, nothing seems to suggest that other mobos are suffering from the same issue. Could be wrong...
Man this wait is killing me... f**k NDAs & embargoes LOL!


3 minutes ago, blackmambo said:

Yo, not saying we should believe this.
BUT... Corsair is showing a Ryzen build running with all the memory slots populated at 3000MHz, albeit on the MSI Titanium motherboard.
 

As always, take it with a grain of salt, reviews are just a couple of days away.

Also, Steve from Hardware Unboxed says he was having problems with ASUS motherboards (no mention of what specifically).
 

___________________________________________________________________________________________

So just putting it out there -- maybe... just maybe -- the memory issue can be pinned down to ASUS only?
Because so far, from whatever other reports I've read, nothing seems to suggest that other mobos are suffering from the same issue. Could be wrong...
Man this wait is killing me... f**k NDAs & embargoes LOL!

I'd say Asus is just having motherboard issues again, just like in the days of LGA775 and the P45 chipset (it takes some really weird/illogical settings and stick installation order to get dual-channel DDR2 1066 running correctly on most Asus P45 motherboards).

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


Dunno if it's been posted already, but: http://semiaccurate.com/forums/showpost.php?p=283673&postcount=7443

 

TL;DR: Ryzen's IMC is beyond amazeballs at talking to memory (we're talking 98%+ efficiency here), resulting in much higher actual GB/s throughput at the same memory clocks than Intel's (around 75% efficiency), which means that even with lower-clocked memory it'll still kick butt.
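To put rough numbers on what an efficiency gap like that would mean, here is a minimal back-of-the-envelope sketch in Python; the 98% and 75% figures are just the claim from the post above, not measured values.

```python
# Peak DDR4 bandwidth: transfer rate (MT/s) x 8 bytes per 64-bit channel
# x number of channels, then scaled by controller efficiency.
def effective_bandwidth_gbs(mt_per_s, channels, efficiency):
    peak_gbs = mt_per_s * 8 * channels / 1000  # MB/s -> GB/s (decimal)
    return peak_gbs * efficiency

for label, eff in [("claimed Ryzen ~98%", 0.98), ("claimed Intel ~75%", 0.75)]:
    bw = effective_bandwidth_gbs(2666, channels=2, efficiency=eff)
    print(f"DDR4-2666 dual channel, {label}: ~{bw:.1f} GB/s effective")
```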

Ye ole' train


9 minutes ago, blackmambo said:

Yo, not saying we should believe this.
BUT... Corsair is showing a Ryzen build running with all the memory slots populated at 3000MHz, albeit on the MSI Titanium motherboard.
 

 

Interestingly, they do indeed say 3000 MHz with 4 DIMMs in the video, but on their site (http://www.corsair.com/ryzen) they only list 4-DIMM kits up to 2400 MHz and dual-channel kits up to 3000 MHz.


8 minutes ago, Dabombinable said:

I'd say Asus is just having motherboard issues again, just like in the days of LGA775 and the P45 chipset (it takes some really weird/illogical settings and stick installation order to get dual-channel DDR2 1066 running correctly on most Asus P45 motherboards).

That's what I'd like to believe.

Not saying that's the case, but all things are pretty much pointing towards ASUS being incompetent.

1 minute ago, TommyNL said:

Interestingly, they do indeed say 3000 MHz with 4 DIMMs in the video, but on their site (http://www.corsair.com/ryzen) they only list 4-DIMM kits up to 2400 MHz and dual-channel kits up to 3000 MHz.

IKR, that's exactly why I said we shouldn't believe it until reviews are out. But I mean... part of me wants to give Corsair the benefit of the doubt -- why would they risk their reputation by releasing such blatant misinformation?


On 2/26/2017 at 3:19 PM, MageTank said:

If you could specify exactly where I was nitpicking, I'd gladly admit to it. Last I checked, the person you quoted made a general statement (the first half being true the vast majority of the time). You seem so keen on explaining without providing any sources for people to study up on. If your intent is to offer clarification, provide the means for people to understand what you are saying. Dumbing it down to "CPU Overhead" is easier, because it's terminology that's commonly used to describe the type of CPU bottleneck that holds a GPU back. The entire industry uses it, as do the people who researched memory and its impact on gaming performance. Look back as far as a few years ago, and you see titles like this: http://www.overclock.net/t/1487162/an-independent-study-does-the-speed-of-ram-directly-affect-fps-during-high-cpu-overhead-scenarios

http://www.overclock.net/t/1586767/digital-foundry-memory-overclocking-and-how-it-affects-fps-in-8-different-games

Looking at the videos on that second link, you have DigitalFoundry using the words "CPU Overhead". Why? Because a blanket term is easier to convey than trying to explain CPU I/O to people in graphic detail. While it might not be the "correct" terminology to use, as long as it conveys the same information, it's good enough.

 

If you have information as to why memory bandwidth is helping alleviate CPU overhead, I'd appreciate it. I know that it helps, but I personally lack the understanding of why it helps. All I know is, when a specific part of a game is holding my GPU back on the CPU side of things, faster memory makes a very noticeable difference in minimum framerates. If we teach people why that is, then we can finally put that old LTT video to rest. 

Yeah, sure. You say that in the context of gaming, CPU overhead is I/O overhead. I do know what you are getting at.

 

How do you conclude it is true the vast majority of the time? I would bet that in the vast majority of games, you wouldn't see much benefit going from 1866 MHz to 2400 MHz. Sure, for big new titles that isn't going to be true.

 

It might be easier, but it clearly gives the wrong idea to those not in the know. So without any further explanation, it leaves people with the wrong impression. Maybe those sites should be more descriptive in their reviews and also include memory in their analysis instead of dumbing it down to a CPU/GPU bottleneck. I mean, systems today are guaranteed to have a CPU, GPU, memory, storage and network. All can have a say in the final performance.

 

Also, the gaming industry isn't famous for its correctness, and the hardware review industry is mostly filled with random people unboxing products and reading what it says on the box.

It clearly doesn't convey the same information, and this very thread is proof of that.

 

I can't be certain, because I haven't run any kind of performance analysis on any game, but my guess would be: the game code has many serialized I/O requests which can halt the CPU. Faster memory brings down the real-time latency, IIRC. A simple example would be a loop with one or more I/O requests per iteration.
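As a toy illustration of that "loop with a serialized request per iteration" point, here is a minimal sketch; the latency and compute numbers are made-up placeholders, not measurements. When each iteration has to wait on a dependent memory access, total time scales almost directly with memory latency.

```python
# Toy model: every loop iteration issues one dependent memory access that
# cannot be overlapped with the next iteration's work, so each full memory
# latency is paid serially.
def serialized_loop_ms(iterations, compute_ns, mem_latency_ns):
    return iterations * (compute_ns + mem_latency_ns) / 1e6

N = 1_000_000
for label, latency_ns in [("slower RAM (~90 ns)", 90), ("faster RAM (~70 ns)", 70)]:
    print(f"{label}: ~{serialized_loop_ms(N, compute_ns=5, mem_latency_ns=latency_ns):.0f} ms")
```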

 

On 2/26/2017 at 10:04 PM, Ryan_Vickers said:

Well that's a pretty big discovery for me... seems the tech industry is rampant with people calling things the wrong thing.

That is how language evolves. People misuse a word enough and suddenly the very meaning of the word changes.

 

On 2/26/2017 at 10:15 PM, Dabombinable said:

You are CPU bound as the CPU can't process instructions fast enough and has to start queuing everything into system memory, thus the more bandwidth and the lower the latency, the better. And if in the same situation people were I/O bound, they wouldn't see a performance increase from upgrading to a more powerful CPU without upgrading the RAM.

You are probably forgetting why the CPU can't process instructions fast enough. I mean, most instructions can be executed in a billionth of a second, but nearly all instructions require at least one operand (some require 2 or 3), and SIMD instructions require vastly more. Each I/O request takes much longer than executing an instruction. Sadly, you have more operands than instructions.
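A common textbook way to capture that "executing is cheap, fetching operands is not" point is the cycles-per-instruction-with-memory-stalls formula; a minimal sketch with assumed, purely illustrative numbers:

```python
# Effective CPI = base CPI + memory accesses per instruction
#                 * cache miss rate * miss penalty (in cycles).
def effective_cpi(base_cpi, mem_refs_per_instr, miss_rate, miss_penalty_cycles):
    return base_cpi + mem_refs_per_instr * miss_rate * miss_penalty_cycles

# Assumed: a core that could retire 1 instruction per cycle, ~1.3 memory
# operands per instruction, 2% of them missing all caches, ~200-cycle DRAM trip.
print(effective_cpi(1.0, 1.3, 0.02, 200))  # 6.2 -> memory stalls dominate
```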

 

You are forgetting a few things: a more powerful CPU can have a better I/O system (Intel's enthusiast platform has better I/O connectivity), more cache (to hide the latency of outgoing requests), and game code can behave differently with more cores.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


7 hours ago, Tomsen said:

You are probably forgetting why the CPU can't process instructions fast enough. I mean, most instructions can be executed in a billionth of a second, but nearly all instructions require at least one operand (some require 2 or 3), and SIMD instructions require vastly more. Each I/O request takes much longer than executing an instruction. Sadly, you have more operands than instructions.

 

You are forgetting a few things: a more powerful CPU can have a better I/O system (Intel's enthusiast platform has better I/O connectivity), more cache (to hide the latency of outgoing requests), and game code can behave differently with more cores.

I think I'll put you in the same box I had Patrick in: "needs to provide more than hearsay". The performance gain from faster RAM is felt across a huge range of CPUs, dating from the days of AMD's K6 and Intel's P6 to the present, with the IMCs in Skylake and Kaby Lake being better than those preceding them, meaning they can make better use of system memory and thus gain more performance from it in some games and applications. Notice as well that the i7 5775C, due to its 128MB L4 cache, can in some situations outperform a 6700K even at stock, which is further support for "faster memory = greater performance".

 

Would you like me to run a suite of cache and memory benchmarks followed by game and synthetic benchmarks tomorrow with the following:

  • Xeon X5450/QX6850/Pentium E6500K/C2D E8500/Pentium 4 631@stock with DDR2 667-1066 5-6-6-18 (better timings than most DDR3 1333 sticks at 1066)
  • A8 4555M with dual channel DDR3 1333 (2GB soldered on with no SPD,+2GB Hynix stick)
  • Phenom II P920 and N970 with DDR3 1066 and 1333 respectively
  • i5 470UM with 8GB DDR3 800 (I still don't know what Intel was smoking with that thing's IMC).
  • Pentium M770 with 2GB DDR2 400
  • Pentium 4 HT 3.2GHz (Northwood) with 2GB single channel DDR 330 (it should run at 400MHz, but the ATi chipset for what ever reason won't let it)
  • 2x Pentium III 1000EB with 2GB PC133 SDRAM in 4 way interleave mode (on an Abit VP6)

I've already got the raw data on the bandwidth (read, write and copy) as well as latency, averaged across 5 runs, for all bar the Pentium III and Phenom II (using AIDA64). I've also got the screenshots still to go with each of the results as well.
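For anyone who wants to tabulate that kind of data the same way, a minimal sketch for averaging five runs per metric; the numbers below are placeholders, not the actual AIDA64 results mentioned above.

```python
from statistics import mean

# Hypothetical per-run results: bandwidth in MB/s, latency in ns.
runs = {
    "read":    [11800, 11750, 11820, 11790, 11810],
    "write":   [11200, 11180, 11230, 11210, 11190],
    "copy":    [10900, 10880, 10950, 10920, 10910],
    "latency": [68.2, 68.5, 68.1, 68.4, 68.3],
}

for metric, values in runs.items():
    unit = "ns" if metric == "latency" else "MB/s"
    print(f"{metric:>7}: {mean(values):.1f} {unit} (average of {len(values)} runs)")
```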

http://www.anandtech.com/show/9320/intel-broadwell-review-i7-5775c-i5-5675c/4

Also note the clock speed difference of the stock 4790K vs the 5775C.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


7 hours ago, Tomsen said:

Yeah, sure. You say that in the context of gaming, CPU overhead is I/O overhead. I do know what you are getting at.

 

How do you conclude it is true the vast majority of the time? I would bet that in the vast majority of games, you wouldn't see much benefit going from 1866 MHz to 2400 MHz. Sure, for big new titles that isn't going to be true.

 

It might be easier, but it clearly gives the wrong idea to those not in the know. So without any further explanation, it leaves people with the wrong impression. Maybe those sites should be more descriptive in their reviews and also include memory in their analysis instead of dumbing it down to a CPU/GPU bottleneck. I mean, systems today are guaranteed to have a CPU, GPU, memory, storage and network. All can have a say in the final performance.

 

Also, the gaming industry isn't famous for its correctness, and the hardware review industry is mostly filled with random people unboxing products and reading what it says on the box.

It clearly doesn't convey the same information, and this very thread is proof of that.

 

I can't be certain, because I haven't run any kind of performance analysis on any game, but my guess would be: the game code has many serialized I/O requests which can halt the CPU. Faster memory brings down the real-time latency, IIRC. A simple example would be a loop with one or more I/O requests per iteration.

 

That is how language evolves. People misuse a word enough and suddenly the very meaning of the word changes.

 

You are probably forgetting why the CPU can't process instructions fast enough. I mean, most instructions can be executed in a billionth of a second, but nearly all instructions require at least one operand (some require 2 or 3), and SIMD instructions require vastly more. Each I/O request takes much longer than executing an instruction. Sadly, you have more operands than instructions.

 

You are forgetting a few things: a more powerful CPU can have a better I/O system (Intel's enthusiast platform has better I/O connectivity), more cache (to hide the latency of outgoing requests), and game code can behave differently with more cores.

How do I conclude it's true the majority of the time? That's simple: I actually test the things I speak about. I even went as far back as an MMO I played in 2005, Silkroad Online, to see whether or not faster memory helped, because I remember that game running poorly, even with a good GPU. Sure enough, faster memory helped with minimum dips. Perfect World, released only a year or two later, was the same result. I went back to test other "older" titles (Metro, Crysis, AC3, etc.) and across the board, faster RAM helped. The only time I've seen it make no difference is on games that are limited by the engine itself, or random titles (Shadow of Mordor). I've also been extremely vocal on this forum, asking people to test every title they can think of and provide the results. Normally I am met with the LTT video that RAM doesn't matter, but those that do take me up on my offer end up with positive results.

 

You also seem hung up on the "CPU Overhead" part, but understand this. It still applies:

Quote

In computer science, overhead is any combination of excess or indirect computation time, memory, bandwidth, or other resources that are required to attain a particular goal. It is a special case of engineering overhead.

You've also failed to sufficiently prove that this IS a CPU I/O problem. That's all I ask for, since you seem so certain that it is. The funny thing is, I agree with you that it's an I/O issue, but since I myself cannot prove it (I know absolutely nothing about programming) I can't make the claim. I can, however, use "CPU overhead", because both "CPU bound" and "I/O bound" fall under that exact same banner. The CPU has overhead, whether it's induced by a lack of threads, clock speed, or a lack of fast enough RAM to properly feed it. Regardless of the cause, it's still CPU overhead.

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


On 28/2/2017 at 1:18 AM, Bouzoo said:

Interesting, because the motherboard specs say that 2667 is the max. Perhaps it will downclock it? 

2666(7) is the maximum officially supported DDR4 speed on this platform. Everything above that is basically a memory OC, which motherboard vendors also note on their product pages. XMP, for instance, auto-overclocks the memory.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


On 28/02/2017 at 0:18 AM, Bouzoo said:

Interesting, because the motherboard specs say that 2667 is the max. Perhaps it will downclock it? 

 

1 hour ago, Notional said:

2666(7) is the maximum officially supported DDR4 speed on this platform. Everything above that is basically a memory OC, which motherboard vendors also note on their product pages. XMP, for instance, auto-overclocks the memory.

That's true; even Intel's 5960X lists DDR4 memory support of 1333/1600/2133, which are the JEDEC standards.
http://ark.intel.com/products/82930/Intel-Core-i7-5960X-Processor-Extreme-Edition-20M-Cache-up-to-3_50-GHz

 

The i7 6900K lists 2133/2400

https://ark.intel.com/products/94196/Intel-Core-i7-6900K-Processor-20M-Cache-up-to-3_70-GHz

 

The Asus X99 Deluxe motherboard's controller defaults to 2133 MHz before needing to be OC'd.

http://www.guru3d.com/articles-pages/asus-x99-deluxe-motherboard-review.html

 

If you want more, you simply have to overclock it.
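For reference, the "DDR" in those ratings means two transfers per clock, so the actual memory clock is half the advertised figure; a quick sketch (these are the common JEDEC DDR4 speed grades):

```python
# DDR = double data rate: two transfers per clock, so the real memory clock
# is half the advertised MT/s rating.
jedec_ddr4 = [1600, 1866, 2133, 2400, 2666]

for rate in jedec_ddr4:
    print(f"DDR4-{rate}: actual clock ~{rate / 2:.0f} MHz")
```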

 

Asus actually had lots of issues with RAM and XMP speeds for a very long time. I couldn't even run my RAM at 2400 for nearly 2 months after launch, and even after that they still had issues. It took quite a few BIOS updates before they got it stable and working well.

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


On 3/1/2017 at 11:30 AM, Dabombinable said:

I think I'll put you in the same box I had Patrick in: "needs to provide more than hearsay". The performance gain from faster RAM is felt across a huge range of CPUs, dating from the days of AMD's K6 and Intel's P6 to the present, with the IMCs in Skylake and Kaby Lake being better than those preceding them, meaning they can make better use of system memory and thus gain more performance from it in some games and applications. Notice as well that the i7 5775C, due to its 128MB L4 cache, can in some situations outperform a 6700K even at stock, which is further support for "faster memory = greater performance".

 

Would you like me to run a suite of cache and memory benchmarks followed by game and synthetic benchmarks tomorrow with the following:

  • Xeon X5450/QX6850/Pentium E6500K/C2D E8500/Pentium 4 631@stock with DDR2 667-1066 5-6-6-18 (better timings than most DDR3 1333 sticks at 1066)
  • A8 4555M with dual channel DDR3 1333 (2GB soldered on with no SPD,+2GB Hynix stick)
  • Phenom II P920 and N970 with DDR3 1066 and 1333 respectively
  • i5 470UM with 8GB DDR3 800 (I still don't know what Intel was smoking with that thing's IMC).
  • Pentium M770 with 2GB DDR2 400
  • Pentium 4 HT 3.2GHz (Northwood) with 2GB single channel DDR 330 (it should run at 400MHz, but the ATi chipset for what ever reason won't let it)
  • 2x Pentium III 1000EB with 2GB PC133 SDRAM in 4 way interleave mode (on an Abit VP6)

I've already got the raw data on the bandwidth (read, write and copy) as well as latency, averaged across 5 runs, for all bar the Pentium III and Phenom II (using AIDA64). I've also got the screenshots still to go with each of the results as well.

http://www.anandtech.com/show/9320/intel-broadwell-review-i7-5775c-i5-5675c/4

Also note the clock speed difference of the stock 4790K vs the 5775C.

I'm not quite sure how anything you posted goes against what I am saying. Did you understand what I wrote or not?

I don't need to provide anything other than hearsay (you are all providing it for me), so either you don't understand the argument or you got your head up your ass.

 

I'm very aware of how that can sound offensive, but please bear in mind that we are talking about a simple bottleneck example. If you don't understand the fundamental idea behind a bottleneck, then I would advise looking it up. To cut it short: you aren't going to gain a major increase in performance without alleviating the bottleneck. Can you agree to that?

 

On 3/1/2017 at 0:02 PM, MageTank said:

How do I conclude it's true the majority of the time? That's simple: I actually test the things I speak about. I even went as far back as an MMO I played in 2005, Silkroad Online, to see whether or not faster memory helped, because I remember that game running poorly, even with a good GPU. Sure enough, faster memory helped with minimum dips. Perfect World, released only a year or two later, was the same result. I went back to test other "older" titles (Metro, Crysis, AC3, etc.) and across the board, faster RAM helped. The only time I've seen it make no difference is on games that are limited by the engine itself, or random titles (Shadow of Mordor). I've also been extremely vocal on this forum, asking people to test every title they can think of and provide the results. Normally I am met with the LTT video that RAM doesn't matter, but those that do take me up on my offer end up with positive results.

 

You also seem hung up on the "CPU Overhead" part, but understand this. It still applies:

You've also failed to sufficiently prove that this IS a CPU I/O problem. That's all I ask for, since you seem so certain that it is. The funny thing is, I agree with you that it's an I/O issue, but since I myself cannot prove it (I know absolutely nothing about programming) I can't make the claim. I can, however, use "CPU overhead", because both "CPU bound" and "I/O bound" fall under that exact same banner. The CPU has overhead, whether it's induced by a lack of threads, clock speed, or a lack of fast enough RAM to properly feed it. Regardless of the cause, it's still CPU overhead.

@MageTank I never claimed that your testing is wrong, simply that you haven't tested the vast majority of games that exist. I think you will agree with that statement, right? How do we even define a game; do Facebook games also count?

 

I'm not really hung up on "CPU Overhead" (do note how the discussion went from "CPU bound" and "I/O bound" to "CPU overhead"). If you keep changing the subject slightly for each comment, then sure, at the end we will be talking about something completely different.

 

I have failed to prove anything, because how exactly would you like me to prove it? I could make the same argument and say that you haven't sufficiently proven otherwise. Sadly, reality is on my side, and that is often shown by the examples you are putting forward. The issue I have with dumbing it all down to one common expression is that it gives rise to ignorance. We could just as well dumb everything down to being GPU overhead.

 

Gaming isn't affected by just two components of your system, and trying to dumb it down to that will lead to ignorance.

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


10 minutes ago, Tomsen said:

I'm not quite sure how anything you posted goes against what I am saying. Did you understand what I wrote or not?

I don't need to provide anything other than hearsay (you are all providing it for me), so either you don't understand the argument or you got your head up your ass.

 

I'm very aware of how that can sound offensive, but please bear in mind that we are talking about a simple bottleneck example. If you don't understand the fundamental idea behind a bottleneck, then I would advise looking it up. To cut it short: you aren't going to gain a major increase in performance without alleviating the bottleneck. Can you agree to that?

 

@MageTank I never claimed that your testing is wrong, simply that you haven't tested the vast majority of games that exist. I think you will agree with that statement, right? How do we even define a game; do Facebook games also count?

 

I'm not really hung up on "CPU Overhead" (do note how the discussion went from "CPU bound" and "I/O bound" to "CPU overhead"). If you keep changing the subject slightly for each comment, then sure, at the end we will be talking about something completely different.

 

I have failed to prove anything, because how exactly would you like me to prove it? I could make the same argument and say that you haven't sufficiently proven otherwise. Sadly, reality is on my side, and that is often shown by the examples you are putting forward. The issue I have with dumbing it all down to one common expression is that it gives rise to ignorance. We could just as well dumb everything down to being GPU overhead.

 

Gaming isn't affected by just two components of your system, and trying to dumb it down to that will lead to ignorance.

To recap: when a CPU cannot process instructions fast enough, whether that be due to its IPC or its clock speed, it has to temporarily store instructions in the cache and system memory. If the cache is large enough and fast enough, with low latency, then faster system memory has a smaller impact than it otherwise would. An example of better performance through a large and fast cache is the i7 5775C. The examples of faster system memory were the videos that I linked previously. Memory allowing greater performance isn't because it reduces an I/O bottleneck; it's a reduction in the CPU bottleneck. And yes, "You aren't going to gain a major increase in performance without alleviating the bottleneck" is correct. It can, however, be done by either replacing the CPU with a faster one, using faster RAM, or a combination of both. Also, my evidence isn't hearsay, so how about you provide some of your own that isn't as well.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


31 minutes ago, Dabombinable said:

To recap: when a CPU cannot process instructions fast enough, whether that be due to its IPC or its clock speed, it has to temporarily store instructions in the cache and system memory. If the cache is large enough and fast enough, with low latency, then faster system memory has a smaller impact than it otherwise would. An example of better performance through a large and fast cache is the i7 5775C. The examples of faster system memory were the videos that I linked previously. Memory allowing greater performance isn't because it reduces an I/O bottleneck; it's a reduction in the CPU bottleneck. And yes, "You aren't going to gain a major increase in performance without alleviating the bottleneck" is correct. It can, however, be done by either replacing the CPU with a faster one, using faster RAM, or a combination of both. Also, my evidence isn't hearsay, so how about you provide some of your own that isn't as well.

I already went over this, but I'll do it again, and correct a few mistakes you made.

 

You are wrong as to why the CPU can't process instructions fast enough. Clock speed is irrelevant (a lower clock speed does make the latency to system memory appear lower, that is, if you count it in CPU cycles and not real-time ns), and IPC is just a measure of the number of instructions per cycle. Neither actually explains why. Just to reiterate my last response to this statement: most instructions can be executed in a billionth of a second, but each instruction requires at least one operand. Some require 2 or 3. SIMD instructions require vastly more. Operands are the actual data you are performing the instruction on. Naturally, you can quickly end up with more operands than instructions.

 

Also, the CPU will temporarily store EVERY instruction it executes in the L1 instruction cache. Why would it store them in system memory? That is where the executable is, and where it gets the instructions from in the first place. That wouldn't make much sense...

 

I already went over how cache can have an influence on I/O dependency.

 

That is the wrong way to frame the problem. If we change system memory to networking, you wouldn't want to categorize it as a CPU bottleneck, would you? The bottleneck would be the network latency, not the CPU. Sure, the CPU gets affected by the network bottleneck, but that doesn't mean that the CPU is the bottleneck (that is the nature of a bottleneck: one thing is holding everything else back). It is just a wrong way to put things.

 

Also, are you taking my reply to @MageTank as a reply to you?

 

 

 

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


2 minutes ago, Tomsen said:

I already went over this, but I'll do it again, and correct a few mistakes you made.

 

You are wrong as to why the CPU can't process instructions fast enough. Clock speed is irrelevant (a lower clock speed does make the latency to system memory appear lower, that is, if you count it in CPU cycles and not real-time ns), and IPC is just a measure of the number of instructions per cycle. Neither actually explains why. Just to reiterate my last response to this statement: most instructions can be executed in a billionth of a second, but each instruction requires at least one operand. Some require 2 or 3. SIMD instructions require vastly more. Operands are the actual data you are performing the instruction on. Naturally, you can quickly end up with more operands than instructions.

 

Also, the CPU will temporarily store EVERY instruction it executes in the L1 instruction cache. Why would it store them in system memory? That is where the executable is, and where it gets the instructions from in the first place. That wouldn't make much sense...

 

I already went over how cache can have an influence on I/O dependency.

 

That is the wrong way to frame the problem. If we change system memory to networking, you wouldn't want to categorize it as a CPU bottleneck, would you? The bottleneck would be the network latency, not the CPU. Sure, the CPU gets affected by the network bottleneck, but that doesn't mean that the CPU is the bottleneck (that is the nature of a bottleneck: one thing is holding everything else back). It is just a wrong way to put things.

 

Also, are you taking my reply to @MageTank as a reply to you?

 

 

 

The instructions get stored in system memory when the cache gets filled, with each cache level overflowing to the next and eventually to system memory. That has been known for at least 20 years: http://www.anandtech.com/show/211/3

Quote

As originally discussed in the K6-3 review, the cache on a motherboard equipped with a K6-3 processor immediately becomes the system’s L3 cache. Those of you with motherboards with 512KB, 1MB or 2MB of L2 cache on-board will soon have the ability to take advantage of 512KB, 1MB or 2MB of L3 cache if you make the upgrade to the K6-3.


From the initial discussion surrounding the importance of cache, we concluded that the less often a CPU has to return to the system memory for data retrieval, the faster the system’s overall performance will be. For example, let’s assume that the K6-3 is attempting to retrieve a segment of data that it could not retrieve from its on-chip L1 or L2 cache. On a system with no L3 cache (no cache on the mainboard), the processor would have to go directly to the slow system memory to retrieve the data. On a system with on-board L3 cache, the processor could try the much faster L3 cache first for the data before having to resort to retrieving it directly from the system memory, improving performance considerably for that single data retrieval operation.
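The point in that quoted passage, that fewer trips to system memory means better overall performance, is what the classic average-memory-access-time formula captures; here is a minimal sketch with assumed, illustrative latencies and miss rates.

```python
# Average memory access time: AMAT = hit time + miss rate * miss penalty,
# applied recursively per cache level. All numbers below are assumptions.
def amat(hit_ns, miss_rate, miss_penalty_ns):
    return hit_ns + miss_rate * miss_penalty_ns

dram_ns = 80.0
# No L3 on the board: an L2 miss goes straight to system memory.
no_l3 = amat(hit_ns=4.0, miss_rate=0.10, miss_penalty_ns=dram_ns)
# With an L3: most L2 misses are caught by a ~15 ns cache instead of DRAM.
with_l3 = amat(hit_ns=4.0, miss_rate=0.10,
               miss_penalty_ns=amat(15.0, 0.30, dram_ns))
print(f"no L3: {no_l3:.1f} ns   with L3: {with_l3:.1f} ns")  # 12.0 vs 7.9
```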

 

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


3 minutes ago, Dabombinable said:

The instructions get stored in system memory when the cache gets filled, with each cache level overflowing to the next and eventually to system memory. That has been known for at least 20 years: http://www.anandtech.com/show/211/3

 

OMG, it is now apparent you don't understand the very fundamental idea behind caching. The caches are holding copies of what is in system memory. Does it make sense to you to keep copies of system memory in system memory?
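To make the "caches hold copies of system memory" point concrete, here is a minimal toy sketch of a direct-mapped cache; it is purely illustrative and nothing like a real CPU's implementation.

```python
# Toy direct-mapped cache: each line holds a *copy* of a memory block that
# still lives in system memory; a miss (re)fetches the copy, never the reverse.
class ToyCache:
    def __init__(self, num_lines):
        self.lines = [None] * num_lines   # which memory block each line currently copies
        self.hits = self.misses = 0

    def access(self, block_addr):
        idx = block_addr % len(self.lines)
        if self.lines[idx] == block_addr:
            self.hits += 1                # copy already cached: no trip to DRAM
        else:
            self.misses += 1              # fetch a fresh copy from system memory
            self.lines[idx] = block_addr

cache = ToyCache(num_lines=8)
for addr in [0, 1, 2, 0, 1, 2, 8, 0]:     # 8 maps to the same line as 0 and evicts it
    cache.access(addr)
print(cache.hits, cache.misses)            # 3 hits, 5 misses
```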

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


26 minutes ago, Tomsen said:

OMG, it is now apparent you don't understand the very fundamental idea behind caching. The caches are holding copies of what is in system memory. Does it make sense to you to keep copies of system memory in system memory?

Reading through https://en.wikipedia.org/wiki/CPU_cache, I see I misunderstood that part. The fact doesn't change, though: faster memory increases performance.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


1 minute ago, Dabombinable said:

Reading through https://en.wikipedia.org/wiki/CPU_cache, I see I misunderstood that part. The fact doesn't change, though: faster memory increases performance.

I never stated otherwise?

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


15 minutes ago, Tomsen said:

I never stated otherwise?

On 26/02/2017 at 1:09 AM, Tomsen said:

Well, it isn't an absolute truth either; it depends on the exact scenario. Any time CPU overhead (and to be precise, it has to be I/O overhead) is holding the GPU back, faster memory will show an improvement. If you are CPU bound, faster memory won't do anything. This has been known for decades now.

And you said that you didn't mean GPU. I provided plenty of evidence that in CPU bound scenarios there is a performance benefit when you use faster memory.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


On 28/02/2017 at 2:49 PM, Dabombinable said:

I'd say Asus is just having motherboard issues again, just like in the days of LGA775 and the P45 chipset (it takes some really weird/illogical settings and stick installation order to get dual-channel DDR2 1066 running correctly on most Asus P45 motherboards).

More than just Asus, although the worst performance so far seems to be on Asus boards.
German review site Golem.de has found that the motherboards do seem to be a big culprit for the performance issues; they need new BIOSes before delivering better and more consistent performance.

https://translate.google.co.uk/translate?hl=en&sl=de&u=https://www.golem.de/news/ryzen-7-1800x-im-test-amd-ist-endlich-zurueck-1703-125996-4.html&prev=search
 

Quote

The MSI board was delivered with BIOS version 113; then last Friday a new one appeared.

Version 117, which is still the current one, improved speed and stability. Where we could still count on sporadic bluescreens with the older UEFI, the board is now stable. Much more important, however, is the drastically higher performance in games and in real-world packing with 7-Zip. The release notes mention, among other things, a fixed problem with the memory clock and its timings, as well as the voltage.

 


 

Quote

Compared to the original BIOS, the new UEFI increases the frame rate in our game test course by between 4 and 26 percent, on average even 17 percent! In view of this tremendous increase in performance, we had to be certain that our values were correct, and also measured with the Asus boards. These give us a touch more speed in games than the updated MSI board.

Apart from the low performance with the older BIOS, the MSI board, like the Asus boards, is also missing an option to switch off the Simultaneous Multithreading (SMT) of the Ryzen 7 1800X. Since AMD, by its own account, suffers from something similar to what Intel formerly did with core parking, SMT partly reduces the frame rate in some games. In turn, SMT has accelerated some applications dramatically, so we would have liked to try out how the doubled thread count affects performance.

 

5950X | NH D15S | 64GB 3200Mhz | RTX 3090 | ASUS PG348Q+MG278Q

 


1 hour ago, Dabombinable said:

And you said that you didn't mean GPU. I provided plenty of evidence that in CPU bound scenarios there is a performance benefit when you use faster memory.

That is simply because you don't understand the argument in question.

My point is that you are mislabeling "I/O bound" as "CPU bound".

 

Seems like we have come full circle; do you have anything else you want to argue that hasn't already been covered in this thread?

 

Please avoid feeding the argumentative narcissistic academic monkey.

"the last 20 percent – going from demo to production-worthy algorithm – is both hard and is time-consuming. The last 20 percent is what separates the men from the boys" - Mobileye CEO


5 hours ago, Tomsen said:

That is simply because you don't understand the argument in question.

My point is that you are mislabeling "I/O bound" as "CPU bound".

 

Seems like we have come full circle; do you have anything else you want to argue that hasn't already been covered in this thread?

 

I/O bound would mean that the FSB is causing the bottleneck, not the CPU. If the CPU weren't the bottleneck, you wouldn't see a performance increase, e.g. when playing some games at 4K which would have a CPU bottleneck at 1080p.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


10 hours ago, Tomsen said:

I'm not quite sure how anything you posted goes against what I am saying. Did you understand what I wrote or not?

I don't need to provide anything other than hearsay (you are all providing it for me), so either you don't understand the argument or you got your head up your ass.

 

I'm very aware of how that can sound offensive, but please bear in mind that we are talking about a simple bottleneck example. If you don't understand the fundamental idea behind a bottleneck, then I would advise looking it up. To cut it short: you aren't going to gain a major increase in performance without alleviating the bottleneck. Can you agree to that?

 

@MageTank I never claimed that your testing is wrong, simply that you haven't tested the vast majority of games that exist. I think you will agree with that statement, right? How do we even define a game; do Facebook games also count?

 

I'm not really hung up on "CPU Overhead" (do note how the discussion went from "CPU bound" and "I/O bound" to "CPU overhead"). If you keep changing the subject slightly for each comment, then sure, at the end we will be talking about something completely different.

 

I have failed to prove anything, because how exactly would you like me to prove it? I could make the same argument and say that you haven't sufficiently proven otherwise. Sadly, reality is on my side, and that is often shown by the examples you are putting forward. The issue I have with dumbing it all down to one common expression is that it gives rise to ignorance. We could just as well dumb everything down to being GPU overhead.

 

Gaming isn't affected by just two components of your system, and trying to dumb it down to that will lead to ignorance.

The discussion was never about "CPU bound". Someone said "anything that is CPU intensive benefits from faster RAM", or something along those lines. You clarified that it was due to CPU I/O; I simply said his first part, about faster RAM mattering for gaming, was true. Then we both started arguing semantics about "absolute truths" and "vast majorities" and ended up here.

 

As for how I'd like you to prove it, I don't know. I made that perfectly clear when I said "I agree with you that it's an I/O problem, I just can't prove it". It's why I use an all-encompassing blanket term. Easier for people to understand, and requires less explanation on my part. Win-Win. It fits the description of a CPU I/O problem, almost perfectly:

Quote

The I/O bound state has been identified as a problem in computing almost since its inception. The Von Neumann architecture, which is employed by many computing devices, is based on a logically separate central processor unit which requests data from main memory, processes it and writes back the results. Since data must be moved between the CPU and memory along a bus which has a limited data transfer rate, there exists a condition that is known as the Von Neumann bottleneck. Put simply, this means that the data bandwidth between the CPU and memory tends to limit the overall speed of computation.

I just don't understand enough about the design of modern CPUs to know if such limitations still exist and are the source of the problems we see solved by faster memory. "CPU overhead" (which includes this I/O issue) is vague, but it basically means something on the CPU side of things (and memory is on the CPU side) is holding the GPU back. Is it perfect? Not by any means, but it still conveys the intent of our message. If you have high CPU overhead, faster memory might help. If faster memory doesn't help, OC the CPU as hard as you can and try again. The funny thing is, slow CPUs can bottleneck fast RAM (a G4400 vs. my Samsung B-die, for example).
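A quick way to see when the bandwidth limit described in that quote actually bites is a back-of-the-envelope roofline check; the machine numbers here are assumptions for illustration only.

```python
# Roofline-style check: a workload is memory-bound when its arithmetic
# intensity (FLOPs per byte moved) sits below peak_flops / peak_bandwidth.
def is_memory_bound(flops_per_byte, peak_gflops, peak_gbs):
    ridge_point = peak_gflops / peak_gbs   # FLOPs/byte where compute and bandwidth limits meet
    return flops_per_byte < ridge_point

# Assumed machine: ~200 GFLOP/s of compute, ~40 GB/s of DRAM bandwidth.
# A streaming kernel doing ~0.25 FLOPs per byte is firmly memory-bound.
print(is_memory_bound(0.25, peak_gflops=200, peak_gbs=40))  # True (ridge ~5 FLOPs/byte)
```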

 

Either way, I'll concede. You clearly believe in the "faster ram helps" cause, and that's good enough for me. Who knows, maybe in the future you will be able to educate me on this I/O thing and we will finally be able to tell people "why" faster ram helps? lol. 

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 

