
AMD speaks on W10 scheduler and Ryzen

@LAwLz @leadeater

Tom's reached a similar conclusion based on the existing data: http://www.tomshardware.com/news/amd-ryzen-5-1600x-1600-1500x-1500,33913.html

Quote

The CCX alignment appears pretty straightforward for the six-core Ryzen 5 models. The Ryzen 5 1600X's base/boost frequencies and TDP match the Ryzen 7 1800X's, so it's logical to assume the 1600X employs the same dual-CCX (Core Complex) architecture, albeit with a few cores disabled due to defects (or just for the sake of product segmentation).

 

Performance could vary depending upon how AMD aligns the disabled cores. We know that latency increases as data navigates the chasm between the two CCXs (via the Infinity Fabric). Lopsided core allocations (four cores on one CCX and two on the other, for instance) that vary per processor (perhaps 3+3 on some) could result in varying levels of application performance even among the same models, so that arrangement is unlikely. It's unclear if the Ryzen 5 processors will have the same 8MB of cache enabled for each four-core CCX, yielding a 16MB L3 for the entire chip. It's possible the company will also disable 2MB cache slices along with the deactivated cores (yielding a 12MB cache).

 

Most importantly, we aren't sure if the four-core models will employ a single CCX, or if AMD will continue to employ the dual-CCX design. If AMD utilizes the dual-CCX architecture, it will disable four cores (hopefully all on one CCX, or at least spread evenly between the CCX) to create the four-core SKUs.

 

A single four-core CCX, or only one active CCX on a dual-CCX chip, would help avoid many of the problems that appear to restrict Ryzen 7's gaming performance in many popular titles, and frankly, many enthusiasts are hoping this is the case. The four-core 1500X and 1400 have the same 65W TDP as the eight-core Ryzen 7 and the six-core Ryzen 5 1600X and 1600, which implies they also utilize a dual-CCX alignment. We would expect a lower TDP or significantly higher clocks within the same TDP envelope with a single-CCX architecture. Retaining the dual-CCX design with active cores on both CCX is far from ideal, and the gaming performance disparity would likely continue.

 

AMD hasn't responded to our queries, so we await further information.

+ thanks to @xAcid9 http://www.anandtech.com/show/11202/amd-announces-ryzen-5-april-11th

Quote

We have confirmation from AMD that there are no silly games going to be played with Ryzen 5. The six-core parts will be a strict 3+3 combination, while the four-core parts will use 2+2. This will be true across all CPUs, ensuring a consistent performance throughout.

 


15 hours ago, Valentyn said:

 

It's ~3-7% for gaming if you only look at gaming results, and ignore BF1.

 

That's a nice little increase in some games. One many would gladly accept, as it's the IPC difference between Haswell and Broadwell. :P

 

If you ignore BF1, it's actually a 4.1% improvement in gaming. With BF1 massively skewing the results, it ends up being ~7.5%.

 

The graph is a bit weird in that it treats the single-CCX "quad core" as the baseline and the 2+2 setup as the comparison. Since we're interested in seeing whether performance improves without the inter-CCX latency, I'd have used the 2+2 setup as the baseline, since that is most similar to how the R7 currently functions, and compared the 4+0 "quad core" against it.
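To make the baseline point concrete, here is a quick sketch with hypothetical FPS numbers (not taken from the review) showing how the same absolute gap reads as a different percentage depending on which configuration you call the baseline:

```c
#include <stdio.h>

/* Hypothetical FPS numbers, purely for illustration:
 * the same absolute gap reads as a different percentage
 * depending on which configuration is treated as the baseline. */
int main(void) {
    double fps_2p2 = 100.0; /* 2+2 CCX layout (R7-like behaviour) */
    double fps_4p0 = 107.5; /* single-CCX "quad core" */

    /* 4+0 as baseline: 2+2 shows as a ~7.0% deficit */
    printf("2+2 vs 4+0 baseline: %+.1f%%\n",
           (fps_2p2 - fps_4p0) / fps_4p0 * 100.0);

    /* 2+2 as baseline: 4+0 shows as a ~7.5% gain */
    printf("4+0 vs 2+2 baseline: %+.1f%%\n",
           (fps_4p0 - fps_2p2) / fps_2p2 * 100.0);
    return 0;
}
```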


1 hour ago, LAwLz said:

All I hear is "don't use programs which don't show the results I want them to show! You should only look at the benchmarks which agree with me!".

Showing a broad range of programs (which 10 is not, I will admit) is important when talking about what kind of performance differences we can expect. If you just look at the programs which benefit the most, then you will give a twisted picture of what people should actually expect from an update (if we get one at all, that is).

You can't just dismiss 40% of the benchmarks because they are the ones that benefit the least from it.

I'm sorry to say this, but maybe you should get your fingers out of your ears, because that is not what I'm arguing; this is outright ridiculous reasoning for our debate.

 

We, as in you and me in this very thread, are discussing the performance penalties of cross-CCX communication, so it is important to adjust our data for that.

 

Just to make an example of how ridiculous it is: if we were instead arguing about the performance benefits of SMT in multi-threaded applications, wouldn't you agree that including a bunch of single-threaded applications in our data would be dumb? It would totally skew the outcome.

It is garbage in, garbage out, in its most basic statistical form. You can't reach the right conclusion from skewed input data.

 

I can dismiss whatever percentages I want if they aren't *relevant* to our discussion.

 

1 hour ago, LAwLz said:

The average (not counting BF1, because you should not include abnormalities like that) is a 4% increase, and that's assuming the scheduler is essentially perfect, which it won't be.

So what I expect is something like a ~3% performance boost in games (less overall) IF (huge if) we get a patch.

Funny how you get to cherry-pick data :)

Kinda hard to make the case that it's an abnormality with this little data, since every other title tested also showed regression (up to 7%). You could argue that it would be an extreme case, and I would agree with you.

 

I do expect there will be a patch; when it arrives and how much it will improve the current situation is unknown. It will probably be AMD who submits it to Microsoft, so it might take some time.

 

1 hour ago, LAwLz said:

I did not mean for my post to read that way. I tagged you since I thought you'd think it was interesting data.

The whole thing about "Microsoft isn't to blame and Windows 10 is optimized for Ryzen" was meant more as a comment to those in another thread who were flaming me when I said that you shouldn't expect a 10-20% performance increase from some Windows 10 patch. And yes, I am incredibly salty from that thread, because I got quite a few warning points for arguing with the people who said my posts were just trolling, spam, idiotic, etc.

Ok, thought it was directed towards me. I do agree that you aren't going to see a 10-20% general performance increase. There might be some extreme cases, but I expect those to be extremely limited in number.

 

1 hour ago, LAwLz said:

Anyway, yes, it seems like a patch could in theory increase performance (although it would be extremely hard to actually make, and doing even a tiny thing wrong could reduce performance by a lot), but only by ~3%.

That will be like... 1 or 2 FPS in games. It's still free performance so it would be good news (assuming no drawbacks)... But it's far from those 10-20% performance gains people seem to think Microsoft is keeping from Ryzen owners.

The patch would probably be more relevant to the extreme cases.

Battlefield 1 would yield a 22 FPS improvement; certain other titles might get closer to 10 FPS (numbers taken from the French source). The number of titles that see this kind of improvement may well be limited to a handful.

 

If the application isn't constrained by cross-CCX communication, then of course there aren't going to be improvements.

 

1 hour ago, LAwLz said:

I didn't, but I think I have to. The person who wrote the report was looking at the same numbers I was looking at. Just because the author has one opinion does not mean it is the right one.

Now, this doesn't relate to this topic, but in an academic paper you can't just take out one graph and draw your own conclusion, since the author (which is not true in our case) goes into more detail and specifics in the report that explain the reasoning behind the numbers in the graph.

 

Also, you did kinda assert your own opinion to be the right one in your previous comment.


[image attachment]

 

oh god! this is hurting my brain ...

why the fuck did they design it like that? if you have CCX crosstalk, you might as well pull the data from RAM instead of from the other CCX's cache


9 minutes ago, zMeul said:

oh god! this is hurting my brain ...

why the fuck did they design it like that? if you have CCX crosstalk, you might as well pull the data from RAM instead of from the other CCX's cache

That's not quite what they're saying; cross-CCX communication is cross-CCX communication, it doesn't go through or come from RAM. What they're saying is that the speed of the interconnect is set by the memory speed, which is why we see Ryzen performance change so drastically with memory speed.
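For reference, the widely reported relationship is that the fabric clock tracks MEMCLK, i.e. half the DDR4 transfer rate (DDR moves data on both clock edges); a quick sketch of that arithmetic:

```c
#include <stdio.h>

/* Sketch of the widely reported Zen behaviour: the Infinity Fabric
 * clock tracks MEMCLK, which is half the DDR4 transfer rate, since
 * DDR exchanges data on both edges of the reference clock. */
int main(void) {
    int ddr_rates[] = { 2133, 2400, 2666, 3200 }; /* MT/s */
    int n = (int)(sizeof ddr_rates / sizeof *ddr_rates);
    for (int i = 0; i < n; i++)
        printf("DDR4-%d -> MEMCLK/fabric ~%d MHz\n",
               ddr_rates[i], ddr_rates[i] / 2);
    return 0;
}
```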


43 minutes ago, leadeater said:

That's not quite what they're saying; cross-CCX communication is cross-CCX communication, it doesn't go through or come from RAM. What they're saying is that the speed of the interconnect is set by the memory speed, which is why we see Ryzen performance change so drastically with memory speed.

no, that is what I'm saying, not what they said

I know very well they said the "fabric" bus is clocked 1:1 to the MC

 

but here's the thing I'm saying: if it's slow, why not query the RAM instead of generating a cross-CCX cache query - it will be just as slow xD


15 minutes ago, zMeul said:

no, that is what I'm saying, not what they said

I know very well they said the "fabric" bus is clocked 1:1 to the MC

 

but here's the thing I'm saying: if it's slow, why not query the RAM instead of generating a cross-CCX cache query - it will be just as slow xD

Because of higher latency. Your argument is flawed as always.


3 minutes ago, Tomsen said:

Because of higher latency. Your argument is flawed as always.

have you actually seen the abysmal latencies of the L3 cache?

and that cache needs to serve its own CCX at the same time


1 hour ago, leadeater said:

That's not quite what they're saying; cross-CCX communication is cross-CCX communication, it doesn't go through or come from RAM. What they're saying is that the speed of the interconnect is set by the memory speed, which is why we see Ryzen performance change so drastically with memory speed.

Can you explain this for a complete dumb dumb like myself? Is this Infinity Fabric thing responsible for this broken IMC?

In the meantime, I'll consult the googles and see if Cortana can uninstall InfinityFabric.exe and see if that fixes the IMC.


4 minutes ago, zMeul said:

have you actually seen the abysmal latencies of the L3 cache?

and that cache needs to serve its own CCX at the same time

The latency of DDR3-1333 through my A8-4555M's IMC (~100ns) is lower than the CCX latency.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

1 minute ago, zMeul said:

have you actually seen the abysmal latencies of the L3 cache?

and that cache needs to serve its own CCX at the same time

You think system memory latency is any better?

 

Also, to point out how flawed your idea is:

 

CCX[0] requests data[0], which is also cached in CCX[1]'s L3, but instead gets it from system memory. It might not get the actual updated data from the CCX[1] cache, or the two CCX clusters could start working independently on their own copies of data[0].

 

Now, this is actually solved by the cache coherence protocol: cache lines have "states" (modified, owned, shared, invalid, and so on), but that won't work through system memory the way you so misleadingly think.

 

I would advise you to actually read up on what you are arguing about. From my point of view, you are spewing pure nonsense. It simply doesn't make sense in so many ways.
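For context, AMD's caches use the MOESI family of coherence states. Here is a minimal sketch (with a made-up helper name, source_for_read) of the key decision: a line held dirty in the other CCX's cache must be supplied cache-to-cache, because the DRAM copy is stale by definition:

```c
#include <stdio.h>

/* Minimal sketch of MOESI-style line states (the protocol family AMD
 * documents for its caches). Not real hardware logic; just the key
 * decision of who is allowed to supply the data for a read request. */
typedef enum { INVALID, SHARED, EXCLUSIVE, OWNED, MODIFIED } line_state;

/* If the remote CCX holds the line dirty (MODIFIED/OWNED), DRAM is
 * stale by definition, so the request must be serviced cache-to-cache
 * across the fabric; falling back to DRAM would return old data. */
static const char *source_for_read(line_state remote) {
    switch (remote) {
    case MODIFIED:
    case OWNED:
        return "remote CCX cache (DRAM copy is stale)";
    case EXCLUSIVE:
    case SHARED:
        return "either cache or DRAM (copies agree)";
    default:
        return "DRAM (no cached copy)";
    }
}

int main(void) {
    printf("remote=MODIFIED -> %s\n", source_for_read(MODIFIED));
    printf("remote=SHARED   -> %s\n", source_for_read(SHARED));
    printf("remote=INVALID  -> %s\n", source_for_read(INVALID));
    return 0;
}
```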


I'm actually not super well versed in the real-world effects of RAM latency vs. speed. Is this excerpted post from another source relatively accurate?

 

Spoiler

It may seem like it makes sense, but most of what he wrote is incorrect.

DDR3-2133 is a memory module rated for a transfer rate of 2133 megabits per second on each IO pin. Data on the IO pin is exchanged on both the rising edge and the falling edge of the DRAM IO bus reference clock. Thus, a module rated for DDR3-2133 has an IO bus frequency of 1066MHz, not 2133MHz. The measure of IO pin transfer rate is in transfers per second, not cycles per second, so a DDR transfer rate of 2133MT/s has a bus frequency of 1066MHz. The same holds true for DDR, DDR2, and DDR3.

The actual DRAM modules themselves operate at an even slower rate. What do DDR-400, DDR2-800, and DDR3-1600 all have in common? The core memory clock is 200MHz.

DDR-400 has an IO bus frequency of 200MHz, and a 1:1 ratio between the IO bus reference clock and the core memory clock.

DDR2-800 has an IO bus frequency of 400MHz, and a 2:1 ratio between the IO bus reference clock and the core memory clock.

DDR3-1600 has an IO bus frequency of 800MHz, and a 4:1 ratio between the IO bus reference clock and the core memory clock.

This pattern is an example of one of the most misunderstood aspects of modern DRAM. It has gotten wider and deeper -- which allows for greater capacity and larger prefetch buffers -- not faster, which would allow for lower latency. Latency measured in real time (nanoseconds) hasn't changed significantly over the past decade.

The most commonly cited DRAM performance metric aside from the IO bus transfer rate is the CAS latency. The CAS latency is the duration (in clock cycles) that it takes for a DRAM module performing a read operation to register a column address and produce a stable output on the IO pins. This delay is measured in IO bus clock cycles, which, as noted above, scale between generations. This is why DDR2-800C (CL4) has the same first-word delay of 10ns as DDR3-1600G (CL8). Where DDR3 really improves over its predecessors is in fourth- and eighth-word delays; this is accomplished through a deeper prefetch buffer (outside the scope of this explanation).

So it should be no surprise, then, that DDR3-2133 with CAS 9 is faster than DDR3-1600 with CAS 9. This is nice, but somewhat immaterial. Why is that?

x86 memory channel data paths are 64 bits wide, and DRAM modules are arranged to fit this. If you look at a DRAM module without the heatsink attached (e.g., this one) you'll see 8 integrated circuits on one side of the PCB. Each of these ICs is an 8-bit DRAM device, and they are all tied together with common command and address lines, but with their data lines formed into a 64-bit bus. Arranged together, they form what is called a rank. DIMMs that have 8 devices on each side, arranged into two ranks, are called dual-rank. Servers often use compact modules which cram 4 or even 8 ranks onto a single PCB for extreme amounts of memory.

The memory controller can only access one rank at a time, even if two or more ranks are installed in the channel on one or more DIMMs. However, the individual DRAM chips themselves are broken down into individual banks, each of which can be active and working independently at the same time. DDR3 modules have 8 individual banks on each IC. So, even with high DRAM latencies, the DRAM controller can keep the entire channel busy nearly 100% of the time simply by switching to another bank after a command has been issued, and switching back when the output is ready. Thus, high-speed memory almost always outperforms low-latency memory; this is especially true when high speed is accompanied by low latency (often at the cost of higher supply voltage).

This does not necessarily mean that DDR3-2133 is necessary or even a good idea. I run 32GiB of it in my PC but it requires some extra tweaking to obtain stability.

I hope that this answered your question
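The first-word arithmetic from the post above is easy to reproduce; a small sketch:

```c
#include <stdio.h>

/* First-word delay = CAS cycles / IO bus frequency.
 * The IO bus frequency is half the DDR transfer rate, because data
 * moves on both edges of the reference clock (double data rate). */
static double first_word_ns(int transfer_mts, int cas) {
    double io_bus_mhz = transfer_mts / 2.0;
    return cas / io_bus_mhz * 1000.0; /* cycles / MHz -> ns */
}

int main(void) {
    /* Examples from the post: both work out to 10 ns. */
    printf("DDR2-800  CL4: %.2f ns\n", first_word_ns(800, 4));
    printf("DDR3-1600 CL8: %.2f ns\n", first_word_ns(1600, 8));
    /* And the faster-at-the-same-CAS comparison: */
    printf("DDR3-1600 CL9: %.2f ns\n", first_word_ns(1600, 9));
    printf("DDR3-2133 CL9: %.2f ns\n", first_word_ns(2133, 9));
    return 0;
}
```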

 


39 minutes ago, zMeul said:

but here's the thing I'm saying: if it's slow, why not query the RAM instead of generating a cross-CCX cache query - it will be just as slow xD

No it won't, as L1/L2/L3 cache is much faster than RAM


5 minutes ago, leadeater said:

No it won't, as L1/L2/L3 cache is much faster than RAM

yes .. but that doesn't matter if the speed of the interconnect bus is limited to that of the MC

you can have the L3 caches running at a million MHz; if the bus connecting the CCXs runs at 1Hz, the data between CCXs will transfer at 1Hz xD

 

this is what you should be looking at:

[Graph: ping-amd.png - Ryzen core-to-core ping latencies]
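For reference, graphs like that are typically produced with a core-to-core "ping-pong" microbenchmark. Below is a minimal sketch, assuming Linux, GCC/Clang, and a (hypothetical) core numbering where cores 0-3 sit on one CCX and 4-7 on the other; two pinned threads bounce a flag through a shared cache line, so the round trip approximates twice the inter-core hop:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

/* Minimal core-to-core "ping-pong": two threads pinned to chosen
 * cores bounce a flag through a shared cache line; the round-trip
 * time approximates twice the inter-core (possibly inter-CCX) hop. */
#define ITERS 1000000
static _Atomic int flag = 0;

static void pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg) {
    pin_to_core(*(int *)arg);
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load(&flag) != 1) ; /* wait for ping */
        atomic_store(&flag, 0);           /* pong */
    }
    return NULL;
}

int main(void) {
    int ping_core = 0, pong_core = 4; /* assumed: different CCXs */
    pthread_t t;
    pthread_create(&t, NULL, ponger, &pong_core);
    pin_to_core(ping_core);

    struct timespec a, b;
    clock_gettime(CLOCK_MONOTONIC, &a);
    for (int i = 0; i < ITERS; i++) {
        atomic_store(&flag, 1);           /* ping */
        while (atomic_load(&flag) != 0) ; /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &b);
    pthread_join(t, NULL);

    double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
    printf("round trip: %.1f ns\n", ns / ITERS);
    return 0;
}
```

Re-running with both threads pinned to the same CCX (e.g. cores 0 and 1) versus opposite CCXs is what produces the step pattern in graphs like the one above.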


6 minutes ago, zMeul said:

yes .. but that doesn't matter if the speed of the interconnect bus is limited to that of the MC

you can have the L3 caches running at a million MHz; if the bus connecting the CCXs runs at 1Hz, the data between CCXs will transfer at 1Hz xD

That is ignoring the latency of accessing RAM and pulling it into the L1/L2/L3 caches. Inter-CCX communication does not have to do that; it can take data from one CCX's L3 cache and load it directly into the L2 cache of a different CCX across the bus.

 

Any action of pulling data from RAM is far more costly than using CPU cache, always (Intel or AMD).

 

Edit:

What you're doing is taking one piece of data and applying it across the board without actually looking at the full picture. Yes, I have seen the above graph before, and yes, cross-CCX latency is higher, but that only applies to that specific scenario.

 

Accessing RAM and getting the CPU to execute work can't be done directly; the CPU cache must always be used, and that is why your suggestion to do it from RAM "because it will be just as slow" can never be true.


18 minutes ago, MageTank said:

Can you explain this for a complete dumb dumb like myself? Is this Infinity Fabric thing responsible for this broken IMC?

AMD's Infinity Fabric is the name for the technology that links the twin quad-core modules into an octo-core CPU. It's not the cause of AMD's apparent IMC issues; rather, the issues with Infinity Fabric seem to be due to the IMC issues.


Just now, leadeater said:

That is ignoring the latency of accessing RAM and pulling it into the L1/L2/L3 caches. Inter-CCX communication does not have to do that; it can take data from one CCX's L3 cache and load it directly into the L2 cache of a different CCX across the bus.

 

Any action of pulling data from RAM is far more costly than using CPU cache, always (Intel or AMD).

look at the graph I posted above

 

and:

  • L3 latency queried from the same CCX is ~33ns
  • DDR4-2666 latency on Ryzen is ~90ns
  • inter-CCX latency is way, way over 100ns

Don't mind me, three dudes arguing about the CCX. I am just gonna sit here by myself and spew some random thoughts in my head until someone chimes in to correct me.

 

What if this Disney™ Infinity Fabric isn't just tied to the clock speed of the RAM itself? What if it's actually tied to the IMC's RTL itself? Huge stretch, I know, but bear with me. If the RTL itself has an impact on the Infinity Fabric (since RTL is dictated not only by your frequency, but by CAS, command rate, ranks, and nearly every tertiary timing), this would explain why AMD locked down the modification of any of these timings, as it could potentially break absolutely everything connected to them. I have zero evidence for this, and zero knowledge of the CCXs and this Infinity Fabric, but at this point I am grasping at straws.

 

There has to be a reason beyond laziness or oversight for them not to let us touch tertiary timings. We've had access to them for as long as I can remember, long before most of us even knew what they were for. Why leave them out now, unless there was a serious reason for doing so? They included them on Carrizo, and that wasn't long ago.

 

Man. This forum needs a shock collar that zaps me every single time I mention Ryzen's IMC. I know people have to be tired of hearing about it by now, lol. 


1 minute ago, MageTank said:

Don't mind me, three dudes arguing about the CCX. I am just gonna sit here by myself and spew some random thoughts in my head until someone chimes in to correct me.

 

What if this Disney™ Infinity Fabric isn't just tied to the clock speed of the RAM itself? What if it's actually tied to the IMC's RTL itself? Huge stretch, I know, but bear with me. If the RTL itself has an impact on the Infinity Fabric (since RTL is dictated not only by your frequency, but by CAS, command rate, ranks, and nearly every tertiary timing), this would explain why AMD locked down the modification of any of these timings, as it could potentially break absolutely everything connected to them. I have zero evidence for this, and zero knowledge of the CCXs and this Infinity Fabric, but at this point I am grasping at straws.

 

There has to be a reason beyond laziness or oversight for them not to let us touch tertiary timings. We've had access to them for as long as I can remember, long before most of us even knew what they were for. Why leave them out now, unless there was a serious reason for doing so? They included them on Carrizo, and that wasn't long ago.

 

Man. This forum needs a shock collar that zaps me every single time I mention Ryzen's IMC. I know people have to be tired of hearing about it by now, lol. 

I'm not tired of hearing about it; I've learned more from you about memory than from years of pestering some other "Enthusiasts."


2 minutes ago, zMeul said:

look at the graph I posted above

 

and:

  • L3 latency queried from the same CCX is ~33ns
  • DDR4-2666 latency on Ryzen is ~90ns
  • inter-CCX latency is way, way over 100ns

RAM + CCX latency + L2 latency is the worst case.

RAM + L2 latency is the best case.

But there is no "direct from RAM", so you need to redo your calculations with the full paths of what actually happens. Also, the L3 is a victim cache, so data from RAM goes into the L2 cache, not the L3; data that falls out of L2 moves up to L3.


3 minutes ago, leadeater said:

RAM + CCX latency + L2 latency is the worst case.

RAM + L2 latency is the best case.

But there is no "direct from RAM", so you need to redo your calculations with the full paths of what actually happens. Also, the L3 is a victim cache, so data from RAM goes into the L2 cache, not the L3; data that falls out of L2 moves up to L3.

maybe, maybe not

I'm still expecting that block diagram of Zen, because from what I'm getting, inter-CCX data gets transferred through the L3 - otherwise it makes no bloody sense based on the results we're seeing


1 minute ago, zMeul said:

maybe, maybe not

I'm still expecting that block diagram of Zen, because from what I'm getting, inter-CCX data gets transferred through the L3 - otherwise it makes no bloody sense based on the results we're seeing

Yeah, it may or may not pass through the L3 cache on the way between CCXs; I don't know. But what I was really pointing out is that it being as slow as just getting it from RAM isn't the case, because of the way data has to be loaded in from RAM: even if the RAM access latency were lower than the L3 cache's, the full path would always be slower than going directly through the caches, even across the interconnect.

 

Passing data between CCXs through RAM would be a double operation: cache to RAM, then RAM back to cache. That means paying the RAM latency twice, even under the most generous assumptions.
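Putting the thread's own rough figures on that (assumed numbers from earlier posts, not new measurements):

```c
#include <stdio.h>

/* Back-of-the-envelope comparison using the rough figures quoted in
 * this thread (assumed, not measured here):
 *   ~90 ns  DRAM access on Ryzen with DDR4-2666
 *   ~100 ns cross-CCX transfer over the fabric
 * Bouncing data through RAM is two operations (write-back, then read),
 * so even with these pessimistic fabric numbers it cannot win. */
int main(void) {
    double dram_ns = 90.0, fabric_ns = 100.0;
    printf("direct cross-CCX transfer:   ~%.0f ns\n", fabric_ns);
    printf("via RAM (write-back + read): ~%.0f ns\n", 2 * dram_ns);
    return 0;
}
```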


2 minutes ago, leadeater said:

Yeah, it may or may not pass through the L3 cache on the way between CCXs; I don't know. But what I was really pointing out is that it being as slow as just getting it from RAM isn't the case, because of the way data has to be loaded in from RAM: even if the RAM access latency were lower than the L3 cache's, the full path would always be slower than going directly through the caches, even across the interconnect.

 

Passing data between CCXs through RAM would be a double operation: cache to RAM, then RAM back to cache. That means paying the RAM latency twice, even under the most generous assumptions.

makes sense


18 minutes ago, MageTank said:

What if this Disney™ Infinity Fabric isn't just tied to the clock speed of the RAM itself? What if it's actually tied to the IMC's RTL itself? Huge stretch, I know, but bear with me. If the RTL itself has an impact on the Infinity Fabric (since RTL is dictated not only by your frequency, but by CAS, command rate, ranks, and nearly every tertiary timing), this would explain why AMD locked down the modification of any of these timings, as it could potentially break absolutely everything connected to them. I have zero evidence for this, and zero knowledge of the CCXs and this Infinity Fabric, but at this point I am grasping at straws.

That is actually a really good point, and it sounds like a plausible reason why we can't change them.


Just now, leadeater said:

That is actually a really good point, and it sounds like a plausible reason why we can't change them.

Great. Now I wish they'd let me touch them even more x.x

 

Sure, I might risk the total destruction of everything tied to the IF, but if it's actually related that deeply, then it would mean memory overclocking matters even more, and should (in theory) yield even more performance. High risk, high reward. The best kind of overclocking.

