
AMD speaks on W10 scheduler and Ryzen

2 minutes ago, Valentyn said:

Hardware.fr tested two scenarios for the quad-core: one CCX with 4 cores vs. two CCXs with 2+2 cores.

20% improvement in BF1 using a single CCX....

3-8% in other games.

Which kind of makes me wonder... why did AMD not make a single 8-core die and instead decide to make a dual quad core die? WHYYYY? Ryzen would have been THE CPU to buy if it didn't have this flaw!


13 hours ago, zMeul said:

source: https://community.amd.com/community/gaming/blog/2017/03/13/amd-ryzen-community-update?sf62107357=1

 

In light of AMD's own investigation and other third-party investigations into Windows 10's ability to properly identify Ryzen CPUs and assign threads accordingly, AMD's Robert Hallock posted on AMD's blog:

 

 

so... ladies and gentlemen - do not expect AMD / Microsoft to release a W10 patch that will magically fix Ryzen's poor gaming performance - it is what it is and nothing will change that

 

however, AMD plans to release a patch that addresses the power plan issues in April

 

so why does W7 perform better?

because W7 is the better OS ;)

there's one thing the scheduler could do to improve performance though, which is to keep threads with a lot of crosstalk on the same CCX. But that would be adding a feature to the scheduler, not fixing a bug.
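
This isn't something the Windows scheduler does on its own, but an application (or a third-party tool) can approximate it today with explicit thread affinity. A minimal Win32 sketch, assuming logical processors 0-7 map to the first CCX of an 8C/16T Ryzen 7 with SMT enabled - the real mapping should be verified with GetLogicalProcessorInformationEx or Coreinfo before relying on it:

```c
#include <windows.h>

/* Assumed mask: logical processors 0-7 = the four cores (8 SMT threads) of CCX0
 * on an 8C/16T Ryzen 7. Verify the topology before relying on this. */
#define CCX0_MASK ((DWORD_PTR)0xFF)

static DWORD WINAPI worker(LPVOID arg)
{
    (void)arg;
    /* Pin this worker to CCX0 so it shares an L3 with its sibling threads
     * and never pays the cross-CCX penalty when exchanging data with them. */
    SetThreadAffinityMask(GetCurrentThread(), CCX0_MASK);
    /* ... chatty, shared-data work would go here ... */
    return 0;
}

int main(void)
{
    HANDLE threads[4];
    for (int i = 0; i < 4; i++)
        threads[i] = CreateThread(NULL, 0, worker, NULL, 0, NULL);
    WaitForMultipleObjects(4, threads, TRUE, INFINITE);
    for (int i = 0; i < 4; i++)
        CloseHandle(threads[i]);
    return 0;
}
```

A scheduler could in principle do the same thing automatically for threads it detects sharing data heavily, which is exactly the "new feature, not bug fix" distinction above.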


9 hours ago, zMeul said:

no, you should not trust my math and do your own - the source of the data is in the OP, easily accessible

the point was that I didn't do the IPC % calculations based on PCPer's data but on "Slit's" results over at Anand's forums, and I arrived at a similar ~14% Kaby Lake IPC increase over Haswell

 

and that shows Anand's numbers are totally out of whack

I'm not necessarily saying that he's wrong, but I tend to trust the benchmarks of well-established and trusted sources over a random forum-poster.  I can't think of any possible ulterior motives, and I'm sure there aren't any, but that is the only place I've seen these results, and I searched for quite a while.


10 hours ago, djdwosk97 said:

I meant compared to Intel -- 40 vs 80.

My bet would be because of Intel's ring bus. The cores in each CCX cluster are more tightly coupled.


6 hours ago, Hunter259 said:

My apologies. No, they are a single die, but the way that looks to me, when data is transferred between CCXs there is going to be an increase in latency just due to the design.

That's true, but it also decreases latency between cores within a CCX. It has pros and cons.

 

1 hour ago, PCGuy_5960 said:

Lel. Power consumption has nothing to do with TDP

Power consumption is correlated with TDP. Power creates heat. TDP is based on heat.
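
As a rough back-of-the-envelope illustration (not AMD's or Intel's formal TDP definition, which is a cooler design target rather than a hard power cap): nearly all of the power a CPU draws ends up as heat, and the dynamic part scales roughly as

```latex
P_{\text{dyn}} \approx C_{\text{eff}} \, V^{2} f
```

so raising sustained power draw raises the heat the cooler has to move, which is what the TDP number budgets for.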

 

1 hour ago, PCGuy_5960 said:

Ryzen's Gaming performance is mostly an issue with its architecture:

This is from PCPer's article. I have one question, why did AMD have to make Ryzen 7 a dual quad core die? Why did they not make an 8 core die and call it a day? Had they done that, Ryzen would be an AMAZING CPU for everyone. Now, it is only for content creators who game. 

Ryzen isn't a dual quad core die. It is a single 8 core die.


2 minutes ago, Tomsen said:

Power consumption is correlated with TDP. Power creates heat. TDP is based on heat.

A 95W TDP doesn't mean that the CPU consumes 95W. That is the sense in which I meant it.

3 minutes ago, Tomsen said:

Ryzen isn't a dual quad core die. It is a single 8 core die.

It is a die with two quad-core CCXs, so it is like a dual quad core die.


1 minute ago, PCGuy_5960 said:

A 95W TDP doesn't mean that the CPU consumes 95W. That is the sense in which I meant it.

It is a die with two quad-core CCXs, so it is like a dual quad core die.

I never said it would correlate 1 to 1. But increase power and you will increase TDP.

No, it is nothing like a dual quad core die. MCM is an entirely different concept, one that we will see with Naples.


3 minutes ago, Tomsen said:

No, it is nothing like a dual quad core die. MCM is an entirely different concept, one that we will see with Naples.

OK then, but still, why did they choose this dual quad core CCX die and not a simple 8 core die?


7 hours ago, zMeul said:

does anyone have a block diagram for Ryzen?

I'm searching for something rather detailed, like the one for Bulldozer:

[Image: AMD Bulldozer block diagram (8-core CPU)]

[Attached image: block diagram]

Which I got from here: http://www.linleygroup.com/mpr/article.php?id=11666

1 hour ago, PCGuy_5960 said:

Ryzen's Gaming performance is mostly an issue with its architecture:

This is from PCPer's article. I have one question, why did AMD have to make Ryzen 7 a dual quad core die? Why did they not make an 8 core die and call it a day? Had they done that, Ryzen would be an AMAZING CPU for everyone. Now, it is only for content creators who game. 

Simplicity. They had the 4 core parts done, and building an interconnect to connect them would be WAY easier than building the L3 cache essentially from the ground up and having to replicate the connections between the other cores as well. Cache gets many times more complicated when you increase its size, and I'm willing to bet that if AMD had decided to make an 8-core Zen CPU that wasn't two CCXs, it would've taken at least an extra month or two and we wouldn't have seen the first Zen CPUs until Computex. They had a tight timeframe and needed to get Zen out, so they had to settle on this. I wouldn't be surprised, though, if in Zen 2 or something we see them make it a true 8-core design rather than this.


6 minutes ago, PCGuy_5960 said:

OK, then, but still, why did they choose this dual quad core CCX die and not a simple 8 core die?

There are probably a ton of reasons behind it. Do consider that I'm not working with or for AMD, so I don't actually know the reasoning behind the decision; all I can do is speculate based on the pros and cons we observe.

 

1) Scalability: It makes it much easier to scale from 4 to 8, 16, 32 cores.

 

2) R&D: This goes in line with scalability. They design the IP of one CCX, and then they can reuse it. That makes it incredibly cheaper to make multiple dies without much further R&D. It is a much more modular design than Intel's.

 

3) Potential performance benefits: The cores in each CCX will be more tightly coupled, so the interconnect latency between them should be lower. Cache lookups inside the CCX should be faster as well. Imagine the 32-core Naples chip with its 64MB of L3 cache; a lookup could take much longer than if you split it up into 8 slices of 8MB.
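
A rough way to see point 3 in practice is a core-to-core ping-pong microbenchmark: two threads bounce a cache line between two pinned cores, and you compare the round trip inside one CCX with the round trip across CCXs. A minimal Win32 sketch; the logical-processor numbers (0 and 2 on the same CCX, 0 and 8 on the other CCX of an 8C/16T part) are assumptions about the topology, not something AMD guarantees:

```c
#include <windows.h>
#include <stdio.h>

#define ITERS 200000

static volatile LONG flag = 0;       /* shared cache line being bounced around */
static DWORD_PTR g_peer_mask;

/* Responder: waits for the ping (flag == 1) and answers with a pong (flag = 2). */
static DWORD WINAPI pong(LPVOID arg)
{
    (void)arg;
    SetThreadAffinityMask(GetCurrentThread(), g_peer_mask);
    for (int i = 0; i < ITERS; i++)
        while (InterlockedCompareExchange(&flag, 2, 1) != 1)
            ; /* spin until the ping arrives */
    return 0;
}

/* Average ping-pong round trip, in nanoseconds, between two pinned logical processors. */
static double measure(DWORD_PTR a_mask, DWORD_PTR b_mask)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    SetThreadAffinityMask(GetCurrentThread(), a_mask);
    g_peer_mask = b_mask;
    flag = 0;
    HANDLE t = CreateThread(NULL, 0, pong, NULL, 0, NULL);

    QueryPerformanceCounter(&t0);
    for (int i = 0; i < ITERS; i++) {
        InterlockedExchange(&flag, 1);                       /* ping */
        while (InterlockedCompareExchange(&flag, 0, 2) != 2)
            ;                                                /* wait for the pong */
    }
    QueryPerformanceCounter(&t1);

    WaitForSingleObject(t, INFINITE);
    CloseHandle(t);
    return (double)(t1.QuadPart - t0.QuadPart) / (double)freq.QuadPart / ITERS * 1e9;
}

int main(void)
{
    printf("same CCX : %.0f ns round trip\n", measure((DWORD_PTR)1 << 0, (DWORD_PTR)1 << 2));
    printf("cross CCX: %.0f ns round trip\n", measure((DWORD_PTR)1 << 0, (DWORD_PTR)1 << 8));
    return 0;
}
```

If the two numbers come out very different, that gap is the cross-CCX penalty everyone is arguing about.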


3 minutes ago, DocSwag said:

Simplicity. They had the 4 core parts done, and building an interconnect to connect them would be WAY easier than building the L3 cache essentially from the ground up and having to replicate the connections between the other cores as well. Cache gets many times more complicated when you increase its size

This could be a good reason....

 


9 minutes ago, Tomsen said:

1) Scalability: It makes it much easier to scale from 4 to 8, 16, 32 cores.

True :D

9 minutes ago, Tomsen said:

2) R&D: This goes in line with scalability. They design the IP of one CCX, and then they can reuse it. That makes it incredibly cheaper to make multiple dies without much further R&D. It is a much more modular design than Intel's.

I guess this is very important to AMD, considering that Intel's lowest end HEDT CPUs cost more than Ryzen's highest end CPUs....

9 minutes ago, Tomsen said:

3) Potential performance benefits: The cores in each CCX will be more tightly coupled, so the interconnect latency between them should be lower. Cache lookups inside the CCX should be faster as well. Imagine the 32-core Naples chip with its 64MB of L3 cache; a lookup could take much longer than if you split it up into 8 slices of 8MB.

But because of this, performance in some workloads (mostly gaming) suffers....


8 minutes ago, PCGuy_5960 said:

True :D

I guess this is very important to AMD, considering that Intel's lowest end HEDT CPUs cost more than Ryzen's highest end CPUs....

But because of this, performance in some workloads (mostly gaming) suffers....

When you step out of the 4-core realm, your priority isn't gaming first; that's usually your side thing... Your primary objective at 6+ cores is rendering, compiling, you know, stuff that those kinds of CPUs are very, very good at but a quad will just take forever to do, since you have high-clocked cores but lack the resources to streamline it better.


2 minutes ago, XenosTech said:

When you step out of the 4-core realm, your priority isn't gaming first; that's usually your side thing... Your primary objective at 6+ cores is rendering, compiling, you know, stuff that those kinds of CPUs are very, very good at but a quad will just take forever to do, since you have high-clocked cores but lack the resources to streamline it better.

True, but up to a point. Many people bought Intel HEDT CPUs, because they wanted or they needed more PCIe lanes and not because they needed 6+ cores.... 


14 minutes ago, PCGuy_5960 said:

True :D

I guess this is very important to AMD, considering that Intel's lowest end HEDT CPUs cost more than Ryzen's highest end CPUs....

But because of this, performance in some workloads (mostly gaming) suffers....

It is even more important when you consider how involved AMD was in semi-custom. It might allow them to save a few million dollars on each semi-custom design they make. That can be critical, as one of AMD's previous issues was that they didn't have enough cash on hand to make the designs they needed at the time.

 

The question is how much of this performance penalty can be negated with proper scheduling support.


Just now, PCGuy_5960 said:

True, but up to a point. Many people bought Intel HEDT CPUs, because they wanted or they needed more PCIe lanes and not because they needed 6+ cores.... 

That too, but those are still average gaming chips, to be honest... Intel designed them to be all-round good at the end of the day, but unless devs actually sit down and leverage the resources those chips are packing, quads with their high single-core performance will be the kings of gaming.


3 hours ago, zMeul said:

MS wouldn't even manufacture and test the darn things if there wasn't a real use for them

 

a few years back, would you even have imagined GPUs being good at compute? and I mean really good at it

Why not? Intel invested a lot of money in their iGPUs, and now they are going to team up with AMD anyway. Just because something doesn't have an immediate goal doesn't mean it's pointless. Mapping DNA is a perfect example: it took 10+ years before they documented a complete human DNA double helix. Is that knowledge itself useful? Not really. Is it useful for other projects? Yes.

 

It's actually very likely that there's no real use for them at the moment, and it's possible they will never have a use and won't end up in mass production. However, that doesn't mean it's wasted money. It's perfectly possible they learned a lot from it and will use that knowledge in future products.

 

And it's not only hardware; look at Mantle and Vulkan. Was Mantle a success? No. Did it help Vulkan a lot? Yes. Did it give AMD a lot of knowledge? Yes. So even though Mantle failed, it did give them something they used later. Actually, I would say it's still helping AMD today :)


7 minutes ago, XenosTech said:

That too, but those are still average gaming chips, to be honest... 

WRONG. HEDT CPUs are AMAZING chips for gaming and will outperform every i5 and older i7s (4790K and older)

At worst they are just a few frames slower than a 6700K or a 7700K. 


6 minutes ago, PCGuy_5960 said:

WRONG. HEDT CPUs are AMAZING chips for gaming and will outperform every i5 and older i7s (4790K and older)

At worst they are just a few frames slower than a 6700K or a 7700K. 

So you're going to pay all that extra cash just for gaming ? HEDT cpus are avg gaming chips when compared to the consumer i7's only diff is the HEDT chips are anywhere near 100% usage in a gaming session


1 minute ago, XenosTech said:

So you're going to pay all that extra cash just for gaming ?

No. All I am saying is that they are not average for gaming. Are they worth the price premium? Heck no.

2 minutes ago, XenosTech said:

HEDT cpus are avg gaming chips when compared to the consumer i7's

No they are not, did you watch the video?

2 minutes ago, XenosTech said:

only diff is the HEDT chips are nowhere near 100% usage in a gaming session

FIFY ;)


4 hours ago, zMeul said:

-snip-

AMD is designing ARM and x86 CPUs for the server market.

Also, the MCM design isn't architectural; it's a conscious design choice. AMD's implementation is much better than the Core 2 Quad's, as the CCXs interact through the Infinity Fabric, which itself runs at half the speed of the memory, which is why faster memory leads to such dramatic performance improvements.
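
As a rough worked example of that last point, assuming the Zen fabric clock tracks the memory clock, i.e. half the DDR4 transfer rate:

```latex
f_{\text{fabric}} \approx \tfrac{1}{2} \cdot \text{DDR4 rate}:\quad
\text{DDR4-2400} \rightarrow 1200\ \text{MHz}, \qquad
\text{DDR4-3200} \rightarrow 1600\ \text{MHz}
```

so moving from 2400 to 3200 memory gives the cross-CCX link roughly a third more clock, which is consistent with the memory-scaling gains mentioned above.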


12 hours ago, Tomsen said:

It is not impossible for Microsoft to optimize for the Zen architecture. For example, keep all the processes spawned from one program on a single CCX.

That might sound like a simple and efficient solution, but when you remember how many threads are actually running in each process, you quickly realize that keeping them on a single CCX would most likely cause more problems than it solves. I mean, even a rather simple program like Dropbox spawns close to 200 threads. Limiting each process to a single CCX would most likely result in lower performance far more often than it would increase it, and assigning it by hand is far too time-consuming.
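
For anyone who wants to sanity-check those thread counts on their own machine, a quick Win32 sketch that counts the threads of a given process via a Toolhelp snapshot (pass a PID from Task Manager; the ~200 figure for Dropbox is an illustration, the exact number will vary by machine and version):

```c
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>
#include <stdlib.h>

/* Count the threads that belong to the process with the given PID. */
static int count_threads(DWORD pid)
{
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return -1;

    THREADENTRY32 te;
    te.dwSize = sizeof(te);

    int count = 0;
    if (Thread32First(snap, &te)) {
        do {
            if (te.th32OwnerProcessID == pid)
                count++;
        } while (Thread32Next(snap, &te));
    }
    CloseHandle(snap);
    return count;
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    DWORD pid = (DWORD)strtoul(argv[1], NULL, 10);
    printf("process %lu has %d threads\n", (unsigned long)pid, count_threads(pid));
    return 0;
}
```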

 

The only way Microsoft could improve the situation that I can think of, would be to build an AI into the scheduler which can analyze the characteristics of every program it runs, and do comparisons of when it is and isn't beneficial to keep it to a single CCX. But that has a ton of issues as well.

 

 

12 hours ago, techstorm970 said:

With that logic, you could say that it also makes no sense to buy Intel Extreme processors.

But... It doesn't make any sense. There have been very few Extreme edition processors worth buying, even if you have had money to get them.

Extreme Edition is mostly for e-peen.


5 minutes ago, PCGuy_5960 said:

No. All I am saying is that they are not average for gaming. Are they worth the price premium? Heck no.

No they are not, did you watch the video?

FIFY ;)

watched that ages ago... To me i7's are avg chips for gaming.

 

I'm looking at it like this:

 

When you look at the requirements for something, do you go with the recommended config or the one above it? Most people should know the recommended config is, in most cases, the bare minimum to get decent performance with a few things dialed back, whereas you want to floor it without compromising your settings.


9 minutes ago, XenosTech said:

watched that ages ago... To me i7's are avg chips for gaming.

Why? i7s are the best gaming CPUs.... And you will notice an improvement when getting an i7 instead of an i5

9 minutes ago, XenosTech said:

When you look at the requirements for something, do you go with the recommended config or the one above it? Most people should know the recommended config is, in most cases, the bare minimum to get decent performance with a few things dialed back, whereas you want to floor it without compromising your settings.

Most games recommend an i5, so you should get an i7 - is that what you are trying to tell me? You are contradicting yourself here.......


9 minutes ago, LAwLz said:

That might sound like a simple and efficient solution, but when you remember how many threads are actually running in each process, you quickly realize that keeping them on a single CCX would most likely cause more problems than it solves. I mean, even a rather simple program like Dropbox spawns close to 200 threads. Limiting each process to a single CCX would most likely result in lower performance far more often than it would increase it, and assigning it by hand is far too time-consuming.

 

The only way Microsoft could improve the situation that I can think of, would be to build an AI into the scheduler which can analyze the characteristics of every program it runs, and do comparisons of when it is and isn't beneficial to keep it to a single CCX. But that has a ton of issues as well.

 

 

But... It doesn't make any sense. There have been very few Extreme edition processors worth buying, even if you have had money to get them.

Extreme Edition is mostly for e-peen.

Why couldn't this already be done by a third party, like Inspector / Flawless Widescreen / etc.?

