Recent W10 patch (March 15) shows some improvements for Ryzen

zMeul

Does 1 core with 2 threads give better performance, or 2 cores on different CCXs?


Just now, Ezio Auditore said:

Does 1 core with 2 threads give better performance, or 2 cores on different CCXs?

If you are running 2 threads at the same time, running them on 2 different cores is superior to running both on the same core. Unless the 2 cores are located on different planets.
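(If you want to convince yourself, it's easy to time: run two CPU-bound tasks back to back on one core, then simultaneously on two. A rough Python sketch below, with an arbitrary loop size; it uses processes rather than threads to sidestep Python's GIL. On any multi-core chip the parallel run should finish in roughly half the time.)

import time
from multiprocessing import Process

def spin(n=30_000_000):
    # Busy loop standing in for a CPU-bound task
    x = 0
    for _ in range(n):
        x += 1

if __name__ == "__main__":
    t0 = time.perf_counter()
    spin(); spin()                      # one core runs both tasks in sequence
    serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    workers = [Process(target=spin) for _ in range(2)]
    for w in workers: w.start()
    for w in workers: w.join()          # both tasks run at once on separate cores
    parallel = time.perf_counter() - t0

    print(f"serial: {serial:.2f}s  parallel: {parallel:.2f}s")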


2 minutes ago, SpaceGhostC2C said:

If you are running 2 threads at the same time, running them on 2 different cores is superior to running both on the same core. Unless the 2 cores are located on different planets.

Even with the latency? 


19 minutes ago, Ezio Auditore said:

Even with the latency? 

I believe the latency is rarely greater than having to wait for the other task to finish. That's why there's a huge difference between 1 core, 2 cores, and 4: even the latency of a motherboard with 2 sockets is less than waiting for tasks to finish 1 or 2 at a time. But once you have 4 cores, if your IPC is good enough, a task will rarely wait longer on core allocation than it would on latency. So for further gains, either the task has to be complicated enough to overwhelm what fast cores can do on their own, or the software needs to specifically target more than 4 cores so it saves even more time by spreading across more threads.

 

But at that point, the way I understand it, it gets really difficult for the software to allocate work any faster than what you already get from 4 fast cores with 8 threads.
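(To put rough numbers on that intuition, here's a toy model in Python. The figures are invented for illustration: tasks costing milliseconds against a cross-core handoff costing about a microsecond. The handoff penalty all but vanishes next to the cost of running tasks one or two at a time.)

# Toy model: 8 equal tasks on varying core counts, with a fixed
# communication penalty whenever more than one core is involved.
task_ms = 10.0     # hypothetical time for one task on one core
hop_ms = 0.001     # hypothetical cross-core/cross-CCX handoff cost
tasks = 8

for cores in (1, 2, 4, 8):
    waves = -(-tasks // cores)         # ceiling division: batches that run in parallel
    total = waves * task_ms + (hop_ms if cores > 1 else 0.0)
    print(f"{cores} core(s): ~{total:.3f} ms")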


35 minutes ago, Ezio Auditore said:

Even with the latency? 

Yes. I mean, the only way for the communication to be extremely costly is for the task itself to be trivially tiny. We are discussing an interconnect that's still faster than the communication between CPU and RAM. And that's of course assuming the two threads aren't completely independent, but that some core will eventually need to see the results of all threads. Even then, they would need to consult each other's output very often for it to matter.

 

You are basically comparing a worker doing two tasks vs. two workers doing one task each. If the task is putting a stamp on a paper, and then putting the paper on a pile, only with both stamps on it, it would be efficient to use two workers if they sit next to each other. If they are on different floors, you may prefer one guy putting both stamps. On the other hand, if they are putting the same stamp, and you only need one to go in the pile, you can have each worker doing half the pile, and one only coming downstairs at the end of the day to bring his half. In that case, using two workers would make you 1.999999999 times as fast.
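(The analogy translates directly into a back-of-envelope calculation. A sketch in Python, with made-up costs: the same pile of work split between two workers, comparing a handoff after every item against a single handoff at the end of the day.)

items = 1_000_000
work_ns = 100      # hypothetical cost to process one item
sync_ns = 500      # hypothetical cost of one cross-core handoff

serial = items * work_ns                   # one worker stamps everything
fine = items * (work_ns / 2 + sync_ns)     # two workers, handoff after every item
coarse = items * work_ns / 2 + sync_ns     # two workers, one handoff at the end

print(f"serial:            {serial / 1e6:.1f} ms")   # ~100 ms
print(f"fine-grained sync: {fine / 1e6:.1f} ms")     # ~550 ms, slower than serial
print(f"coarse sync:       {coarse / 1e6:.1f} ms")   # ~50 ms, the 1.999999999x case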


3 hours ago, Evanair said:

I hope they don't. Having only one company make the hardware for systems, Nvidia for video and Intel for CPUs (after all, AMD going out of business would kill ATI), would be a godsend for programmers. You would be able to program for one platform and one video platform.

Hardware might be more stagnant, but software utilization of the chips and design would improve.

No, it would not. Software would stagnate, as there would be little reason for Nvidia and Intel to spend millions upon millions every year sending optimization devs out to software devs. And why do they send these optimization devs in the first place? To retain their edge over the competition. If AMD goes under, everything will slowly stagnate in the PC gaming market. HPC and enterprise will always optimize, but they would do that regardless of monopoly or not, because it is in their best interest to get 100% out of the millions they spent on hardware.

 

The biggest risk of a monopoly, however, is hardware design quirks. If Intel/Nvidia puts out a CPU/GPU with a glaring flaw and there is no competition, they CAN be so bold as to tell their customers "sorry, but we won't be able to fix this shit. Wait for next generation; in the meantime, please try to avoid doing XYZ so you won't trigger the issue".

THAT would be devastating. But it could happen if there were no competition.


3 hours ago, MageTank said:

That was not my point. My point was, people heralded DX12 as the savior of gaming, giving developers even more access to hardware, and potentially bringing slower CPUs to life again. Instead, we got the exact same lazily programmed titles that squandered their resources, performing worse in some cases and barely the same or slightly better in others. I have yet to see a DX12 title that showed a substantial performance improvement over DX11. If someone knows of one, I'd appreciate it.

 

It does not matter what you do. You can unify the entire platform, make it one CPU architecture, one GPU architecture, switch everything into a proprietary closed system, and you will still end up with rushed, incomplete products because developers are either lazy, overworked/rushed, or lack the budget to meet their goals. 

 

Oh, and let's humor that dream world of yours. You understand that with only one GPU company (Nvidia), your hardware gets made obsolete on a yearly basis, right? We now see the GTX 960 beating a GTX 780. A card that was barely faster than the GTX 760 at launch, and barely matched the 770 during its lifespan, all of a sudden beats the 780? Yeah, that's not a world I want to live in.

 

You are asking for a stagnant hardware market and anti-consumer prices to make it easier on programmers who will NEVER have it easy, thanks to the demands of their studio executives. Surely I am not the only one who thinks this sounds silly, right?

I'm getting real tired of hearing that the reason games don't use more cores is that developers are simply lazy or rushed. The majority of games simply don't need the extra cores. Plain and simple. You can only thread something so much before you start hurting its performance, and the majority of games are like this. DX12 opened the door to extremely low-level access, but it really isn't going to help that much. DX11 is actually a pretty damn good API for performance when the programmers know their way around it. Could DX12 gain some more performance? Maybe. I honestly have no idea, but it sure as hell isn't going to be some kind of savior.

Also, it's not a monopoly if Intel is the only CPU manufacturer on the desktop market, people. Stop saying Intel only keeps AMD alive to keep the big bad government away. You are only a monopoly if you abuse your position to disrupt consumer purchasing. Intel would have to crush or buy every single company that tries to enter the market to be considered a monopoly that needs to be stopped.
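(For what it's worth, the over-threading point is easy to demonstrate. A rough Python sketch with an arbitrary workload: splitting a fixed amount of work across more and more threads eventually just buys you spawn and scheduling overhead. Python's GIL keeps these threads from running truly in parallel, which makes the overhead-only effect easier to see; the same cliff exists in any language, just further out.)

import time
import threading

WORK = 2_000_000   # arbitrary fixed amount of total work

def chunk(n):
    # Each thread gets an equal slice of the work
    x = 0
    for _ in range(n):
        x += 1

for count in (1, 2, 8, 32, 128):
    t0 = time.perf_counter()
    ts = [threading.Thread(target=chunk, args=(WORK // count,))
          for _ in range(count)]
    for t in ts: t.start()
    for t in ts: t.join()
    print(f"{count:4d} threads: {time.perf_counter() - t0:.3f}s")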


55 minutes ago, Prysin said:

They CAN be so bold as to tell their customers "sorry, but we won't be able to fix this shit. Wait for next generation; in the meantime, please try to avoid doing XYZ so you won't trigger the issue".

You're joking, right? Companies do that now.


Holy shit, this is amazing! Absolutely mind-blowing!

 

People actually believe anything coming from that garbage website? Does nobody here have any prior experience with Bits and Chips? They are extremely AMD-biased and constantly lie and twist the truth.

How anyone can look at this benchmark and go "yeah, that's probably true" is truly amazing. Did none of you learn to be source-critical at all?

And what's even more hilarious is that some people are saying PCPer was wrong and now looks like idiots.

 

I honestly would not be surprised if the patch actually lowered performance and BnC still posted that it increased it. That's the level of trust I have in them.

 

 

Edit: they also say in another tweet that no other program has seen any improvement after the patch; it is isolated to the Unreal benchmark. So my guess is that they just botched the pre-patch benchmark and the post-patch result is what they should have been getting all along, similar to what someone else in this thread showed.


13 minutes ago, LAwLz said:

People actually believe anything coming from that garbage website? Does nobody here have any prior experience with Bits and Chips? They are extremely AMD-biased and constantly lie and twist the truth.

Hell if I know who they are... shoot me.


IDK, there might be a slight improvement from Windows updates, since there were issues with power management and probably some slight tweaks to make, but this performance boost is just impossible. MAYYYBE it could happen in some rare, isolated instances, but I highly doubt it.

 

I earlier expected around an 8-10% average performance boost in games the R7 lineup performed unusually badly in, such as Total War: Warhammer and Fallout 4, and I still stand by that statement (with little to no effect on games in which the R7 performed as expected).


10 hours ago, Evanair said:

As a follow-up, Jay2cents just posted a video of the 1800X vs the 5960X in a head-to-head workstation render. Intel won by an 8-10% margin at equal clock speed, and the Intel system can be pushed another 500MHz faster as well. Not bad for a 3-year-old design. Makes you wonder what Intel will release next when they update the X299 market.

That wasn't a good test; it didn't have the same conditions for both systems, as one was using dual 1080s and the other dual Titan Xs (Maxwell).


7 hours ago, Hunter259 said:

I'm getting real tired of hearing that the reason games don't use more cores is that developers are simply lazy or rushed. The majority of games simply don't need the extra cores. Plain and simple. You can only thread something so much before you start hurting its performance, and the majority of games are like this. DX12 opened the door to extremely low-level access, but it really isn't going to help that much. DX11 is actually a pretty damn good API for performance when the programmers know their way around it. Could DX12 gain some more performance? Maybe. I honestly have no idea, but it sure as hell isn't going to be some kind of savior.

Also, it's not a monopoly if Intel is the only CPU manufacturer on the desktop market, people. Stop saying Intel only keeps AMD alive to keep the big bad government away. You are only a monopoly if you abuse your position to disrupt consumer purchasing. Intel would have to crush or buy every single company that tries to enter the market to be considered a monopoly that needs to be stopped.

You must be getting real tired of reading, because nowhere in any of the posts I made did I say that they don't "use more cores" because they are lazy or rushed. You completely ignored my point and went off on an unrelated tangent. My point was that closing the ecosystem down to a single CPU and GPU manufacturer won't matter if developers remain lazy and rushed. We won't see an improvement in CPU/GPU utilization if studio executives keep sticking their noses into the development process and setting unrealistic goals.

 

I understand very little about programming, but I do know of Amdahl's law. You will not find me pretending that all things can be threaded easily, because I know that's not the case. I also only stated DX12 was "the savior of gaming" because literally every blog/news outlet labeled it as such. 

http://bgr.com/2014/04/07/xbox-one-directx-12-update/

http://hexus.net/gaming/news/pc/78449-directx-12-benchmark-slides-imply-significant-frame-rate-boost/

http://www.digitaltrends.com/computing/intel-benchmarks-directx-12-looks-faster-efficient-dx11/
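(To put a number on the Amdahl's law point: if only a fraction p of the work can run in parallel, n cores give a speedup of 1 / ((1 - p) + p/n). A quick Python sketch, with p chosen purely for illustration:)

def amdahl(p, n):
    # Speedup on n cores when fraction p of the work is parallelizable
    return 1 / ((1 - p) + p / n)

p = 0.7  # hypothetical: 70% of a frame's work can be threaded
for n in (2, 4, 8, 16):
    print(f"{n:2d} cores: {amdahl(p, n):.2f}x")
# Flattens out fast: ~1.54x, ~2.11x, ~2.58x, ~2.91x -- capped at 1/(1-p) = 3.33x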

 

Now, I have to know if you've even read a single post of mine in this thread, because I've certainly never said Intel keeps AMD alive to avoid a monopoly. In fact, I've never even used the word "monopoly" in this thread. I am going to have to ask you to stop twisting the context of my words to fit whatever narrative you are trying to paint. It's not going to end the way you think it will. 


I wonder if MS heard the story about W7 performing better with Ryzen and wanted to put a stop to that.


2 hours ago, cj09beira said:

From the source you linked:

 

AMD has reportedly released new AGESA microcode for Ryzen, which is designed to improve RAM compatibility on their AM4 platform. This new code should be released in new UEFI/BIOS files for AM4 motherboards soon, though this will depend on how long it takes motherboard makers to add it to their next UEFI/BIOS releases.

 

It's only memory compatibility.


37 minutes ago, RxL said:

From the source you linked:

 

AMD has reportedly released new AGESA microcode for Ryzen, which is designed to improve RAM compatibility on their AM4 platform. This new code should be released in new UEFI/BIOS files for AM4 motherboards soon, though this will depend on how long it takes motherboard makers to add it to their next UEFI/BIOS releases.

 

It's only memory compatibility.

I didn't say what it was, did I? :-P


15 minutes ago, cj09beira said:

I didn't say what it was, did I? :-P

I only put that in case people got the wrong idea of what the microcode was going to do, or what it was for.


20 hours ago, Dan Castellaneta said:

Unreal Tournament 3 seems like a really pointless game to benchmark. Even something like the Source engine would be better for benchmarking. Oh well, we'll wait and see how else this improves shit.

On the contrary, less demanding games are ideal for spotting differences: when the GPU isn't the bottleneck, CPU-side changes show up more clearly. Of course it isn't representative of what you'll get in modern games, but still.


46 minutes ago, tom_w141 said:

For those commenting on how AMD isn't on an upward trend.

To be fair, you are comparing a 3-month chart and a 5-year chart. If we look at Intel over 5 years, they have an upward trend as well.


2 minutes ago, Dylanc1500 said:

To be fair, you are comparing a 3-month chart and a 5-year chart. If we look at Intel over 5 years, they have an upward trend as well.

Sorry, I should state that it was to highlight Intel just before and just after Zen, not a like-for-like comparison over 5 years. I was just trying to show how low AMD was and its sudden strong upward trajectory; I only set the graph to 3 months to "zoom" in on the more recent changes in Intel pre- and post-Zen.


3 minutes ago, Dylanc1500 said:

-snip-

 

1 minute ago, tom_w141 said:

-snip-

If you set it to a year, Intel has a down trend and AMD has an up trend, but look at it over a longer period and they're both about the same, while Nvidia is light-years above them xD


2 minutes ago, tom_w141 said:

Sorry, I should state that it was to highlight Intel just before and just after Zen, not a like-for-like comparison over 5 years. I was just trying to show how low AMD was and its sudden strong upward trajectory; I only set the graph to 3 months to "zoom" in on the more recent changes in Intel pre- and post-Zen.

Here is something interesting, though. Also look at their market cap difference. Stock price isn't everything.

