AMD responds to 1080p gaming tests on Ryzen. Supports ECC RAM. Win 10 SMT bug

3DOSH
1 hour ago, Dabombinable said:

So if I bought an 1800X and it ran Skyrim or BF4 worse at 1080p than my 4790K, the reason would be Intel bias, not lackluster single-threaded performance, according to AMD?

A lot of column B and a small bit of column A. Sure, most things are "Intel optimized", but not to the degree of the performance deficit being shown; or, more correctly, I don't believe it is.


3 hours ago, zMeul said:

there's but one arch, x86

Intel has the original implementation while AMD wings it, and by winging it they cut off a few of their toes

 

code doesn't get built for Bulldozer or Zen, code gets built for x86

 

who will move to VS2017? people who want the latest shit and have the money to pay for it; VS licences aren't cheap

That isn't quite how it works; compilers can and do optimize the resultant code where they know they can. You even showed an example of that with the MS VS and Intel compiler forum post you linked.

 

You can also put in code-path hints and sections that only apply to CPUs with certain instruction sets; see benchmarks of CPUs with and without the AES instruction set and the massive performance boost for the ones that have it. This doesn't happen just by having the instruction set: the application actually has to be programmed to use it when it's there, and you can even make the compiler use it only for Intel's AES implementation and not AMD's (why you would is another conversation).
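To make that concrete, here is a minimal sketch of such a runtime check, assuming GCC or Clang on x86; the encrypt_* names are made up for the example:

#include <stdio.h>

/* Two hypothetical code paths: one using hardware AES-NI, one portable. */
static void encrypt_aesni(void)   { puts("hardware AES-NI path"); }
static void encrypt_generic(void) { puts("portable software path"); }

int main(void)
{
    __builtin_cpu_init();               /* populate the CPU feature flags */
    if (__builtin_cpu_supports("aes"))  /* true on Intel and AMD CPUs that
                                           report AES-NI, unless a vendor
                                           check is added on purpose */
        encrypt_aesni();
    else
        encrypt_generic();
    return 0;
}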

 

Then you can get into the very fine detail of how compilers structure the resultant code for different operations; see your example. If the resultant code has poorly optimized memory operations for the CPU arch, for example, performance will be less than what is possible.

 

Your general point still stands though. x86 is a common standard, but both Intel and AMD have their own implementations, and purely counting on compilers to optimize perfectly for you in every way every time is going to give you a bad day :P.
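On the compiler side of the same point, a sketch of function multi-versioning, assuming GCC 6+ on Linux/glibc: the compiler emits several architecture-specific versions of one function and the loader picks the best one for the running CPU at startup.

#include <stdio.h>

/* GCC emits an AVX2 clone, an SSE2 clone and a baseline clone of this
   function; the dynamic loader selects one when the program starts. */
__attribute__((target_clones("avx2", "sse2", "default")))
int sum(const int *a, int n)
{
    int s = 0;
    for (int i = 0; i < n; i++)   /* vectorized differently in each clone */
        s += a[i];
    return s;
}

int main(void)
{
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%d\n", sum(a, 8));    /* prints 36 whichever clone was picked */
    return 0;
}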


Well, IPC aside, maybe some current memory issues are the cause; there are some odd differences between tests and reviews. We'll see in time.

Also, it may really be about optimization, since Zen is a completely new architecture, something the OS and games have never seen before, and completely different from the CMT-based FX.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


I personally think this is great news. I bought an R7 1700 (because I can just overclock it to a 1700X) and the performance is on par with the i7 or just 10 fps lower. I'm looking at the big picture with DX12 becoming more widely adopted. Also, developers are going to have to put out patches to support the new architecture, which is what I expected. And there's the fact that Ryzen has been seen not stuttering in some games, like GTA V, where the 7700K would. Overall, going from a 4670K to this is a huge performance leap for me.

*Insert Name* R̶y̶z̶e̶n̶ Intel Build!  https://linustechtips.com/main/topic/748542-insert-name-r̶y̶z̶e̶n̶-intel-build/

Case: NZXT S340 Elite Matte White Motherboard: Gigabyte AORUS Z270X Gaming 5 CPU: Intel Core i7 7700K GPU: ASUS STRIX OC GTX 1080 RAM: Corsair Ballistix Sport LT 2400mhz Cooler: Enermax ETS-T40F-BK PSU: Corsair CX750M SSD: PNY CS1311 120GB HDD: Seagate Momentum 2.5" 7200RPM 500GB

 


10 minutes ago, done12many2 said:

This is some shady shit.

 

 

This is why I like Jay. He is blunt and doesn't accept anything but honesty from companies, and when they aren't honest he will call them out. He isn't afraid to tell everyone exactly what is going on, even behind the scenes.


43 minutes ago, done12many2 said:

This is some shady shit.

 

 

Speaking of that video, a bit later someone asked Jay to test overclocking with four of the cores disabled, and Jay kind of went off on him because no one would buy an 8c/16t CPU and disable half the cores. But I KIND OF disagree with that sentiment. If I needed a new CPU right now (I mostly do gaming), a 7700K would be the best choice, but if Ryzen were only 10% behind in single-threaded performance, then I would take a 1700 over a 7700K any day, because I'd be willing to give up 10% today for much better performance in the future. Now, as it stands, you're talking about more like a 20% single-threaded performance delta due to clocks and IPC, but if you could bump those clocks up by about 10% by disabling cores, then you could have something close to a 7700K (minus 10%, which I'd be okay with) today and 8c/16t in the future (should games, or your needs, change).
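(A quick sanity check on those numbers: being 20% behind means roughly 0.80 of the 7700K's single-threaded speed, and a 10% clock bump gives about 0.80 × 1.10 = 0.88, i.e. around 12% behind, which is in the ballpark of the "-10%" figure above.)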

 

 

I also really appreciate another part a bit later in the video: "If a $1000 CPU beats a $500 CPU, then everyone will just respond: 'No shit....', but if a $500 CPU beats a $1000 CPU, then....."

PSU Tier List | CoC

Gaming Build | FreeNAS Server


i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core


FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


The Intel story made the rounds because it's not like Intel doesn't have a history of outright illegal actions meant to mess up AMD. So even a hint in that direction is going to get people blowing it up, and it's not really correct to think that just because Reddit ran with it there isn't potentially something there.

 

But the really tight testing time-frame is actually what made the launch less than perfect. The reviewers needed time to get apples-to-apples comparisons of the motherboards, as I think the Asus board (and its firmware) is the big driver behind a lot of the massive variance, beyond some of the clear bugs that need to be worked out with BIOS updates.

 

Further, after the rush of all of the information and a chance to digest it, I'm really coming to the opinion that the 1080p Ultra tests are functionally synthetic in nature. There is something of an apples-to-apples comparison effect, but you're also comparing it to $1600 USD chips in game titles at 1080p. Someone dropping $1600 USD on a CPU plus $1200 USD on graphics ain't playing at 1080p. So while a standard, across-the-board comparison has its uses, we're in a situation where we're testing something that's unlikely to actually be used. Yet at the same time, it's generally the fastest way to produce numbers for comparisons.

 

Which just doubles back to the time-frame issue. There are some very specific use cases for each of the chip lines (which is why Intel has multiple!), so the question is where Ryzen 7 lands for those different uses. With a bit more time to lay out various cases and test against them (streaming is one), we'd get a much better appreciation of where the current market sits among all of the chips. The Ryzen chips seem like render/compile beasts, so there's huge potential in a lot of smaller office setups.


9 hours ago, goodtofufriday said:

Inserting Canned response.

 


 

As a CPU positioned to sit in between a 6900K and a 7700K, no, it's not a disappointment or a failure. If its purpose was to beat a 7700K, then sure. But it is, and always was, meant to be an 8c/16t CPU that was a competitive in-betweener. And that is exactly what it is.

 

You can't seriously have expected an 8c/16t CPU to outperform a 4c/8t CPU in single-core performance. Not even Intel themselves do that.

 

Even with the 1700, it's an 8c/16t CPU. IPC was never expected to be the same. If you ONLY care about pure gaming then yeah, the 7700K was always the clear choice from the beginning.

 

The R7 series of chips has never been positioned as a PURE gaming chip.

 

You can't complain about a product not doing something it was never intended to do... And it does do marginally well against the 7700K, as the benches I posted showed. That's like complaining that your Ford Mustang can't outrace a Lambo. Both are fast, but one is meant to be Really Fast.

 

Optimization for hyperthreading exists. You should expect that games need optimization for AMD's SMT implementation as well.

 

The RAM speed issue exists and was previously known about. You should expect it to affect gaming FPS, more so in some games than others of course.

 

Ryzen supports ECC, but it is not validated on consumer boards. That doesn't mean it doesn't work.

 

Ryzen is EXACTLY what AMD said it would be. And it should be EXACTLY what you expected.

You are arguing about the wrong facts. But that is okay. I don't expect you or anyone else in this thread to be able to process the failure of Ryzen until around April or May, when the hype has worn off.


Just now, Prysin said:

You are arguing about the wrong facts. But that is okay. I don't expect you or anyone else in this thread to be able to process the failure of Ryzen until around April or May, when the hype has worn off.

I don't deal in alternative facts.

CPU: Amd 7800X3D | GPU: AMD 7900XTX


1 hour ago, djdwosk97 said:

Speaking of that video, a bit later someone asked Jay to test overclocking with four of the cores disabled, and Jay kind of went off on him because no one would buy an 8c/16t CPU and disable half the cores. But I KIND OF disagree with that sentiment. If I needed a new CPU right now (I mostly do gaming), a 7700K would be the best choice, but if Ryzen were only 10% behind in single-threaded performance, then I would take a 1700 over a 7700K any day, because I'd be willing to give up 10% today for much better performance in the future. Now, as it stands, you're talking about more like a 20% single-threaded performance delta due to clocks and IPC, but if you could bump those clocks up by about 10% by disabling cores, then you could have something close to a 7700K (minus 10%, which I'd be okay with) today and 8c/16t in the future (should games, or your needs, change).

 

 

I also really appreciate another part a bit later in the video: "If a $1000 CPU beats a $500 CPU, then everyone will just respond: 'No shit....', but if a $500 CPU beats a $1000 CPU, then....."

That's an interesting way to look at it. I wonder how far you can OC those chips if you disable 4 cores in the BIOS. Or maybe just 2 cores, so you get a 6c/12t CPU.

Hopefully Linus will see this post and do some testing on that xD 

Intel i7 12700K | Gigabyte Z690 Gaming X DDR4 | Pure Loop 240mm | G.Skill 3200MHz 32GB CL14 | CM V850 G2 | RTX 3070 Phoenix | Lian Li O11 Air mini

Samsung EVO 960 M.2 250GB | Samsung EVO 860 PRO 512GB | 4x Be Quiet! Silent Wings 140mm fans

WD My Cloud 4TB


2 minutes ago, Simon771 said:

That's an interesting way to look at it. I wonder how far you can OC those chips if you disable 4 cores in the BIOS. Or maybe just 2 cores, so you get a 6c/12t CPU.

Hopefully Linus will see this post and do some testing on that xD 

 

How much further you can go depends on whether you are thermally limited at 8 cores or not. If you are facing a thermal limitation, dropping 4 cores can result in maybe 200 MHz extra. If you aren't thermally limited, which from what I'm seeing Ryzen is not, then you won't gain much at all by dropping cores.

 

I do this trick on my 5960x a lot as you know.  xD


14 minutes ago, done12many2 said:

 

How much further you can go depends on whether you are thermally limited at 8 cores or not. If you are facing a thermal limitation, dropping 4 cores can result in maybe 200 MHz extra. If you aren't thermally limited, which from what I'm seeing Ryzen is not, then you won't gain much at all by dropping cores.

 

I do this trick on my 5960x a lot as you know.  xD

It's just something worth testing out if you ask me. And I do realise it wouldn't jump from 4.1 GHz to 5 GHz just by disabling a few cores xD But it might give a small improvement in those games that still rely on single-core performance.

Someone could buy that 8-core now, disable 4 cores to get better performance in older games, and after a year or two enable them again once games utilise all those cores better. But that's only if there is actually any benefit from disabling 4 cores.

And I do remember your testing with a few cores disabled for better gaming performance, if I'm not mistaken.

 

Too bad DX12 and Vulkan aren't something developers can just download, patch in for a few days and baaam! Insane optimisation and utilisation of all cores. That would be great for gamers like me xD

Hopefully Black Desert Online will optimise their game to depend more on raw multi-core CPU power than on single-core performance, so I can go ahead and buy Ryzen.

Intel i7 12700K | Gigabyte Z690 Gaming X DDR4 | Pure Loop 240mm | G.Skill 3200MHz 32GB CL14 | CM V850 G2 | RTX 3070 Phoenix | Lian Li O11 Air mini

Samsung EVO 960 M.2 250GB | Samsung EVO 860 PRO 512GB | 4x Be Quiet! Silent Wings 140mm fans

WD My Cloud 4TB


They talked about developers optimizing for Ryzen during Capsaicin, which I thought was kind of suspicious. Kinda makes sense now that it's an optimization/performance issue.

hello!

is it me you're looking for?

ᴾC SᴾeCS ᴰoWᴺ ᴮEᴸoW


Desktop: X99-PC

CPU: i7 5820k

Mobo: X99 Deluxe

Cooler: Dark Rock Pro 3

RAM: 32GB DDR4
GPU: GTX 1080

Storage: 1TB 850 Evo, 1TB HDD, bunch of external hard drives
PSU: EVGA G2 750w

Peripherals: Logitech G502, Ducky One 711

Audio: Xonar U7, O2 amplifier (RIP), HD6XX

Monitors: 4k 24" Dell monitor, 1080p 24" Asus monitor

 

Laptop:

-Overkill Dell XPS

Fully maxed out early 2017 Dell XPS 15, GTX 1050 4GB, 7700HQ, 1TB nvme SSD, 32GB RAM, 4k display. 97Whr battery :x 
Dell was having a $600 off sale for the fully specced out model, so I decided to get it :P

 

-Crapbook

Fully specced out early 2013 MacBook "Pro" with GT 650M and a constant 105°C temperature on the CPU (GPU is 80-90°C) when doing anything intensive...

A 2013 laptop with a regular-sized battery still has better battery life than a 2017 laptop with a massive battery! I think this is a testament to Apple's ability at making laptops, or maybe to how little CPU technology has improved even 4+ years later (at least, until the recent introduction of 15W 4-core CPUs). Anyway, I'm never going to get a 35W-CPU laptop again unless battery technology becomes ~5x better than it is in 2018.

Apple knows how to make proper consumer-grade laptops (they don't know how to make pro laptops though). I guess this is mostly software power-efficiency related, but getting a Mac makes perfect sense if you want a portable/powerful laptop that can do anything you want it to with great battery life.

 

 


4 hours ago, Dylanc1500 said:

This is why I like Jay. He is blunt and doesn't accept anything but honesty from companies, and when they aren't honest he will call them out. He isn't afraid to tell everyone exactly what is going on, even behind the scenes.

He may be blunt, but he has also frequently been a combination of arrogant and ignorant. For example: the RX 480 power consumption debacle. He was adamant that a hardware fix was necessary despite multiple factors saying it was not. It was later fixed through a simple firmware update, and if I recall correctly he never acknowledged it. I hardly ever watch his videos, but I've heard similar stories before.

 

For that reason alone I wouldn't want to watch such a person. It's one thing not to be an engineer and merely be an enthusiast; it's another not to acknowledge that fact and to stubbornly act as if one were an engineer.


I think it mostly comes down to optimizing for more cores. Intel hasn't touched core count for years and has focused on increasing single-core performance. If games start using 12 or 16 threads properly, then Ryzen should have the advantage.

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


12 hours ago, leadeater said:

That isn't quite how it works; compilers can and do optimize the resultant code where they know they can. You even showed an example of that with the MS VS and Intel compiler forum post you linked.

 

You can also put in code-path hints and sections that only apply to CPUs with certain instruction sets; see benchmarks of CPUs with and without the AES instruction set and the massive performance boost for the ones that have it. This doesn't happen just by having the instruction set: the application actually has to be programmed to use it when it's there, and you can even make the compiler use it only for Intel's AES implementation and not AMD's (why you would is another conversation).

that's not per-implementation optimization (Intel / AMD) but optimization for an instruction set

if you go into the compiler and specifically ask for SSE4a, that shit won't work on any Intel CPU, as that particular set is exclusive to AMD CPUs

 

if AMD takes AVX2 and doesn't implement it correctly, those CPUs will have issues executing instructions in that set or even completely crash

 

I forget when this was exactly, but Intel released a CPU with AVX-256 (methinks it was) and a couple of weeks/months after release it was discovered to be spewing erroneous results

Intel took a look at it and issued a micro-code update to completely disable the set

 

so the ball is in the CPU manufacturer's yard for those sets to be implemented correctly
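As an illustration of the SSE4a point, a minimal sketch assuming GCC on x86-64, built with "gcc -msse4a": the EXTRQ instruction it emits exists only on AMD CPUs, so the same binary dies with an illegal-instruction fault on any Intel chip.

#include <ammintrin.h>   /* SSE4a intrinsics; an AMD-only instruction set */
#include <stdio.h>

int main(void)
{
    __m128i v = _mm_set1_epi64x(0x1122334455667788LL);
    /* EXTRQ: extract the 16-bit field starting at bit 8. Intel CPUs do not
       implement this instruction, so executing it there raises SIGILL. */
    __m128i r = _mm_extracti_si64(v, 16, 8);
    printf("%llx\n", (unsigned long long)_mm_cvtsi128_si64(r));  /* 6677 */
    return 0;
}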


5 minutes ago, zMeul said:

that's not per-implementation optimization (Intel / AMD) but optimization for an instruction set

if you go into the compiler and specifically ask for SSE4a, that shit won't work on any Intel CPU, as that particular set is exclusive to AMD CPUs

 

if AMD takes AVX2 and doesn't implement it correctly, those CPUs will have issues executing instructions in that set or even completely crash

 

I forget when this was exactly, but Intel released a CPU with AVX-256 (methinks it was) and a couple of weeks/months after release it was discovered to be spewing erroneous results

Intel took a look at it and issued a micro-code update to completely disable the set

 

so the ball is in the CPU manufacturer's yard for those sets to be implemented correctly

Is this what you're talking about?
https://en.wikipedia.org/wiki/Pentium_FDIV_bug

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


13 minutes ago, zMeul said:

that's not per-implementation optimization (Intel / AMD) but optimization for an instruction set

if you go into the compiler and specifically ask for SSE4a, that shit won't work on any Intel CPU, as that particular set is exclusive to AMD CPUs

 

if AMD takes AVX2 and doesn't implement it correctly, those CPUs will have issues executing instructions in that set or even completely crash

It's a bit of both really; the examples used cover both devs and CPU manufacturers.

 

The core purpose of a compiler is to compile your code down to CPU machine code; if the compiler does a bad job of this, or handles memory operations in a way that is suboptimal for the architecture, then there will be a performance impact. How much is far too hard to really say.

 

CPU microcode updates on desktop platforms aren't really a thing, as the risk is way too high.

Edit:

I should say user CPU microcode updates are very rare.
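For what it's worth, you can at least check which microcode revision a system is actually running. A rough sketch, assuming Linux on x86, where the kernel reports the loaded revision in /proc/cpuinfo:

#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Each logical CPU gets a "microcode : 0x..." line; the first one is
       enough here, since they normally all match. */
    FILE *f = fopen("/proc/cpuinfo", "r");
    char line[256];
    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "microcode", 9) == 0) {
            fputs(line, stdout);
            break;
        }
    }
    fclose(f);
    return 0;
}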


18 minutes ago, Dabombinable said:

Is this what you're talking about?
https://en.wikipedia.org/wiki/Pentium_FDIV_bug

no, that's way old

 

+ @leadeater

the thing I was talking about... I think I might be remembering it wrong

but the latest shit that happened in Intel's camp was the AVX 256-bit "bug" in Skylake: discovered around December 2015, fixed in March with a micro-code update


I love how everyone got so hyped from the multitude of leaks, and because the R7 series was beating the 6900K in most benchmarks (Cinebench mainly), assumed it was going to be a BEAST in gaming. Does no one realize that just because your CPU is great at rendering doesn't mean that carries over into gaming? Perfect examples are the Xeons. I still bought an R7 1700, because I can overclock it to 1700X and 1800X speeds and XFR is bullshit. I bought it because it's impressive and better than my dusty old 4670K. Since the gaming benchmarks from Linus were ESSENTIALLY the same, that made my decision to go ahead and pay the same price for 4 more cores and 8 more threads.

 

Everyone, just calm down and wait for this to be ironed out. They have already claimed that they have hundreds of video game developers working on getting Ryzen optimized, so we could see plenty of patches for our beloved games. Also, don't forget about the wider adoption of DX12.

 

The future is/might still be bright.

*Insert Name* R̶y̶z̶e̶n̶ Intel Build!  https://linustechtips.com/main/topic/748542-insert-name-r̶y̶z̶e̶n̶-intel-build/

Case: NZXT S340 Elite Matte White Motherboard: Gigabyte AORUS Z270X Gaming 5 CPU: Intel Core i7 7700K GPU: ASUS STRIX OC GTX 1080 RAM: Corsair Ballistix Sport LT 2400mhz Cooler: Enermax ETS-T40F-BK PSU: Corsair CX750M SSD: PNY CS1311 120GB HDD: Seagate Momentum 2.5" 7200RPM 500GB

 


5 minutes ago, zMeul said:

no, that's way old

 

+ @leadeater

I think I might be remembering it wrong, but the latest shit that happened in Intel's camp was the AVX 256-bit "bug" in Skylake

discovered around December 2015, fixed in March with a micro-code update

Yea, Microsoft released an update for it; it's quite rare for one to come from them and be installed via Windows Update. Normally microcode updates are distributed to motherboard manufacturers and get included in BIOS updates; once you install a CPU that needs the update, it'll get it.

 

https://support.microsoft.com/en-nz/help/3064209/june-2015-intel-cpu-microcode-update-for-windows


1 minute ago, leadeater said:

Yea, Microsoft released an update for it; it's quite rare for one to come from them and be installed via Windows Update. Normally microcode updates are distributed to motherboard manufacturers and get included in BIOS updates; once you install a CPU that needs the update, it'll get it.

 

https://support.microsoft.com/en-nz/help/3064209/june-2015-intel-cpu-microcode-update-for-windows

no, that's not it

it was released as a BIOS update

