Majestic

AMD once again violating power specifications? (AMD RX-480)


5 minutes ago, Starelementpoke said:

Can we spread this around? I'd love to see what happens with more cards tested.

Is it like how Skylake-based processors are overvolted, so you can lower the voltage AND still overclock them anyway?

2 minutes ago, laminutederire said:

Is it like how Skylake-based processors are overvolted, so you can lower the voltage AND still overclock them anyway?

Possibly, haven't read too much into it.

3 hours ago, Sintezza said:

 

What was misleading about it?

The streaming benchmarks with XSplit? Nothing wrong with those. XSplit did better on the AMD FX at the time.

And the other gaming benchmark videos were legitimate as well. People not listening to what he says, how he runs the benchmarks, and which particular hardware he used doesn't make his videos wrong. The gaming benchmarks were with a 7870 and a 7970 card, if I remember correctly.

So there was nothing misleading about that, because the FX-8350 was totally capable of maxing out a 7970. So it wasn't that strange that it got similar scores to the i5-3570K and so forth.

But yeah, if people are just too lazy to understand that, then yeah...

Also, the people who keep bringing those videos up to this day don't understand how hardware evolves at all. But that's not TS's fault.

Those videos were extremely poorly done and should not be taken seriously.

I wrote this back when I first saw them, and yes it really did take this much text to explain how many things they got wrong:

 

Spoiler
You guys have probably seen this man already, right? It is Logan from Tek Syndicate (Razethew0rld on YouTube).

If you haven't then don't worry. All you need to know is that he makes YouTube videos about technology stuff, and owns a website.

Now, he has gotten quite a lot of popularity recently, especially his video where he talks about how Apple hasn't really invented anything (very good video, go watch it here).
Anyway, I was going to write a post about him sooner or later, but it seems like I have to do it now, thanks to his latest video "AMD FX 8350 vs Intel 3570K vs 3770K vs 3820 - Gaming and XSplit Streaming Benchmarks". This video is spreading so much misinformation that I am really doubting Logan's credibility after it.
So the video compares the AMD FX-8350 against the Intel i5-3570K, and the reason it has gotten so much attention is that it shows the 8350 beating the 3570K. Why is this so newsworthy? Well, first of all, AMD fanboys now have something to attack Intel fanboys with. They haven't had that in a long time because, well, AMD's CPUs haven't been very good for the last few years. Secondly, this contradicts basically all other reviews of the chip. That by itself is pretty much enough to ignore it. If 5 trusted sites say one thing, and then this very new site comes along and says all the other sites are wrong, you wouldn't trust it, right? Well, it seems like a lot of forums do trust him. So, here are some things I think Logan did wrong.

 
First of all, he is using a 7870 as the GPU. This is a bad idea since a lot of the games he will be testing will be GPU-bound, especially since he is running some of them at 2560x1440. Oh, and he also runs with 8x AA, even though it should be completely turned off for a CPU test.
In a CPU test, you want to use a really beefy GPU, like a 680 or a 7970, and then run the game at very low settings. That way you eliminate as much of the GPU bottleneck as possible. I was really surprised that he got such a huge difference, since no other site shows that. The sites I have checked which compared them in games showed a difference of only ~5%, or the Intel chips beating the AMD ones. Please note that for gaming, the i5 and i7 will perform very similarly.
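A quick way to sanity-check whether a run is GPU-bound is to repeat it at a much lower resolution: if the frame rate barely moves, the GPU was never the limit. A minimal sketch of that check (the FPS numbers are hypothetical, not taken from the video):

```python
def likely_cpu_bound(fps_low_res: float, fps_high_res: float,
                     tolerance: float = 0.05) -> bool:
    """If dropping the resolution barely raises the FPS, the GPU was not
    the limiting factor, so the CPU (or game engine) is the bottleneck."""
    return (fps_low_res - fps_high_res) / fps_high_res <= tolerance

# Hypothetical runs: 62 FPS at 1440p vs 64 FPS at 720p -> CPU-bound
print(likely_cpu_bound(64.0, 62.0))   # True: resolution barely matters
# 120 FPS at 720p vs 62 FPS at 1440p -> the GPU was holding it back
print(likely_cpu_bound(120.0, 62.0))  # False
```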
Overclock3D "If you are looking to upgrade a full system then it's impossible to recommend. It's too slow, it draws too much power, it's too hot. It's just not worth it."
 
 
Problem number 2. He doesn't mention how he did the test. This is extremely important because it is the way users can verify that what he is doing is correct and that the results can be replicated. When doing a test you should not only list which parts you are using, but also how the benchmark was run (either using a built-in tool, or showing on which map and in which area the test was done). If you look at the website, he lists the parts he used. Here is the list:
 
AMD FX 8350 Rig
  • MSI 990FXA-GD80 Motherboard
  • 16 GB Kingston 2133MHz DDR3
  • Corsair H80 Liquid Cooling Unit
  • Kingston HyperX3K 120 GB SSD
  • HIS ICEQ Radeon 7870
Intel Z77 Rigs (3570k and 3770k)
  • EVGA Z77 Stinger mini-ITX Motherboard
  • 16 GB ADATA 2133 MHz DDR3
  • Corsair H100
  • ADATA 256 GB SX900 SSD
  • HIS ICEQ Radeon 7870
Intel 3820 Rig
  • ASRock X79 Extreme4m Motherboard
  • 16 GB Gelid 2133MHz DDR3
  • Corsair H80 Liquid Cooling Unit
  • Kingston HyperX3K 256 GB SSD
  • HIS ICEQ Radeon 7870
The first thing I thought was "wait a minute, what the hell is he doing?". When you do a serious comparison, you only change the parts you absolutely must change. In this case that would have been the motherboard and the CPU. However, he changes everything from the CPU cooler to the memory and the SSD. That's not how you do proper tests, Logan. If you look at serious reviewers like Anandtech, you'd see that they use the same parts across their reviews. That's to minimize the risk of other components changing the results.
 
 
Problem number 3. He says that the Intel CPUs have a big pool of L3 cache, which is correct. He also says that the AMD CPU has a big pool of L3 cache and shared L2 cache for each core. However, the way he says it makes it sound like the Intel CPUs do not have any L2 cache at all, which is wrong. Each Intel core has its own pool of L2 cache, which it does not have to share with other cores (unlike the AMD cores, which do).
 
 

Problem number 4. The way he presents numbers in the video is horrible. He should have shown a graph, and not just have each result show up in small text in the bottom corner of the screen, only to disappear after a few seconds.
Not only is his way of showing the numbers awful, he actually reports the wrong numbers multiple times. At 04:45 he says "3570K, 24.92 frames per second", but the video shows 37.12 FPS. That's a pretty huge difference, and the higher number was the correct one (according to his website).

He also says "with the 3770K, 197.44 frames per second", but the video shows 111.97 FPS, and the website says 111.920. So we have 3 different numbers for one of the tests. What? That's not a mistake you can make if you want to be taken seriously.
The results for the 3820 don't even appear on the screen in the video.

 

 
 
Problem number 5. The i5 beats the i7 in some games. To be more precise, they got exactly the same results in Crysis 2 at both 1080p and 1440p. Even with two identical i5s, you would still see a difference of around ~5% between runs (the margin of error). Getting exactly the same results twice is far too unlikely to be true. Then the i7 gets far better results than the i5 in Warhead, even though they should be about the same (Warhead won't take advantage of the extra threads). Also, he didn't do any tests with XSplit and the i7-3770K.
The result for Crysis Warhead on the i5 is also strange. 26 FPS at 1080p normally, and 24.9 FPS at 1080p with XSplit? Are you telling me that using XSplit only reduced the FPS by 1? To me, that either means that there was a GPU bottleneck, or that something was wrong when he did the run without XSplit (or he is just pulling numbers out of his ass).
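Whether a 1 FPS drop means anything depends on the run-to-run variance. A tiny helper for treating results inside the ~5% margin of error as a tie (the margin value is the post's own estimate, not a measured figure):

```python
def within_margin(a: float, b: float, margin: float = 0.05) -> bool:
    """Treat two benchmark results as a tie when they differ by less
    than the run-to-run variance (~5% per the text above)."""
    return abs(a - b) / max(a, b) <= margin

# The suspicious Crysis Warhead pair: 26 FPS plain vs 24.9 FPS streaming
print(within_margin(26.0, 24.9))    # True: the streaming "cost" is noise
# The misreported 3570K numbers are well outside the margin
print(within_margin(37.12, 24.92))  # False
```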
 
 

Problem number 6. I have not played Trine 2, but those numbers seem way too low for a 2D platformer. 32 FPS with an i7-3820? That doesn't sound right at all. Also, the 3770K beats the 3820 by ~48% in Trine 2 according to Logan. That doesn't seem right either.

Problem number 7. He says that the AMD chip is really good at overclocking, but he does not mention that you can easily get the i5 and i7 up to ~4.5 GHz with a ~30 dollar cooler (like the Cooler Master Hyper 212). The AMD FX chip also produces A LOT more heat, and uses A LOT more power, so chances are it won't overclock nearly as well as the Intel processors.
 


I am going to write more about Tek Syndicate and Logan in the future, but that's enough for today.


Just to clarify, I am not trying to bash AMD or anything like that. What I am trying to do is warn people about Tek Syndicate and their videos, because they are filled with errors. Logan has proven multiple times in the past that he just makes things up that sound good (his talk about memory and his monitor reviews are good examples of this).
Be very, VERY cautious when you listen to what Logan says, it might just be bullshit pulled out of his ass. As this article shows, he is not serious or professional at all when it comes to testing and reviewing. Read proper reviews from people like Anandtech, TechPowerUp, Overclock3D and many more to verify before listening to Logan's advice.

 



Tek Syndicate made a new video (link here) where they addressed a lot of the issues with the first one, but it's still far from perfect if you ask me. So, here are my issues with their new video.


Before I start with the issues, I'd like to give credit where credit is due. Here are the things they fixed in their new video:

 

 

  • More powerful GPU. I would have liked to see a lower resolution as well, to eliminate the GPU bottleneck as much as possible, but whatever. It's nice to see that they at least did something to try to fix the issue.
  • The systems were very similar. Very glad to see that they fixed that.
  • They now show a graph showing the FPS. Much easier to see and read. I didn't see/hear him report any wrong numbers either, unlike the last video when he did that multiple times.


Anyway, here are my issues with this video. Again, it's far better but not perfect by any stretch of the imagination.

At around 2:50 they say that a lot of websites agree with their results. Let's look at these results that agree with them, shall we?
First up, Hardware.info. Here is a summary of their benchmarks:

 

 

[Image: Hardware.info benchmark summary]

Maybe this is just me, but these results don't agree with theirs at all. Just look at the difference in percent. Logan also cherry-picked and only showed the one instance where the AMD CPU beats the Intel one.

So according to Logan, the i5-3570K should beat the FX-8350 by ~32% in Crysis 2. Hardware.info, on the other hand, shows either a 1.74% difference or a 3% difference (depending on resolution and settings). That's not what I call "agreeing with their results".
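The "lead" percentages being compared here are just relative FPS differences. A sketch with hypothetical FPS pairs, chosen only to illustrate the scale of the disagreement between the two sources:

```python
def lead_pct(winner_fps: float, loser_fps: float) -> float:
    """Percentage by which the winner beats the loser."""
    return (winner_fps / loser_fps - 1) * 100

# Hypothetical pairs: a ~32% lead (the scale the video claims)
# versus a ~1.75% lead (the scale Hardware.info actually reports)
print(round(lead_pct(66.0, 50.0), 1))   # 32.0
print(round(lead_pct(58.2, 57.2), 2))
```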

But what about the results from Hardware Canucks?
Again, their numbers are completely different from what Logan got.

 

 

[Image: Hardware Canucks benchmark results]

The Intel CPU beats the AMD one according to Hardware Canucks as well.
I am not sure what you think, but it seems to me that even the sources Logan recommends contradict his results.


At 3:05 they try to pull the "we are not fanboys, we use Intel" card. Of course they do. Intel has dominated AMD for a long time now. To me, this just sounds like the old "I am not racist, I have a black friend".
Also, like I said in my previous post, I don't think they are biased. It's just that their testing methods are flawed in some way.


At 4:00 they talk about Intel cheating on Cinebench. I am not going to defend that. It is a horrible thing to do, but let's be honest, does it really matter for us consumers? If some compilers are biased towards Intel, then to me that sounds like a reason to get an Intel processor, since it will perform better. We can't do much about it. It's up to the developers to optimize for whichever platform they want.

He claims that he does "real world testing", but he also says that he deliberately picks games such as Trine 2 because they are indie games. That sounds like cherry-picking to me.

At 4:44 they talk about the patch from Microsoft. From what I have read, there isn't really any issue with caching like Logan said. What happens is that Windows 7 currently detects each Bulldozer module as 2 cores. This means that if you have two threads running, both of them will be assigned to the same module and will have to share some resources, such as the FPU. What the patch does is make it so that two running threads land on separate modules and therefore don't have to share resources. According to AMD, this is not an issue in Windows 8. Even the website shown in the video (Tom's Hardware) tells you this. Yes, it's a bit nitpicky, but seriously, you should not say things like "it might be an issue on Windows 8" when you have text right in front of you saying that it is not.

 
So what kind of performance difference can you get with these patches? I have seen people say that the patches are why Logan gets such strange results. Well, I found a blog post by AMD which says this:


Our testing shows that not every application realizes a performance boost. In fact, heavily threaded apps (those designed to use all 8 cores), get little or no uplift from this hotfix – they are already maxing out the processor.  In other cases, the uplift averages out to a 1-2 percent uplift. But heck, it is free performance, and this is the scheduler model that will be used in Windows 8 (along with some further enhancements), so why not add it to your list of downloads?

So the patch will give you a 1 or 2 percent increase in performance. The patch does have some drawbacks though. It makes the CPU go into turbo (clocking down some cores and overclocking others, in order to increase per-core performance) less often, and your computer will use slightly more power (cores won't go idle as often). Not really an issue on desktops, but I wouldn't want to use it on a laptop.
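The scheduling change described above can be sketched as a placement policy: pre-hotfix Windows 7 fills logical cores in order (packing two threads into one module), while the hotfix spreads threads across modules first so nothing shares an FPU until it has to. A toy model (the round-robin policy is a simplification of the real scheduler):

```python
def place_threads(n_threads: int, n_modules: int = 4, patched: bool = True):
    """Toy model of thread placement on a 4-module Bulldozer chip.
    Unpatched: fill cores in order, so threads 0 and 1 share module 0.
    Patched: round-robin across modules, so light loads get one thread
    (and one private FPU) per module."""
    modules = [[] for _ in range(n_modules)]
    for t in range(n_threads):
        idx = t % n_modules if patched else t // 2
        modules[idx].append(t)
    return modules

print(place_threads(2, patched=False))  # [[0, 1], [], [], []] - shared FPU
print(place_threads(2, patched=True))   # [[0], [1], [], []] - separate FPUs
```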

The new numbers clearly show that the GPU was bottlenecking before (much higher numbers on all platforms and in all games), and it is most likely still bottlenecking.

 
 
I was not really a fan of the whole "but the AMD one will use more power!" argument people were throwing around, just so that we're clear about that. Power is cheap. Anyway, 3 hours of gaming a day seems pretty reasonable, but that's without counting anything else. As soon as you start counting more than 3 hours of gaming, more CPU-intensive tasks (like a bit of transcoding and things like that), and a bunch of general tasks, the power cost will quickly ramp up and the Intel one will be cheaper in the end (after 3 or fewer years).
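The break-even arithmetic is easy to sketch. Assuming (hypothetically) ~60 W of extra draw under load and $0.12/kWh, neither of which is a figure from the thread, the extra cost over three years looks like this:

```python
def extra_cost_usd(extra_watts: float, hours_per_day: float,
                   years: float = 3, price_per_kwh: float = 0.12) -> float:
    """Electricity cost of a CPU drawing `extra_watts` more under load.
    The 60 W delta and $0.12/kWh price used below are assumptions."""
    kwh = extra_watts / 1000 * hours_per_day * 365 * years
    return kwh * price_per_kwh

print(round(extra_cost_usd(60, 3), 2))   # ~$23.65 over 3 years at 3 h/day
print(round(extra_cost_usd(60, 8), 2))   # heavier use: ~$63.07
```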

 
 
I don’t quite get what he is talking about at 14:00. Drivers? Well, yes, the drivers for the chipset, which controls things like the network, audio and so on, have been improved, but it sounds like he is implying that he installed a CPU driver which increased performance, or something like that. CPUs are not like graphics cards. They don’t need drivers, and again, the patch from Microsoft will only increase performance by 1 or 2 percent. From what I have seen and know, chipset drivers will not increase your CPU performance.

 
 
Overall, this video is far better than his last one, which in my opinion was horrible and should have been taken down. This one is better but still seems rather strange. I have more Tek Syndicate material coming, since I always find misinformation in their videos (especially when Logan talks about RAM) and it seems like they are gaining popularity. If you are a big show like them, you can't mess things up and spread misinformation like they do.

 

On 7/1/2016 at 0:09 PM, Trixanity said:

Did you raise the power target? Tests show that you don't gain any performance from OC'ing because the card is power-starved, meaning it doesn't really hit the specified clocks, but raising that target blows up the power consumption. Also, you should try undervolting a bit. It improves temps and could give you a better OC, despite that being somewhat counterintuitive.

1.075V stable was achieved by someone. It actually kind of suggests that the card is running at too high a voltage at stock. Too high as in more than necessary, not too high as in explosions everywhere.

I wouldn't say it's counter-intuitive. In fact it makes a lot of sense. If it is power and/or thermal throttling, turning down the voltage (assuming it's still enough to be stable) could alleviate those issues without impacting performance.

1 minute ago, Ryan_Vickers said:

I wouldn't say it's counter-intuitive. In fact it makes a lot of sense. If it is power and/or thermal throttling, turning down the voltage (assuming it's still enough to be stable) could alleviate those issues without impacting performance.

It depends on the situation, but it's usually the case that you have to raise the voltage to achieve higher clock speeds. However, on this card the voltage is already higher than it needs to be, so it causes increased temperature and power consumption, which makes it throttle. That's why it makes sense. Well, of course it makes sense given the factors we're dealing with here, but one would need to know about them to understand why using the default or a higher voltage would actually not increase performance or help overclocking.

 

So I called it counterintuitive because overclocking usually involves increasing voltage but that would be in situations where your thermals are good and the chip is lacking power, so an increase in voltage is needed to increase clock speed.
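The quadratic voltage term in the classic CMOS dynamic-power approximation (P ≈ C·V²·f) is why a small undervolt frees up so much power headroom. A sketch comparing the 1.075 V figure from the thread against an assumed 1.15 V stock voltage (the stock value is my assumption, not a number from the thread):

```python
def dynamic_power(c: float, v: float, f: float) -> float:
    """CMOS dynamic power approximation: P ~= C * V^2 * f.
    Power scales with the square of voltage, so a modest undervolt
    can pull a power-limited card back under its throttle point."""
    return c * v ** 2 * f

# Same capacitance and clocks; only the voltage changes.
stock = dynamic_power(1.0, 1.15, 1.0)       # assumed stock voltage
undervolt = dynamic_power(1.0, 1.075, 1.0)  # value reported in the thread
print(round((1 - undervolt / stock) * 100, 1))  # ~12.6% less dynamic power
```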

2 hours ago, Trixanity said:

It depends on the situation, but it's usually the case that you have to raise the voltage to achieve higher clock speeds. However, on this card the voltage is already higher than it needs to be, so it causes increased temperature and power consumption, which makes it throttle. That's why it makes sense. Well, of course it makes sense given the factors we're dealing with here, but one would need to know about them to understand why using the default or a higher voltage would actually not increase performance or help overclocking.

 

So I called it counterintuitive because overclocking usually involves increasing voltage but that would be in situations where your thermals are good and the chip is lacking power, so an increase in voltage is needed to increase clock speed.

Yeah, usually you have to up the voltage, but in this case they didn't need higher clocks or anything, they needed more power headroom, which could be had by adding power or using less :)

