[UPDATE] AMD Announces Polaris Architecture - 4th Generation GCN Arriving in Mid 2016

that CPU doesn't run on dark matter from a parallel universe

 

no load on the CPU you say? ok .. prove it! 

and that 2nd system doesn't have the same "magic" going on?! COME ON!

 

I'm just gonna leave this here: 

 

This video makes no sense. I mean, in the video everything is on ULTRA and you see combat on the ground, where there is a lot more detail.

The AMD testing was all done on MEDIUM settings with air battles, which are not as demanding. So the CPU usage will differ a lot.


that CPU doesn't run on dark matter from a parallel universe

 

no load on the CPU you say? ok .. prove it! 

and that 2nd system doesn't have the same "magic" going on?! COME ON!

 

I'm just gonna leave this here: 

 

Ah, one more thing AMD could have done to manipulate power: cut off the 4790K's ability to use its low power states and instead run it at full 4.2-4.4GHz all the time on the Nvidia system.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


This video makes no sense. I mean, in the video everything is on ULTRA and you see combat on the ground, where there is a lot more detail.

The AMD testing was all done on MEDIUM settings with air battles, which are not as demanding. So the CPU usage will differ a lot.

Look again at my post: originally I used a video from SP; in MP the CPU usage even goes to 100%.


Look again at my post: originally I used a video from SP; in MP the CPU usage even goes to 100%.

It isn't the same bit of the game though. The AMD benchmark has no ground units visible and almost no fighting. Those are two things that are generally calculated by the CPU, thus the CPU load in your video would likely (can't be sure as we don't have the CPU load from AMD's video) be higher.

Intel i7 5820K (4.5 GHz) | MSI X99A MPower | 32 GB Kingston HyperX Fury 2666MHz | Asus RoG STRIX GTX 1080ti OC | Samsung 951 m.2 nVME 512GB | Crucial MX200 1000GB | Western Digital Caviar Black 2000GB | Noctua NH-D15 | Fractal Define R5 | Seasonic 860 Platinum | Logitech G910 | Sennheiser 599 | Blue Yeti | Logitech G502

 

Nikon D500 | Nikon 300mm f/4 PF  | Nikon 200-500 f/5.6 | Nikon 50mm f/1.8 | Tamron 70-210 f/4 VCII | Sigma 10-20 f/3.5 | Nikon 17-55 f/2.8 | Tamron 90mm F2.8 SP Di VC USD Macro | Neewer 750II


Ah, one more thing AMD could have done to manipulate power: cut off the 4790K's ability to use its low power states and instead run it at full 4.2-4.4GHz all the time on the Nvidia system.

The sad part about this is that people believed their lie.


Look again at my post: originally I used a video from SP; in MP the CPU usage even goes to 100%.

Yes it does. And once again, everything is on ULTRA with ground combat, which is far more demanding than the air battles the AMD demo showed on MEDIUM settings.


Yes it does. And once again, everything is on ULTRA with ground combat, which is far more demanding than the air battles the AMD demo showed on MEDIUM settings.

 

Which is sad because neither SHOULD be capable of saturating a 4690K at any level of currently available settings.


Yes it does. And once again, everything is on ULTRA with ground combat, which is far more demanding than the air battles the AMD demo showed on MEDIUM settings.

They could've used the lowest quality setting at 640x480; the difference AMD claims would still be outlandish.

Not even considering they're comparing different-generation architectures: one on 14nm FinFET and the other presumably on 28nm.

Why didn't they compare their Polaris with Intel's own iGPU? Why?! Because AMD would've lost  :lol:


They could've used the lowest quality setting at 640x480; the difference AMD claims would still be outlandish.

Not even considering they're comparing different-generation architectures: one on 14nm FinFET and the other presumably on 28nm.

Why didn't they compare their Polaris with Intel's own iGPU? Why?! Because AMD would've lost  :lol:

You mean the Intel iGPU that manages on average 30-50FPS at 720p?

http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/20

http://arstechnica.com/gadgets/2015/08/intel-skylake-core-i7-6700k-reviewed/2/



You mean the Intel iGPU that manages on average 30-50FPS at 720p?

Exactly, if they're gonna do a bad comparison, at least do it all the way  ^_^



Exactly, if they're gonna do a bad comparison, at least do it all the way  ^_^

I think we'd be debating a whole different topic if AMD had done this benchmark with an nVidia engineering sample of their 14/16nm FinFET GPU. What would you have benched it against?



I think we'd be debating a whole different topic if AMD had done this benchmark with an nVidia engineering sample of their 14/16nm FinFET GPU. What would you have benched it against?

 

A 5775C.


Oh please, get the 5775C in there.

The ten people that managed to buy one at $370 before it was discontinued so it wouldn't eat into the Skylake market? :)

The 5775C iGPU still doesn't do 60FPS at 720p though (going by the benchmarks on the Anandtech page I linked). And this is 1080p, with 2.25x as many pixels to drive.



Why didn't they compare their Polaris with Intel's own iGPU? Why?! Because AMD would've lost  :lol:

 

Because Intel is not a competitor at this point. Even if they become one, it won't be for a few more years, and both NV and AMD will have something new out by then. No one says "yeah, I'll buy a gaming PC and use an Intel iGPU for gaming".

 

And how the new Polaris dGPU could lose to the Iris Pro 6200 is beyond me. That is targeted at AMD APUs; this is a discrete card. How you would even compare those is also beyond me.

The ability to google properly is a skill of its own. 


I think we'd be debating a whole different topic if AMD had done this benchmark with an nVidia engineering sample of their 14/16nm FinFET GPU. What would you have benched it against?

Let's get back to AMD's bullcrap testing, shall we?

 

A non-reference GTX 950 has a peak power draw (just the video card) of 110W: http://media.bestofmicro.com/O/G/520576/gallery/Power_w_600.png

System 2, as shown, drew 152W at the wall - that leaves 42W for the rest of the system.

Now, system 1 draws 85W at the wall; minus 42W, that would make the Polaris-based card draw just 43W on its own?!

 


 

Note: the reference GTX 950 is rated for 90W power draw (not TDP!) - http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-950/specifications
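The wall-power arithmetic in this post can be sketched in a few lines (a minimal sketch; the figures are the ones quoted above, with the 110W peak taken from the linked Tom's Hardware chart):

```python
# Wall-power arithmetic from the post above (all figures in watts).
gtx950_peak = 110    # non-reference GTX 950 peak board power (Tom's Hardware chart)
system2_wall = 152   # Nvidia system, measured at the wall
system1_wall = 85    # Polaris system, measured at the wall

rest_of_system = system2_wall - gtx950_peak    # everything except the GPU
polaris_board = system1_wall - rest_of_system  # implied Polaris board power

print(rest_of_system)  # 42
print(polaris_board)   # 43
```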


Should I feel bad for buying a 390 last year on Black Friday?

One day I will be able to play Monster Hunter Frontier in French/Italian/English on my PC, it's just a matter of time... 4 5 6 7 8 9 years later: It's finally coming!!!

Phones: iPhone 4S/SE | LG V10 | Lumia 920

Laptops: Macbook Pro 15" (mid-2012) | Compaq Presario V6000


Is that 86W board power or total system power?

 

Going by the print I'm reading on the slide, it says 'The Polaris card got 60fps and consumed 86W of power', not 'The Polaris system'.

 

According to the AnandTech article, total system power.

'Fanboyism is stupid' - someone on this forum.

Be nice to each other boys and girls. And don't cheap out on a power supply.


CPU: Intel Core i7 4790K - 4.5 GHz | Motherboard: ASUS MAXIMUS VII HERO | RAM: 32GB Corsair Vengeance Pro DDR3 | SSD: Samsung 850 EVO - 500GB | GPU: MSI GTX 980 Ti Gaming 6GB | PSU: EVGA SuperNOVA 650 G2 | Case: NZXT Phantom 530 | Cooling: CRYORIG R1 Ultimate | Monitor: ASUS ROG Swift PG279Q | Peripherals: Corsair Vengeance K70 and Razer DeathAdder

 


Let's get back to AMD's bullcrap testing, shall we?

A non-reference GTX 950 has a peak power draw (just the video card) of 110W: http://media.bestofmicro.com/O/G/520576/gallery/Power_w_600.png

System 2, as shown, drew 152W at the wall - that leaves 42W for the rest of the system.

Now, system 1 draws 85W at the wall; minus 42W, that would make the Polaris-based card draw just 43W on its own?

nVidia managed a ~20% power consumption delta going from Kepler to Maxwell, and that was without the benefit of halving their transistor size (which brings a power saving all on its own). So a larger power consumption delta for going from 28nm to 14nm, on top of architectural improvements, isn't purely in the realm of science fiction.
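As a rough sanity check on that reasoning: independent savings multiply rather than add. The sketch below is illustrative only - the ~20% architectural delta comes from the post, while the 40% node-shrink saving is an assumed figure for a 28nm-to-14nm FinFET transition, not a measured one:

```python
# Hypothetical combined power saving: architecture change plus a node shrink.
# arch_saving (~20%, Kepler -> Maxwell) is from the post; node_saving is an
# assumed illustrative figure for a 28nm -> 14nm FinFET shrink.
arch_saving = 0.20
node_saving = 0.40

# Each factor scales the remaining power, so the savings compound.
combined = 1 - (1 - arch_saving) * (1 - node_saving)
print(f"{combined:.0%}")  # 52%
```

With those assumptions, a better-than-50% power delta comes out of plain compounding, which is the poster's point: it isn't purely science fiction.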



The ten people that managed to buy one at $370 before it was discontinued so it wouldn't eat into the Skylake market? :)

The 5775C iGPU still doesn't do 60FPS at 720p though (going by the benchmarks on the Anandtech page I linked). And this is 1080p, with 2.25x as many pixels to drive.

 

You can buy the 5775C on Newegg and Amazon.

 

Depends on the game, but in Battlefront it's about 40 fps at 1080p medium.


nVidia managed a ~20% power consumption delta going from Kepler to Maxwell, and that was without the benefit of halving their transistor size (which brings a power saving all on its own). So a larger power consumption delta for going from 28nm to 14nm, on top of architectural improvements, isn't purely in the realm of science fiction.

Mate, you're saying that both tests are capped - that means the GTX 950 would draw significantly less power than it's actually rated for.

 

Keeping in mind that both systems minus video cards would draw equal amounts of power, that would have system 1 drawing its power from where?! A parallel universe?!

 

Let's say the capped GTX 950 draws ~50W on its own - that would mean the rest of the system draws ~100W.

And this is where all the bells, whistles and horns go off, because system 1, video card included, draws only 85W - HELLO!

 

As I said, AMD's testing is bullcrap!

System 1 is running with all power caps in place, cores disabled, lanes disabled; system 2 is running with no caps at all.
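The contradiction being argued here can be checked directly - a minimal sketch using the wall figures from earlier in the thread and the hypothetical ~50W cap:

```python
# Consistency check on the "capped GTX 950" hypothesis (all figures in watts).
system1_wall = 85     # Polaris system at the wall
system2_wall = 152    # Nvidia system at the wall
gtx950_capped = 50    # hypothetical capped GPU draw from the post

rest_of_system2 = system2_wall - gtx950_capped  # 102 W for everything else
# If both platforms are identical apart from the GPU, system 1's non-GPU parts
# would also need ~102 W - more than system 1's entire 85 W wall draw.
print(rest_of_system2)                 # 102
print(rest_of_system2 > system1_wall)  # True - the numbers don't add up
```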

