
AMD Raven Ridge mobile graphics are faster than the Intel Iris Plus 640

On 9/20/2017 at 8:20 AM, Okjoek said:

I like the "Ryzen" name. Much more than "Threadripper" or "Epyc", which sound too gaudy to me. The only thing I hated about the numbering was that they went with the whole 3/5/7 BS.

Threadripper would be a good name for a horse.



4 hours ago, Prysin said:

Nah, HBM, even HBM1, would kill the price bracket. You need a cheaper solution, like a 256-512MB L4 cache as on Broadwell.

A small HBM stack wouldn't be that expensive, and a true L4 cache would cost much more than HBM does. The memory types used in CPU/GPU caches are by far the most expensive. Mind you, you could call on-package HBM an L4 cache anyway; there's no rule about which memory type has to be used for something to count as a CPU/GPU cache.

 

HBM is also much cheaper to implement on-package but off-die, and it's designed for exactly that, which is why it's fit for purpose. HBM2 in its largest capacity is expensive, sure, but that's normal for an emerging tech or for the highest capacity of anything.


I'm not worried about it using lots of power at all.  Actually I see this thing being crazily efficient.

 

All of AMD's 14nm parts have been extremely efficient at lower clockspeeds and voltages. Even Fiji cards (which I know are not 14nm) become shockingly power-efficient if you drop the clockspeeds a bit and pull some voltage out. They've just been coming out of the box with the voltage YOLO'd in order to hit high clocks.

 

Looking at the full-fat chips:

When HardOCP did a clock-for-clock test of Vega 64 vs. Fury X, they saw total system power consumption on the V64 machine drop from 476W to 308W, without even dropping the (stupidly high) voltage Vega tends to run at. So from lower clockspeeds alone (1050MHz wouldn't surprise me too much for mobile Vega), a 64-CU card goes from being a 375W monster to using more like 200W.
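Back-of-the-envelope, dynamic power scales roughly with f·V², which is why dropping clocks and voltage together pays off so much. A rough sketch (the clock/voltage numbers here are illustrative guesses, not measurements):

```python
# Rough dynamic-power scaling: P ~ C * f * V^2 (the capacitance term
# cancels when you take a ratio between two operating points)
def power_ratio(f_new, v_new, f_old, v_old):
    """Ratio of new to old dynamic power for a clock/voltage change."""
    return (f_new / f_old) * (v_new / v_old) ** 2

# Illustrative numbers: ~1550 MHz @ 1.2 V down to ~1050 MHz @ 0.95 V
r = power_ratio(1050, 0.95, 1550, 1.2)
print(f"power drops to ~{r:.0%} of stock")
```

Even without touching voltage, the frequency term alone explains a big chunk of HardOCP's 476W-to-308W drop; pull voltage down too and it compounds quadratically.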

 

The only decent non-mining results I've found for an underclocked and undervolted Vega were for the V56 that Tom's Hardware measured in the 160W range at ~1100MHz under 4K gaming loads (link below).

http://www.tomshardware.com/reviews/radeon-rx-vega-56,5202-22.html

 

It's pretty safe to assume an integrated GPU based on the Vega architecture is gonna run a lot closer to the one in the Tom's article, and with only 11 CUs you've got 1/5 the stream processors to feed, so you don't need the massive power delivery setup. At that point we're already in the neighborhood of 30-40 watts, and that's basing our assumptions on overbuilt desktop hardware that doesn't let you manually set voltages below something like 900mV (on a GPU that will run at 1500MHz at 1050mV).
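Scaling that 160W figure down by CU count gives the same ballpark. This is a very crude estimate: it ignores fixed costs like HBM2, fixed-function blocks, and the fact the iGPU will share DDR4 instead:

```python
# Crude ballpark: scale measured board power linearly by compute units.
# Ignores memory subsystem, fixed-function hardware, and binning differences.
v56_power_w, v56_cus = 160, 56   # Tom's Hardware undervolted V56 result
igpu_cus = 11                    # rumored Raven Ridge CU count
print(f"~{v56_power_w * igpu_cus / v56_cus:.0f} W")  # ~31 W
```

That lands right at the bottom of the 30-40 watt neighborhood above, and a mobile part would be binned and volted better than desktop silicon anyway.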

SFF-ish:  Ryzen 5 1600X, Asrock AB350M Pro4, 16GB Corsair LPX 3200, Sapphire R9 Fury Nitro -75mV, 512gb Plextor Nvme m.2, 512gb Sandisk SATA m.2, Cryorig H7, stuffed into an Inwin 301 with rgb front panel mod.  LG27UD58.

 

Aging Workhorse:  Phenom II X6 1090T Black (4GHz #Yolo), 16GB Corsair XMS 1333, RX 470 Red Devil 4gb (Sold for $330 to Cryptominers), HD6850 1gb, Hilariously overkill Asus Crosshair V, 240gb Sandisk SSD Plus, 4TB's worth of mechanical drives, and a bunch of water/glycol.  Coming soon:  Bykski CPU block, whatever cheap Polaris 10 GPU I can get once miners start unloading them.

 

MintyFreshMedia:  Thinkserver TS130 with i3-3220, 4gb ecc ram, 120GB Toshiba/OCZ SSD booting Linux Mint XFCE, 2TB Hitachi Ultrastar.  In Progress:  3D printed drive mounts, 4 2TB ultrastars in RAID 5.


5 hours ago, leadeater said:

A small HBM stack wouldn't be that expensive, and a true L4 cache would cost much more than HBM does. The memory types used in CPU/GPU caches are by far the most expensive. Mind you, you could call on-package HBM an L4 cache anyway; there's no rule about which memory type has to be used for something to count as a CPU/GPU cache.

 

HBM is also much cheaper to implement on-package but off-die, and it's designed for exactly that, which is why it's fit for purpose. HBM2 in its largest capacity is expensive, sure, but that's normal for an emerging tech or for the highest capacity of anything.

Why not have a separate memory DIMM on the motherboard for graphics memory? Or would it have too much latency, or is there something I'm missing?


1 hour ago, Okjoek said:

Why not have a separate memory DIMM on the motherboard for graphics memory? Or would it have too much latency, or is there something I'm missing?

Well, right now APUs just share the system memory, so there'd be no real difference in doing that. The main issues with sharing system memory are capacity (on cheap devices) and bandwidth. Bandwidth really is the biggest issue, though: starve a GPU of that and it doesn't matter how big or fast it is, it'll perform badly.
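For a sense of scale, here's the peak bandwidth math for a shared dual-channel DDR4 setup versus a modest discrete card (the DDR4-2400 and 7 GT/s GDDR5 speeds are just illustrative picks):

```python
def ddr_bandwidth_gbs(mt_per_s, channels, bus_width_bits=64):
    """Peak bandwidth in GB/s: transfer rate x channels x bytes per transfer."""
    return mt_per_s * 1e6 * channels * (bus_width_bits / 8) / 1e9

# Dual-channel DDR4-2400, shared between the CPU cores AND the iGPU
print(ddr_bandwidth_gbs(2400, 2))        # ~38.4 GB/s

# vs. a small 128-bit GDDR5 card at 7 GT/s, all of it dedicated to the GPU
print(ddr_bandwidth_gbs(7000, 1, 128))   # ~112 GB/s
```

And the iGPU only gets whatever slice of that 38 GB/s the CPU isn't using, which is why bandwidth starvation hits APUs so hard.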


4 hours ago, leadeater said:

Well, right now APUs just share the system memory, so there'd be no real difference in doing that. The main issues with sharing system memory are capacity (on cheap devices) and bandwidth. Bandwidth really is the biggest issue, though: starve a GPU of that and it doesn't matter how big or fast it is, it'll perform badly.

Yup, and we know bandwidth is what holds APUs back: the discrete equivalent of the APU's iGPU had 3.2x the bandwidth of the APU itself. The R7 240 has roughly the same GPU throughput (TFLOPS), but it has FAR more bandwidth.


On 21/09/2017 at 10:38 AM, Brooksie359 said:

I would argue that the main issue is Intel has to try to make a GPU without using any AMD or Nvidia IP, which is basically impossible to do while making a decent GPU. I believe Intel has even licensed some AMD IP recently.

Of course you can; Qualcomm's Snapdragon has a GPU with IPC comparable to Nvidia's, but they don't use x86. In the past Intel used PowerVR with some Atoms, though they sucked at the drivers.


So when are we going to have an iGPU or APU that's as fast as whatever the equivalents of these are at the time? xD

[images: Nvidia Titan Xp, Quadro, Ryzen Threadripper 1950X, Core i7 retail box]


9 minutes ago, Okjoek said:

How would you guys feel if they made a TR4 APU?

I would feel like it was out of my price range.


14 minutes ago, Okjoek said:

How would you guys feel if they made a TR4 APU?

Well, if they made a TR4-sized APU it would be really awesome (basically an 1800X + Vega 56-ish). BUT it's a specialized setup: it would make sense in SFF PCs, but not big towers. AMD has published a white paper about a server-grade APU with 4 CPU dies, 8 GPU dies, and HBM per GPU on one package.

if you want to annoy me, then join my teamspeak server ts.benja.cc


1 hour ago, The Benjamins said:

Well, if they made a TR4-sized APU it would be really awesome (basically an 1800X + Vega 56-ish). BUT it's a specialized setup: it would make sense in SFF PCs, but not big towers. AMD has published a white paper about a server-grade APU with 4 CPU dies, 8 GPU dies, and HBM per GPU on one package.

Hey, yeah, I'd buy into TR4 as an ITX form factor with 8-core CPU performance, and turn the rest of that massive space into GPU horsepower with HBM2. Get one of those 300W HDPLEX boards...

 

Some kind of custom cooling solution? Heatsink case?...

 

I still wonder, though: wouldn't it be better to just get a low-profile GPU with an R7 1700? Pair it with one of those Noctua L9 x 65 coolers, which IIRC would be shorter than a LP GPU.


7 hours ago, System Error Message said:

Of course you can; Qualcomm's Snapdragon has a GPU with IPC comparable to Nvidia's, but they don't use x86. In the past Intel used PowerVR with some Atoms, though they sucked at the drivers.

What? x86 has nothing to do with GPUs. And Qualcomm's Adreno GPU technology was bought from AMD; it was ATI's mobile effort.

That's why they have access to a lot of graphics IP. 

 

You need access to graphics IP to make one.

The only other way is to make a giant black box and hope no one discovers how your GPU works, which is probably what Apple is doing right now if they haven't licensed any IP. If they haven't licensed anything, they've probably used their intimate knowledge of PowerVR's tech to reverse engineer it and build a better GPU, while praying Imagination doesn't find enough evidence to sue them into oblivion. Alternatively, they're hoping no one makes an issue of it, to avoid graphics companies suing each other left and right in patent warfare; other than cross-licensing agreements, that fear of starting a war is basically what keeps them all in check.


7 hours ago, System Error Message said:

Of course you can; Qualcomm's Snapdragon has a GPU with IPC comparable to Nvidia's, but they don't use x86. In the past Intel used PowerVR with some Atoms, though they sucked at the drivers.

Adreno graphics are just built on old ATI tech. It's not a coincidence that "Adreno" is an anagram of "Radeon".



8 hours ago, PianoPlayer88Key said:

So when are we going to have an iGPU or APU that's as fast as whatever the equivalents of these are at the time? xD

[images: Nvidia Titan Xp, Quadro, Ryzen Threadripper 1950X, Core i7 retail box]

Never, probably, since you'll always have more power/heat headroom by keeping the GPU separate from the CPU.


5 hours ago, Trixanity said:

What? x86 has nothing to do with GPUs. And Qualcomm's Adreno GPU technology was bought from AMD; it was ATI's mobile effort.

That's why they have access to a lot of graphics IP. 

 

You need access to graphics IP to make one.

The only other way is to make a giant black box and hope no one discovers how your GPU works, which is probably what Apple is doing right now if they haven't licensed any IP. If they haven't licensed anything, they've probably used their intimate knowledge of PowerVR's tech to reverse engineer it and build a better GPU, while praying Imagination doesn't find enough evidence to sue them into oblivion. Alternatively, they're hoping no one makes an issue of it, to avoid graphics companies suing each other left and right in patent warfare; other than cross-licensing agreements, that fear of starting a war is basically what keeps them all in check.

I mean Qualcomm uses ARM with Adreno, not x86. Funny how old ATI tech competes quite well with Nvidia in IPC in the mobile segment, using a unified GPU architecture compatible with the latest OpenCL.

 

You can make your own GPU; a GPU is essentially a CPU with many, many FPUs, coupled with some graphics-specific hardware like ROPs and TMUs, sitting on a lot of memory bandwidth. If you have an FPGA you can always give it a go.
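That "many identical FPUs" idea is basically SIMD: one program applied across thousands of data lanes at once. A toy illustration, with a NumPy array standing in for the parallel lanes (the function name and numbers are just made up for the example):

```python
import numpy as np

# Toy "shader": every lane runs the same multiply-add on its own pixel,
# the way a GPU applies one program across all its stream processors.
def toy_shader(pixels, scale, bias):
    # One vectorized multiply-add over all lanes at once
    return pixels * scale + bias

frame = np.linspace(0.0, 1.0, 8)  # stand-in for a row of pixel values
print(toy_shader(frame, 0.5, 0.25))
```

A real GPU adds the scheduling, texture units, and ROPs around that core idea, which is where the patent minefield the thread is talking about lives.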

5 hours ago, Phate.exe said:

Adreno graphics are just built on old ATI tech.  It's not a coincidence that "Adreno" is an anagram of "Radeon"

Perhaps AMD needs to look at its own roots for better GPUs.


10 hours ago, System Error Message said:

I mean Qualcomm uses ARM with Adreno, not x86. Funny how old ATI tech competes quite well with Nvidia in IPC in the mobile segment, using a unified GPU architecture compatible with the latest OpenCL.

 

You can make your own GPU; a GPU is essentially a CPU with many, many FPUs, coupled with some graphics-specific hardware like ROPs and TMUs, sitting on a lot of memory bandwidth. If you have an FPGA you can always give it a go.

Perhaps AMD needs to look at its own roots for better GPUs.

Of course you can. But you can't sell it. You'll be unable to avoid infringing on patents; it's impossible to make a modern GPU without infringing on at least one, and that's the core problem. You can, however, try to obfuscate your work to keep the lawsuits at bay.

