
AMD Raven Ridge mobile graphics are faster than the Intel Iris Plus 640

Okjoek
15 hours ago, Sauron said:

Because Vega draws a lot of power. It may not matter as much on a desktop, but on a laptop it's crucial.

Vega at lower clocks is actually very efficient


10 hours ago, Prysin said:

those are VGA clocks. APU clocks could come down to 1100-1300 MHz and still be damn fast for an iGPU.

Those are similar to Vega 56 clocks. I'm guessing they will go a lot lower than that.


59 minutes ago, Humbug said:

Those are similar to Vega 56 clocks. I'm guessing they will go a lot lower than that.

I thought we saw some leaks with clocks around 700 MHz?


17 hours ago, hey_yo_ said:

A proprietary I/O developed by Intel in collaboration with Apple. It enables eGPUs, which is good for adding extra graphics to laptops, and can connect two 4K monitors at 60 Hz, a single 4K monitor at 120 Hz, or a single 5K monitor at 60 Hz. At the moment, only Skylake and above use TB3.

 

I agree that an eGPU has significantly more potential than on-board graphics from a technical perspective, but I have never seen anyone use an eGPU with their laptop in real life. The point of a laptop is portability, and no one wants to carry a larger-than-laptop eGPU enclosure around with their laptop-sized laptop. An eGPU hurts the portability of the device. Why pay for the GPU and the eGPU case that harms portability when you can just pay another $500-$1000 on the initial laptop sticker price for a decent-to-great dedicated GPU that's built in and doesn't kill portability?

 

Seriously, I'm probably missing something here. Is an eGPU better when connecting the laptop to external screens rather than its built-in screen? Do people mine with laptops? Do dedicated GPUs kill laptop battery life while eGPUs don't? What am I missing? Aren't TB3 monitors rare and expensive as well?

 

I look at Raven Ridge and see huge potential, because it looks like it will use only as much power as, or slightly more than, a competitive Intel part, yet Raven Ridge APU laptops likely won't need a dedicated GPU even up into the lower-mid-range to mid-range offerings, whereas Intel-based laptops will. That means longer battery life and smaller-form-factor mid-range laptops with headroom for better cooling.

CPU: i7 4790k @ 4.7 GHz

GPU: XFX GTS RX580 4GB

Cooling: Corsair h100i

Mobo: Asus z97-A 

RAM: 4x8 GB 1600 MHz Corsair Vengence

PSU: Corsair HX850

Case: NZXT S340 Elite Tempered glass edition

Display: LG 29UM68-P

Keyboard: Roccat Ryos MK FX RGB

Mouse: Logitech g900 Chaos Spectrum

Headphones: Sennheiser HD6XX

OS: Windows 10 Home


16 hours ago, The Benjamins said:

GPU tasks need faster RAM. DDR3 is about 15-25 GB/s and DDR4 about 20-30 GB/s (I might be off by a bit, but close enough), whereas HBM1 is ~500 GB/s with 4 stacks and HBM2 is ~480 GB/s with 2 stacks (Fiji, Vega). GDDR5 has been seen anywhere from 200 GB/s to 500 GB/s.

 

DDR4 will bring some nice gains for the A12-9800 vs the A10-7870K (I have one of those in my oil PC).

 

Testing on the A10-7870K, I noticed 5-10% gains in FPS when going from DDR3-1600 to DDR3-1866.

 

This is why the console APUs use GDDR5.

 

So an APU with on-package HBM would eliminate the slow-memory bottleneck that a GPU sharing system RAM faces.

Because GDDR5(X) and HBM (2) can transfer significantly faster than DDR4, would it be possible for an APU (or CPU) to use HBM or GDDR5 in place of DDR4? I recognize you just said that a GPU couldn't use the CPU's DDR4 memory because the CPU's memory is too slow, but I'm asking if the reverse case is possible (CPU using GPU memory). I noticed you mentioned that consoles use GPU memory, but I don't know much about CPU architecture so I have no idea if that same concept could be transferred to existing CPU architectures.



45 minutes ago, ATFink said:

Seriously, I'm probably missing something here. Is an eGPU better when connecting the laptop to external screens rather than its built-in screen? Do people mine with laptops? Do dedicated GPUs kill laptop battery life while eGPUs don't? What am I missing? Aren't TB3 monitors rare and expensive as well?

TB3 can charge a laptop while transferring data at 40 Gbps because it can transfer both power and data at the same time. Since the LG Ultrafine 5K monitor can charge a MacBook Pro or any laptop with TB3 while extending displays, I bet the same goes with an eGPU enclosure. I don't think anyone is mining with their laptop. That's why I don't see Intel licensing Thunderbolt 3 to AMD anytime soon.
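As a rough sanity check on the display claims above, here is a quick back-of-the-envelope Python estimate of the raw pixel data for each configuration quoted earlier in the thread. All of them fit inside TB3's 40 Gbps link (this ignores blanking and protocol overhead, so real requirements are somewhat higher):

```python
# Raw uncompressed pixel-data rate for the display modes TB3 is rated
# for, at 24-bit color, ignoring blanking/protocol overhead.
def video_gbps(width, height, hz, bpp=24):
    return width * height * hz * bpp / 1e9

configs = {
    "2x 4K @ 60 Hz":  2 * video_gbps(3840, 2160, 60),
    "1x 4K @ 120 Hz": video_gbps(3840, 2160, 120),
    "1x 5K @ 60 Hz":  video_gbps(5120, 2880, 60),
}
for name, gbps in configs.items():
    print(f"{name}: ~{gbps:.1f} Gbps (TB3 link: 40 Gbps)")
```

A single 4K60 stream is only ~12 Gbps, which is also why there's bandwidth left over for eGPU traffic alongside a display.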

There is more that meets the eye
I see the soul that is inside

 

 


3 hours ago, ATFink said:

Because GDDR5(X) and HBM (2) can transfer significantly faster than DDR4, would it be possible for an APU (or CPU) to use HBM or GDDR5 in place of DDR4? I recognize you just said that a GPU couldn't use the CPU's DDR4 memory because the CPU's memory is too slow, but I'm asking if the reverse case is possible (CPU using GPU memory). I noticed you mentioned that consoles use GPU memory, but I don't know much about CPU architecture so I have no idea if that same concept could be transferred to existing CPU architectures.

He didn't say the GPU can't run off DDR3/4 because they're too slow; he said the consoles used GDDR5 instead because it has much greater bandwidth. With that out of the way: the APU *should* be capable of using GDDR5 or HBM as system RAM, as demonstrated by the current-gen consoles.

AMD has been dedicating a lot of resources for a while to making their APUs HSA compliant (I think they head that consortium; correct me if I'm wrong). One of the resulting features is heterogeneous Uniform Memory Access, or hUMA for short. (Link) Also, current consoles are based on the same x86 ISA used in most PCs today.


5 hours ago, Humbug said:

Those are similar to Vega 56 clocks. I'm guessing they will go a lot lower than that.

Nah. 28nm SHP Kaveri/Carrizo managed around 1100 MHz OC'd, 825 MHz factory (7870K). 1100 MHz is easily within reach; maybe 1250-1300 MHz is the max factory OC. Time will tell.

 

Kaveri was limited more by TDP, due to the CPU cores being incredibly power hungry and the APU using a shared power circuit. On AM4 boards, the top row of capacitors is for the APU's GPU section only, while the other power phases are for the CPU. Look closely and you'll see up to 6 phases for some APU GPUs alone. This allows incredibly granular control of power, and with a socket and pinout able to handle far more power than FM2+ was designed for, it goes without saying that a top-end Zen + Vega APU could easily be allowed 95-100 W of base TDP while remaining unlocked. Mind you, 14nm LPP, which is what the APUs are launching on (possibly 12nm LP later?), is more than 30% more power efficient than 28nm SHP, which was already quite good for 28nm.

 

I would be surprised if the initial top-end Vega-based APU isn't somewhere around 1050-1100 MHz. It should easily reach such clocks while staying inside the power budget.


3 hours ago, ATFink said:

Because GDDR5(X) and HBM (2) can transfer significantly faster than DDR4, would it be possible for an APU (or CPU) to use HBM or GDDR5 in place of DDR4? I recognize you just said that a GPU couldn't use the CPU's DDR4 memory because the CPU's memory is too slow, but I'm asking if the reverse case is possible (CPU using GPU memory). I noticed you mentioned that consoles use GPU memory, but I don't know much about CPU architecture so I have no idea if that same concept could be transfered to existing CPU architectures.

RAM designed for GPU use has different performance characteristics than RAM designed for CPUs. GPU RAM is built for wider buses but has higher latency: you do get higher throughput (bandwidth) for highly serialized, queued tasks, but for general workloads with a lot of switching and bursting you get less.
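A deliberately simplified toy model illustrates the trade-off (the numbers are made up, and accesses are treated as fully serialized, whereas real memory controllers overlap latency with transfers, so take it as a sketch only):

```python
# Toy model of why wide, high-latency "GDDR-style" memory wins on big
# streaming reads but loses on small scattered accesses. Illustrative
# numbers only; accesses are fully serialized here.
def effective_gbps(peak_gbps, latency_ns, chunk_bytes, count):
    total_bytes = chunk_bytes * count
    transfer_s = total_bytes / (peak_gbps * 1e9)
    stall_s = latency_ns * 1e-9 * count  # one full stall per access
    return total_bytes / (transfer_s + stall_s) / 1e9

stream = dict(chunk_bytes=1 << 20, count=100)      # 100 x 1 MiB reads
scattered = dict(chunk_bytes=64, count=1_000_000)  # 1M cache-line reads

print(effective_gbps(300, 100, **stream))     # GDDR-like, streaming: near peak
print(effective_gbps(300, 100, **scattered))  # GDDR-like, scattered: collapses
print(effective_gbps(50, 60, **scattered))    # DDR-like, scattered: hurts less
```

With these toy figures the "DDR-like" memory actually beats the "GDDR-like" one on the scattered workload despite having a sixth of the peak bandwidth, which is the point being made above.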

 



21 hours ago, mynameisjuan said:

Who would have guessed that a company with a dedicated GPU branch has better graphics in their chips than the company that doesn't have a GPU branch.

In all fairness, the company with the GPU branch has a severely lacking GPU branch atm.

- ASUS X99 Deluxe - i7 5820k - Nvidia GTX 1080ti SLi - 4x4GB EVGA SSC 2800mhz DDR4 - Samsung SM951 500 - 2x Samsung 850 EVO 512 -

- EK Supremacy EVO CPU Block - EK FC 1080 GPU Blocks - EK XRES 100 DDC - EK Coolstream XE 360 - EK Coolstream XE 240 -


3 hours ago, ATFink said:

Because GDDR5(X) and HBM (2) can transfer significantly faster than DDR4, would it be possible for an APU (or CPU) to use HBM or GDDR5 in place of DDR4? I recognize you just said that a GPU couldn't use the CPU's DDR4 memory because the CPU's memory is too slow, but I'm asking if the reverse case is possible (CPU using GPU memory). I noticed you mentioned that consoles use GPU memory, but I don't know much about CPU architecture so I have no idea if that same concept could be transferred to existing CPU architectures.

Actually, DDR4 bandwidth is quite insane once you reach 4200+ MHz... sure, it's not going to match top-end GDDR5, but it can get close to 100 GB/s (on Intel, not on AMD's crap IMC).

 

The formula is: bus width (64 bits) × channels (2) × clock speed (4200 MHz) / 8 (to get bytes) = 67.2 GB/s.

Sure, not INSANE...

 

until you realize Kaveri on DDR3 maxed out around 24 GB/s (due to a shitty IMC; it should have gotten 32 GB/s).
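The formula above in script form, using the post's own figures (the DDR4-3200 line is just an extra data point for comparison):

```python
# Peak memory bandwidth = bus width (bits) x channels x effective
# transfer rate (MT/s) / 8 bits-per-byte, reported in GB/s.
def peak_gbs(bus_bits, channels, mts):
    return bus_bits * channels * mts / 8 / 1000  # GB/s

print(peak_gbs(64, 2, 4200))  # DDR4-4200, dual channel -> 67.2
print(peak_gbs(64, 2, 3200))  # DDR4-3200, dual channel -> 51.2
```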


15 minutes ago, tjcater said:

He didn't say the GPU can't run off DDR3/4 because they're too slow; he said the consoles used GDDR5 instead because it has much greater bandwidth. With that out of the way: the APU *should* be capable of using GDDR5 or HBM as system RAM, as demonstrated by the current-gen consoles.

AMD has been dedicating a lot of resources for a while to making their APUs HSA compliant (I think they head that consortium; correct me if I'm wrong). One of the resulting features is heterogeneous Uniform Memory Access, or hUMA for short. (Link) Also, current consoles are based on the same x86 ISA used in most PCs today.

You wouldn't really want to use GDDR for a PC, though, not as system memory. Consoles are much more single-tasked, designed with one purpose in mind, whereas a PC could be doing any number of things at once, which is where GDDR starts to fall down. Peak throughput on the spec sheet isn't what you actually achieve in real life for any given task.

 

HBM on the APU package would be a great idea, though, for use with Vega's HBCC; you would probably only need a really small amount of it, too, like 1 GB. I should think HBM + DDR4 would work rather well together for a high-end APU.


Did anyone expect any different??

CanTauSces: x5675 4.57ghz ~ 24GB 2133mhz CL10 Corsair Platinum ~ MSI X58 BIG BANG ~ AMD RADEON R9 Fury Nitro 1155mhz ~ 2x Velociraptor 1TB RAID 0 ~ 960GB x3 Crucial SSD ~ Creative SB Audigy FX ~ Corsair RM series 850 watts ~ Dell U2715H 27" 2560x1440.


14 hours ago, Ryan_Vickers said:

That would be interesting.  Even just getting Ryzen laptops at all would be good imo, but I suppose they're waiting for APUs since when was the last time you saw a laptop that didn't have an iGPU for power purposes?

Funny you should say that, as one has very recently been released, with a Ryzen CPU to boot. Edit: also dat pwr consumption tho. It uses less power than an Intel mobile i7 plus GTX 1070 while sporting an R7 1700 and RX 580, and a lot less than a 7700K plus GTX 1080.

 

15 hours ago, Ryan_Vickers said:

ok, but are they better than the Iris Pro 580?  I believe that's their best iGPU at the moment

Iris Pro has an advantage due to the memory bandwidth provided by its L4 cache. AMD will be hampered by having to reach out to system memory instead. HBM would solve that, but it's a very expensive solution, so it's off the table for now. So unless Raven Ridge has some hidden ace we don't know about, it'll be bottlenecked by memory bandwidth, which is hilarious because its behemoth of a sibling, Vega 64, has the exact same problem despite using HBM.


19 minutes ago, TidaLWaveZ said:

In all fairness, the company with the GPU branch has a severely lacking GPU branch atm.

Still more advanced than Intel though.


5 hours ago, hey_yo_ said:

TB3 can charge a laptop while transferring data at 40 Gbps because it can transfer both power and data at the same time. Since the LG Ultrafine 5K monitor can charge a MacBook Pro or any laptop with TB3 while extending displays, I bet the same goes with an eGPU enclosure. I don't think anyone is mining with their laptop. That's why I don't see Intel licensing Thunderbolt 3 to AMD anytime soon.

https://newsroom.intel.com/editorials/envision-world-thunderbolt-3-everywhere/

 

Thunderbolt 3 is already planned to be openly licensed, royalty-free:

Quote

In addition to Intel’s Thunderbolt silicon, next year Intel plans to make the Thunderbolt protocol specification available to the industry under a nonexclusive, royalty-free license. Releasing the Thunderbolt protocol specification in this manner is expected to greatly increase Thunderbolt adoption by encouraging third-party chip makers to build Thunderbolt-compatible chips. We expect industry chip development to accelerate a wide range of new devices and user experiences

 

MOAR COARS: 5GHz "Confirmed" Black Edition™ The Build
AMD 5950X 4.7/4.6GHz All Core Dynamic OC + 1900MHz FCLK | 5GHz+ PBO | ASUS X570 Dark Hero | 32 GB 3800MHz 14-15-15-30-48-1T GDM 8GBx4 |  PowerColor AMD Radeon 6900 XT Liquid Devil @ 2700MHz Core + 2130MHz Mem | 2x 480mm Rad | 8x Blacknoise Noiseblocker NB-eLoop B12-PS Black Edition 120mm PWM | Thermaltake Core P5 TG Ti + Additional 3D Printed Rad Mount

 


On 9/20/2017 at 10:19 AM, Sauron said:

Because Vega draws a lot of power. It may not matter as much on a desktop, but on a laptop it's crucial.

Think Vega 64 uses a bunch of power? An undervolted Vega 64 (I own one) performs almost the same while using a lot less energy. There's no telling how powerful these are at low power until they're released, but my gut tells me the APUs are going to do well on power while still performing well. Core for core, Ryzen also uses less power than Intel's CPUs.


7 hours ago, ATFink said:

I agree that an eGPU has significantly more potential than on-board graphics from a technical perspective, but I have never seen anyone use an eGPU with their laptop in real life. The point of a laptop is portability, and no one wants to carry a larger-than-laptop eGPU enclosure around with their laptop-sized laptop. An eGPU hurts the portability of the device. Why pay for the GPU and the eGPU case that harms portability when you can just pay another $500-$1000 on the initial laptop sticker price for a decent-to-great dedicated GPU that's built in and doesn't kill portability?

 

Seriously, I'm probably missing something here. Is an eGPU better when connecting the laptop to external screens rather than its built-in screen? Do people mine with laptops? Do dedicated GPUs kill laptop battery life while eGPUs don't? What am I missing? Aren't TB3 monitors rare and expensive as well?

 

I look at Raven Ridge and see huge potential, because it looks like it will use only as much power as, or slightly more than, a competitive Intel part, yet Raven Ridge APU laptops likely won't need a dedicated GPU even up into the lower-mid-range to mid-range offerings, whereas Intel-based laptops will. That means longer battery life and smaller-form-factor mid-range laptops with headroom for better cooling.

High-end laptops rocking something like a 7700HQ but a lower-end GPU can serve as a fairly powerful desktop at home when paired with an eGPU dock, while you only take the laptop itself when you need to go somewhere.

Pyo.

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


4 hours ago, leadeater said:

You wouldn't really want to use GDDR for a PC, though, not as system memory. Consoles are much more single-tasked, designed with one purpose in mind, whereas a PC could be doing any number of things at once, which is where GDDR starts to fall down. Peak throughput on the spec sheet isn't what you actually achieve in real life for any given task.

 

HBM on the APU package would be a great idea, though, for use with Vega's HBCC; you would probably only need a really small amount of it, too, like 1 GB. I should think HBM + DDR4 would work rather well together for a high-end APU.

Nah; HBM, even HBM1, would kill the price bracket. You need a cheaper solution, like a 256-512 MB L4 cache as on Broadwell.


4 hours ago, Trixanity said:

Funny you should say that, as one has very recently been released, with a Ryzen CPU to boot. Edit: also dat pwr consumption tho. It uses less power than an Intel mobile i7 plus GTX 1070 while sporting an R7 1700 and RX 580, and a lot less than a 7700K plus GTX 1080.

Damn! Nice :D 

4 hours ago, Trixanity said:

Iris Pro has an advantage due to the memory bandwidth provided by its L4 cache. AMD will be hampered by having to reach out to system memory instead. HBM would solve that, but it's a very expensive solution, so it's off the table for now. So unless Raven Ridge has some hidden ace we don't know about, it'll be bottlenecked by memory bandwidth, which is hilarious because its behemoth of a sibling, Vega 64, has the exact same problem despite using HBM.

Ah, makes sense.

4 hours ago, mynameisjuan said:

Still more advanced than Intel though.

Is it though?  Like I said,

19 hours ago, Ryan_Vickers said:

ok, but are they better than the Iris Pro 580?  I believe that's their best iGPU at the moment

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


On 20/09/2017 at 10:17 AM, Sauron said:

That's good but power consumption might be a concern.

 

Also, AMD seems particularly passionate about making their naming scheme extra confusing. Aside from taking the "U" suffix straight from Intel and basically reusing Sandy Bridge's model numbers, how on Earth does the name "2500U" tell you it's even in the same generation as the "1700X"?

At 300 MHz, Vega is incredibly efficient.


21 minutes ago, Ryan_Vickers said:

Damn! Nice :D 

Ah, makes sense.

Is it though?  Like I said,

I should clarify that just because Raven Ridge will be bottlenecked by memory bandwidth doesn't mean it will perform badly, just that it could likely perform significantly better with more of it. Likewise, this doesn't mean Iris Pro 580 is faster than a top Raven Ridge SKU; Raven Ridge is likely to be faster. But if Raven Ridge had an L4 cache, HBM, or some other way to boost memory bandwidth (assuming their HBCC solution, enabled and working, doesn't do much), it would level the playing field to the point where AMD's GPU prowess would show. The GPU itself should be that much more powerful.


18 minutes ago, Trixanity said:

I should clarify that just because Raven Ridge will be bottlenecked by memory bandwidth doesn't mean it will perform badly, just that it could likely perform significantly better with more of it. Likewise, this doesn't mean Iris Pro 580 is faster than a top Raven Ridge SKU; Raven Ridge is likely to be faster. But if Raven Ridge had an L4 cache, HBM, or some other way to boost memory bandwidth (assuming their HBCC solution, enabled and working, doesn't do much), it would level the playing field to the point where AMD's GPU prowess would show. The GPU itself should be that much more powerful.

But do we actually have performance numbers for the best APU AMD has plans to release, or is that still not out yet?



9 minutes ago, Ryan_Vickers said:

But do we actually have performance numbers for the best APU AMD has plans to release, or is that still not out yet?

Not really numbers. The best we've got is Geekbench, which means not useful at all. What we do have is a CU count, supposedly 11 (that's 704 stream processors), which is more than an RX 550 but fewer than an RX 560 (though those are Polaris-based, not Vega-based). It's hard to tell final performance; it's very dependent on how AMD handles the memory bandwidth starvation. Their HBCC supposedly alleviates it, but we haven't really seen it do much of anything on desktop Vega. Whether that's a driver issue, or it just doesn't do anything when you have 8 GB of HBM2, remains to be seen. If we disregard memory bandwidth, then Raven Ridge will be a fair bit faster on clock speed and stream processors alone.

The problem is we have no basis for comparison. If we compare Iris Pro to the RX 550, the RX 550 is much faster, but it also has GDDR5, so it isn't comparable since it has the memory it needs. Bristol Ridge laptops are a fair bit slower, but they're also crippled by a poor CPU, an older GPU architecture (GCN 1.2, same as Fiji and Tonga), often single-channel memory (even if DDR4), 512 or fewer stream processors, and slower clocks, conditions Raven Ridge shouldn't face. So we have no reliable benchmarks to compare against.
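For reference, GCN packs 64 stream processors per CU, and each SP delivers 2 FLOPs per clock (FMA), so the rumored 11-CU part can be sketched like this. Note the 1100 MHz clock is just the guess floated earlier in the thread, not a confirmed spec:

```python
# GCN: 64 stream processors per compute unit (CU); peak FP32
# = SPs x 2 FLOPs-per-clock x clock. The clock is an assumption.
def gcn_peak_tflops(cus, clock_mhz):
    sps = cus * 64
    return sps * 2 * clock_mhz * 1e6 / 1e12

print(11 * 64)                    # 704 stream processors
print(gcn_peak_tflops(11, 1100))  # ~1.55 TFLOPS peak FP32
```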

