AMD CEO Dr. Lisa Su, interviewed (Strategy, Radeons, CPUs, consoles)

ahhming

Something else people need to understand with AMD: creating a new architecture (which is what you all want) requires money and time. AMD doesn't have deep pockets like Intel, so it's going to take a little longer. AMD is working on bringing you what you're asking for; just be patient. If you're currently running hardware that's fairly recent (new within the last two years, give or take), there's no need to upgrade for at least another year or more anyway.

I slightly disagree, because my FX-8350 has been getting its butt handed to it by i5 processors.

"45 ACP because shooting twice is silly!"

Link to comment
Share on other sites

Link to post
Share on other sites

I slightly disagree, because my FX-8350 has been getting its butt handed to it by i5 processors.

 

That's due to poor game coding. If a game can properly utilize all of its cores, it's a different story. The 8350 cleans house against the i5 in transcoding and rendering.


Some important bits:

AMD is still committed to raw CPU performance.

AMD wants to be seen as the one leading market innovation.

Hints that AMD devices will be in wearables, smartphones, and cars.

More power-efficient / higher-performance APUs.

Bridging x86 & ARM.

AMD aiming for 16, 14, 10, and 7 nm.

Committed to raw performance: OK? That doesn't have to be said; you'd better be.

Leading the competition: OK? Doesn't every company want that in their own market(s)? It doesn't have to be said...

Wearables: oh god... as if we cared. Go make your profits in wearables and invest them in your CPUs, please!

Power efficiency: they say this, what, every year?

Bridging x86 & ARM: alright, cool, I guess?

7 nm: easier said than done, but good luck anyway.

 

Honestly, she said nothing >.>


Committed to raw performance: OK? That doesn't have to be said; you'd better be.

Leading the competition: OK? Doesn't every company want that in their own market(s)? It doesn't have to be said...

Wearables: oh god... as if we cared. Go make your profits in wearables and invest them in your CPUs, please!

Power efficiency: they say this, what, every year?

Bridging x86 & ARM: alright, cool, I guess?

7 nm: easier said than done, but good luck anyway.

 

Honestly, she said nothing >.>

 

Pardon my French, but what the f*ck did you want her to say if those are AMD's goals? oO

She committed to CPU development when people thought it had been forgotten, said she wants to keep the innovation-leader crown (if you have doubts about that, look at the past 2-3 years), pointed out that they are going strong on ARM with several solutions from wearables to smartphones, and reaffirmed power efficiency and process shrinking.


Pardon my French, but what the f*ck did you want her to say if those are AMD's goals? oO

She committed to CPU development when people thought it had been forgotten, said she wants to keep the innovation-leader crown (if you have doubts about that, look at the past 2-3 years), pointed out that they are going strong on ARM with several solutions from wearables to smartphones, and reaffirmed power efficiency and process shrinking.

I don't expect anything; it's just that what was said is what we already knew. I mean, no company would ever say, "Yeah... so we gave up on trying to be the leader in this market."

So yes, I believe it when a company says "we want to lead innovation" in their respective market(s).

 

I suppose I do expect something: I expect to be wowed by AMD. Just getting a little frustrated with them is all :P


What are you talking about? Intel ships by far the most iGPUs and Nvidia ships the most discrete GPUs.

As for performance, the GK110 was released at the beginning of 2013, and AMD still has nothing to beat the now fully enabled version of that chip.

The 290X and 780 Ti trade blows once you get beyond the reference-cooler options. The 295X2 also beats the Titan Z, even with the Devil 13 air cooler.

 

Volume doesn't entirely matter, and Nvidia owns the server-accelerator world, which invested in CUDA a long time ago. In reality, for PCs, AMD does a lot better than you think.


The 290X and 780 Ti trade blows once you get beyond the reference-cooler options. The 295X2 also beats the Titan Z, even with the Devil 13 air cooler.

 

Volume doesn't entirely matter, and Nvidia owns the server-accelerator world, which invested in CUDA a long time ago. In reality, for PCs, AMD does a lot better than you think.

What the... who hacked patrickjp93's account?!


If AMD goes with a new design on the CPUs, I really hope they use the AM3+ socket. I really don't feel like going out and buying a new mobo.

*facepalm* No. They will have to jump to DDR4 and PCIe 4.0 at this point to keep up with Intel on features.

 

People say PCIe 4.0 won't matter, but riddle me this: if you can use fewer lanes to get the same bandwidth or better, wouldn't that drastically help SLI/XFire and leave more room for PCIe/M.2 SSDs?

 

If 4x/4x/4x/4x became just as good as the old 8x/8x/8x/8x, look at all the new room you get for other stuff. Suddenly you could have 4-way XFire, run two 4x M.2 drives, a sound card, a RAID card, a 10 or 100 Gbps Ethernet card or Wireless-AC card, and still have lanes and bandwidth left for something else on an Intel x930K. Mind you, this would require PCIe extension cables and either a modified cutout on the back of your case or creative wiring through the leftover empty slots, but it's possible. A rough sketch of the lane math is below.
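
A quick back-of-the-envelope sketch of that lane math (Python; the per-lane figures are the standard effective rates after encoding overhead, and the comparison is illustrative, not a benchmark):

```python
# Rough per-lane PCIe bandwidth per generation, in GB/s one direction.
# These are the published effective rates after encoding overhead
# (8b/10b for 1.x/2.0, 128b/130b for 3.0/4.0).
PER_LANE_GBPS = {
    "1.1": 0.25,
    "2.0": 0.50,
    "3.0": 0.985,
    "4.0": 1.969,
}

def link_bandwidth(gen: str, lanes: int) -> float:
    """Effective one-directional bandwidth of a PCIe link, in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

# The core of the argument: a 4.0 x4 link carries what a 3.0 x8 does,
# so the same CPU lane budget stretches across twice as many devices.
print(f"3.0 x8  -> {link_bandwidth('3.0', 8):.2f} GB/s")   # ~7.88
print(f"4.0 x4  -> {link_bandwidth('4.0', 4):.2f} GB/s")   # ~7.88
print(f"4.0 x16 -> {link_bandwidth('4.0', 16):.2f} GB/s")  # ~31.50
```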


What the... who hacked patrickjp93 account?!

No one. I'm objective. Intel may have finally created a graphics architecture with enough performance density to smack AMD across the face in the iGPU space, but unless they start making decent dGPUs out of it, those parts are still only useful for heterogeneous acceleration of small(er) programs and light gaming.

 

Also, it's no secret the R9 290X does better at higher resolutions, and if AMD fixed their DX11 driver instead of sitting on their asses (following Nvidia's example from a few months back), the 290X would lead again.


When the DX12 API comes, it could be a nice game-changer for AMD CPUs and the upcoming games.

DX12 will take a lot of overhead away from the CPU, so I'm very curious how this works out with the new upcoming game titles.

 

People should stop looking back and look forward.

We have already seen massive performance boosts in games that supported Mantle.

But Mantle itself didn't really make it in the market.

With the new upcoming 300-series GPUs and the DX12 API, I'm very curious how much the CPU will matter going forward. A toy model of the overhead argument follows below.

 

I think the CPU will become less important in the future of gaming.
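
To make the overhead point concrete, here is a toy model (Python; every number in it is made up for illustration) of why cutting per-draw-call CPU cost matters most to slower CPUs:

```python
# Toy model of API draw-call overhead, the thing Mantle/DX12 attack:
# the CPU submits each draw call at some fixed cost, the GPU renders
# in parallel, and the slower of the two sets the frame time.
# All numbers below are assumed for illustration only.
def fps(gpu_ms: float, draws: int, cpu_us_per_draw: float) -> float:
    cpu_ms = draws * cpu_us_per_draw / 1000.0
    return 1000.0 / max(gpu_ms, cpu_ms)

DRAWS = 8000    # draw calls per frame (assumed)
GPU_MS = 12.0   # GPU render time per frame (assumed)

for label, cost_us in (("thick API, slow CPU", 4.0),
                       ("thick API, fast CPU", 2.0),
                       ("thin API,  slow CPU", 0.5)):
    print(f"{label}: {fps(GPU_MS, DRAWS, cost_us):5.1f} FPS")
# -> 31.2, 62.5, 83.3: with a thin API even the slow CPU stops being
#    the bottleneck, which is the "CPU matters less" argument.
```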


That's due to poor game coding. If a game can properly utilize all of its cores, it's a different story. The 8350 cleans house against the i5 in transcoding and rendering.

I would also chalk up the 8350's victory in that regard to poor coding. There are enough parallel resources in Intel's cores that they should be able to process just as much. Ivy Bridge had 3 ALUs per core and Haswell has 4, whereas Vishera has 2.
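
A minimal sketch of what those ALU counts imply, assuming base clocks and an i5-4670 as the comparison part (both assumptions mine; real throughput depends on far more than ALU count):

```python
# Naive upper bound on scalar integer throughput: cores * ALUs * clock.
# This ignores IPC limits, memory stalls, CMT resource sharing, and
# turbo, so treat it only as a sketch of the "parallel resources" point.
def peak_int_ops_per_s(cores: int, alus_per_core: int, ghz: float) -> float:
    return cores * alus_per_core * ghz * 1e9

fx_8350 = peak_int_ops_per_s(cores=8, alus_per_core=2, ghz=4.0)  # Vishera
i5_4670 = peak_int_ops_per_s(cores=4, alus_per_core=4, ghz=3.4)  # Haswell (assumed part)

print(f"FX-8350: {fx_8350 / 1e9:.1f} G integer ops/s")  # 64.0
print(f"i5-4670: {i5_4670 / 1e9:.1f} G integer ops/s")  # 54.4
```

On this crude count the two chips land in the same ballpark, which is the post's point: the i5 is not short of parallel resources, games just don't feed them.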


When the DX12 API comes, it could be a nice game-changer for AMD CPUs and the upcoming games.

DX12 will take a lot of overhead away from the CPU, so I'm very curious how this works out with the new upcoming game titles.

 

And one nasty habit most game developers have is to fill up any performance advance with even more work (hopefully useful work). <_<


This is expected. On the GPU side of things, we will be seeing some VERY interesting things in the short term, e.g., the launch of the 3xx GPUs that will (hopefully) blow Maxwell away.

 

On the CPU side of things, anyone who's been keeping up knows that we won't learn anything new until late 2015/early 2016. Zen won't launch until 2016, regardless of how often people whine that AMD should release something NOW!


The argument I keep seeing for AMD, that when games are coded properly they will perform better because AMD has more cores, doesn't make sense to me, because Intel chips would benefit from better coding just as much: i7s would become beasts, i5s would use all 4 of their cores to the full, the i3 would become an amazing dual core, etc.
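
That intuition is basically Amdahl's law; a minimal sketch (Python, with illustrative parallel fractions) of how both 4-core and 8-core chips gain from better-threaded code:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
# fraction of the frame's work that can run in parallel.  Better
# threading (higher p) lifts a 4-core i5 and an 8-core FX alike;
# the extra cores only pull clearly ahead once p gets high.
def amdahl_speedup(p: float, cores: int) -> float:
    return 1.0 / ((1.0 - p) + p / cores)

for p in (0.50, 0.90, 0.99):
    s4 = amdahl_speedup(p, 4)  # i5-class
    s8 = amdahl_speedup(p, 8)  # FX-class
    print(f"p = {p:.2f}: 4 cores -> {s4:.2f}x, 8 cores -> {s8:.2f}x")
# p = 0.50: 4 cores -> 1.60x, 8 cores -> 1.78x
# p = 0.90: 4 cores -> 3.08x, 8 cores -> 4.71x
# p = 0.99: 4 cores -> 3.88x, 8 cores -> 7.48x
```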


Something else people need to understand with AMD: creating a new architecture (which is what you all want) requires money and time. AMD doesn't have deep pockets like Intel, so it's going to take a little longer. AMD is working on bringing you what you're asking for; just be patient. If you're currently running hardware that's fairly recent (new within the last two years, give or take), there's no need to upgrade for at least another year or more anyway.

I figure this is why they aren't releasing a desktop Carrizo and haven't released any new FX chips based on Bdver3. They are more than likely trying to conserve resources to invest in other IP.

 

I can't speak for everyone, but I'm not excited about die sizes directly; I'd rather have a good architecture on a larger node than a bad one on a smaller node.

I'm not either. AMD has proven time and time again that node isn't everything. Carrizo brings a 30% improvement in performance per watt on the same exact node as Kaveri. These technologies will eventually scale up once they move to FinFET designs.

 

If AMD goes with a new design on the CPUs, I really hope they use the AM3+ socket. I really don't feel like going out and buying a new mobo.

Zen will be based around an entirely new architecture, so a new motherboard and socket will most definitely be needed. Keep in mind Zen may be a complete SoC, which would rule out the need for a southbridge entirely. The most interesting concept I think AMD may bring to the table, as they have with Carrizo, is a shared uArch across segments: letting you run an FX chip, an APU, or even mobile chips in a single socket. In short, taking AM1, FM2+, and AM3+ and smashing them all together. Though that's just something I personally would like to see happen.

 

What the... who hacked patrickjp93's account?!

Wondering the same thing myself.

 

When the DX12 API comes, it could be a nice game-changer for AMD CPUs and the upcoming games.

DX12 will take a lot of overhead away from the CPU, so I'm very curious how this works out with the new upcoming game titles.

 

People should stop looking back and look forward.

We have already seen massive performance boosts in games that supported Mantle.

But Mantle itself didn't really make it in the market.

With the new upcoming 300-series GPUs and the DX12 API, I'm very curious how much the CPU will matter going forward.

 

I think the CPU will become less important in the future of gaming.

The problem with DirectX 12 is that by the time it becomes widely adopted in the game industry, Zen will have already launched and replaced Bdver2 in most of our machines. Certainly it will help people who tend to cling to their hardware for a very long time, though personally, at that point, anyone complaining about poor performance should reconsider running 5-year-old hardware.


I figure this is why they aren't releasing a desktop Carrizo and haven't released any new FX chips based on Bdver3. They are more than likely trying to conserve resources to invest in other IP.

 

Wondering the same thing myself.

 

The problem with DirectX 12 is that by the time it becomes widely adopted in the game industry, Zen will have already launched and replaced Bdver2 in most of our machines. Certainly it will help people who tend to cling to their hardware for a very long time, though personally, at that point, anyone complaining about poor performance should reconsider running 5-year-old hardware.

I somewhat disagree with that strategy. If Excavator really does bring that much extra performance, then releasing a small number of products to refresh the enthusiast line, which its own fan base believes was abandoned, would bolster support, even if clock rates suffered a bit.

 

I've always been objective. You're just too prideful to realize it.

 

Heh, well, the old man made a Q6600 last almost 8 years. Haswell/Broadwell could easily do the same.


I somewhat disagree with that strategy. If Excavator really does bring that much extra performance, then releasing a small number of products to refresh the enthusiast line, which its own fan base believes was abandoned, would bolster support, even if clock rates suffered a bit.

 

I've always been objective. You're just too prideful to realize it.

 

Heh, well, the old man made a Q6600 last almost 8 years. Haswell/Broadwell could easily do the same.

Well, I'm still running a Xeon W3520 (X58, equivalent to the i7-920), and that sucker is exactly 6 years old this quarter. It still wrecks most games. I'll probably need an upgrade this year or next, but I'm planning one this year anyway: probably a balls-to-the-wall X99 build with the flagship R9 3xx (assuming it delivers the promised power, of course). Then I can upgrade again, or have a secondary build in 2016, to see how kickass (or not) Zen is.


lol, Nvidia needed 3 gens to finally beat the good old Tahiti. :D

Those good old Tahiti cards are still going strong nowadays.

 

The 7970 GHz can still max out 98% of today's games at 1080p.

 

You can max out any game with any card, even integrated.

 

You just won't get 60 fps.


*facepalm* No. They will have to jump to DDR4 and PCIe 4.0 at this point to keep up with Intel on features.

 

People say PCIe 4.0 won't matter, but riddle me this: if you can use fewer lanes to get the same bandwidth or better, wouldn't that drastically help SLI/XFire and leave more room for PCIe/M.2 SSDs?

 

If 4x/4x/4x/4x became just as good as the old 8x/8x/8x/8x, look at all the new room you get for other stuff. Suddenly you could have 4-way XFire, run two 4x M.2 drives, a sound card, a RAID card, a 10 or 100 Gbps Ethernet card or Wireless-AC card, and still have lanes and bandwidth left for something else on an Intel x930K. Mind you, this would require PCIe extension cables and either a modified cutout on the back of your case or creative wiring through the leftover empty slots, but it's possible.

 

PCIe lanes haven't really been all that significant for years. The difference between 3.0 x16 and 2.0 x4 is minuscule.

 

For the needs of most people, the extra bandwidth just isn't needed yet.

 

[attached image: PCIe scaling benchmark chart]

 

If performance is there, people will come.  If not, they'll likely still maneuver themselves into a nice budget slot.


PCIe lanes haven't really been all that significant for years. The difference between 3.0 x16 and 2.0 x4 is minuscule.

 

For the needs of most people, the extra bandwidth just isn't needed yet.

 

-snip-

 

If performance is there, people will come.  If not, they'll likely still maneuver themselves into a nice budget slot.

Yes, but I just described a scenario where it matters: SLI/XFire, where the communication between slots shoots up tremendously. Also, a 480 is not a good card to run such a test with these days. Try a 980.


I am curious to see what they will bring; somehow I have a liking for AMD, but I have an even stronger liking for performance :)

"Hope, what a concept." - Deunan Knute

Link to comment
Share on other sites

Link to post
Share on other sites

Yes, but I just described a scenario where it matters: SLI/XFire, where the communication between slots shoots up tremendously. Also, a 480 is not a good card to run such a test with these days. Try a 980.

[attached image: older PCIe scaling benchmark chart]

 

I'm showing old benchmarks for a reason: this hasn't been relevant for years now.

 

But you know what, let's try a 980.

http://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/10.html

 

Your scenario is possible, but that doesn't really mean it's a selling point. Most people don't even come close to touching their existing bandwidth. For high-end workstations it makes sense to keep pushing the envelope, but for the vast majority of consumers below that level it's not needed.


-snip-

 

I'm showing old benchmarks for a reason: this hasn't been relevant for years now.

 

But you know what, let's try a 980.

http://www.techpowerup.com/reviews/NVIDIA/GTX_980_PCI-Express_Scaling/10.html

There are no SLI benches in there, only a single card. Also, it looks like it makes a decent difference.


There are no SLI benches in there, only a single card. Also, it looks like it makes a decent difference.

@sgloux3470 So basically what that link tells us is that yes, PCIe lanes do matter. Look at the difference it makes with just one card; that will scale with SLI/CFX setups. It might only be 5 FPS with one card, but that could turn into a 20 or 30 FPS difference in an SLI/CFX setup.

 

No one is arguing that a single GPU needs PCIe 3.0 x16. We all know that with a single card it doesn't matter. But that wasn't what he brought up. @patrickjp93's whole point was that AMD CANNOT put Zen on any existing socket. Why would they? Intentionally crippling a brand-new arch that has been years in the making? That would be literally stupid.


There are no SLI benches in there, only a single card. Also, it looks like it makes a decent difference.

 

PCIe 2.0 x4 and PCIe 3.0 x4 were both within a handful of frames of PCIe 3.0 x16 at 1080p. I don't even know where you would find a PCIe 1.1 motherboard these days.

 

LTT did an x16 vs. x8 comparison of 980 SLI.

 

I'm not saying it doesn't make a difference.  I'm saying that it's a negligible difference.  

 

Of course AMD won't use an existing socket for a brand-new CPU architecture. The only time I can think of when they even came close was maintaining backwards compatibility with AM2/AM2+ on their AM3 motherboards, and some mobos supporting an upgrade to AM3+ back before Bulldozer launched.

