
AMD livestream has begun- Hawaii GPUs

Humbug

Ehhh... What? First of all, your APU post is completely irrelevant here. Not sure why you're bringing it up.

Secondly, even in very integer-heavy applications the 8350 is on par with the 4770K; it does not wipe the floor with it.

It's very relevant, in fact. Integer calculations will be the only thing the CPU needs to do; all floating-point data crunching will be offloaded to the onboard GPU, hence why his APU post is very relevant.

Here is a 100% integer-based application; there is 0% floating-point data calculated here.

FX-8350-40.jpg


Impossible to say. If you ask me, the fact that they used such a vague measurement is a warning sign that it might not be all that great.

 

 

If the numbers are true (which I highly doubt; press releases usually lie about numbers) then there is an 800% increase in the number of draw calls that can be made each second. A draw call is when you tell the GPU "hey, render this texture".

Please note that a draw call is not what the GPU actually renders, but rather what it is told to render. So as long as the GPU can't keep up with the workload, it doesn't matter if there were a 10000000% increase in draw calls; it would still only be able to do the same amount of work.

Mantle should (hopefully) have an edge in performance over Direct3D and OpenGL though, since it is lower level.
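To make the draw-call point concrete, here's a toy model (every number here is invented for illustration; the GPU throughput and calls-per-frame figures are assumptions, not real measurements): the frame rate is capped by whichever side is slower, the API submitting calls or the GPU executing them.

```python
# Toy model: raising the draw-call ceiling only helps until the GPU
# itself becomes the bottleneck. All figures below are made up.
def frames_per_second(api_calls_per_sec, gpu_calls_per_sec, calls_per_frame):
    # Frame rate is limited by the slower of two stages: how fast the
    # CPU/API can submit draw calls, and how fast the GPU can execute them.
    submit_limit = api_calls_per_sec / calls_per_frame
    render_limit = gpu_calls_per_sec / calls_per_frame
    return min(submit_limit, render_limit)

gpu = 600_000        # assumed: GPU can execute 600k calls/sec
per_frame = 10_000   # assumed: a frame needs 10k draw calls

baseline = frames_per_second(100_000, gpu, per_frame)  # API-bound
mantle   = frames_per_second(800_000, gpu, per_frame)  # now GPU-bound
huge     = frames_per_second(10**12,  gpu, per_frame)  # still GPU-bound

print(f"baseline: {baseline:.0f} fps, 8x calls: {mantle:.0f} fps, absurd: {huge:.0f} fps")
```

Once submission outpaces execution, any further increase in the draw-call ceiling changes nothing, which is exactly why a big percentage figure in a press release says little on its own.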

 

 

 

Ehhh... What? First of all, your APU post is completely irrelevant here. Not sure why you're bringing it up.

Secondly, even in very integer-heavy applications the 8350 is on par with the 4770K; it does not wipe the floor with it.

It's very relevant, in fact. Integer calculations will be the only thing the CPU needs to do; all floating-point data crunching will be offloaded to the onboard GPU, hence why his APU post is very relevant.

Here is a 100% integer-based application; there is 0% floating-point data calculated here.

-snip-

 

Seems @TechFan@ic answered for me ^_^ - APUs are relevant in almost every CPU discussion due to GPGPUC

Console optimisations and how they will effect you | The difference between AMD cores and Intel cores | Memory Bus size and how it effects your VRAM usage |
How much vram do you actually need? | APUs and the future of processing | Projects: SO - here

Intel i7 5820l @ with Corsair H110 | 32GB DDR4 RAM @ 1600Mhz | XFX Radeon R9 290 @ 1.2Ghz | Corsair 600Q | Corsair TX650 | Probably too much corsair but meh should have had a Corsair SSD and RAM | 1.3TB HDD Space | Sennheiser HD598 | Beyerdynamic Custom One Pro | Blue Snowball


Yawn, it's a quad core with two integer clusters per core. I've owned many more AMD chips than Intel, but they threw in the towel after Phenom II, so I moved on with life.

Main Rig: http://linustechtips.com/main/topic/58641-the-i7-950s-gots-to-go-updated-104/ | CPU: Intel i7-4930K | GPU: 2x EVGA Geforce GTX Titan SC SLI| MB: EVGA X79 Dark | RAM: 16GB HyperX Beast 2400mhz | SSD: Samsung 840 Pro 256gb | HDD: 2x Western Digital Raptors 74gb | EX-H34B Hot Swap Rack | Case: Lian Li PC-D600 | Cooling: H100i | Power Supply: Corsair HX1050 |

 

Pfsense Build (Repurposed for plex) https://linustechtips.com/main/topic/715459-pfsense-build/

 

 

 

 


It's very relevant, in fact. Integer calculations will be the only thing the CPU needs to do; all floating-point data crunching will be offloaded to the onboard GPU, hence why his APU post is very relevant.

Funny, because the CPU Kuzma talked about doesn't even have an onboard GPU, while the i7-4770K does. So saying that integer performance is the only thing that matters, since other things can be offloaded to the onboard GPU, makes absolutely no sense whatsoever if you're talking about the FX-8350.

Also, I am not sure what planet you live on, but I am on Earth, and here not all programs fully utilize GPGPU, so CPU performance is still very much relevant.

 

The AIDA64 hash benchmark isn't even as much of a real-world performance test as the 7zip compression benchmark I linked (which is also very, very integer heavy).

Also, if you look at Zlib (also from AIDA64), which is another very integer-heavy benchmark, the 8350 is even within the margin of error of the i7-3770K (as well as the i7-4770K). Just because you find one or two benchmarks which point to the 8350 being better does not mean that is the ultimate truth. Don't be a fanboy, please.


random guy waving at camera then the stream crashes for me lol

 

Jimmy Thang, a Maximum PC journalist.


This is getting really interesting.
AMD now has its own low-level API.
And Nvidia is getting together with SteamOS, which will also use a low-level version of OpenGL.

I'm really excited to see where all of this is going.
 

RTX2070OC 


And Nvidia is getting together with SteamOS, which will also use a low-level version of OpenGL.

Source? I haven't heard anything about that at all, and OpenGL is not any more "low level" than DirectX in normal cases.


It's very relevant, in fact. Integer calculations will be the only thing the CPU needs to do; all floating-point data crunching will be offloaded to the onboard GPU, hence why his APU post is very relevant.

Here is a 100% integer-based application; there is 0% floating-point data calculated here.

FX-8350-40.jpg

The new Ivy Bridge-E processors are beating the 8350 now, though.

Mobo - Asus P9X79 LE ----------- CPU - I7 4930K @ 4.4GHz ------ COOLER - Custom Loop ---------- GPU - R9 290X Crossfire ---------- Ram - 8GB Corsair Vengence Pro @ 1866 --- SSD - Samsung 840 Pro 128GB ------ PSU - Corsair AX 860i ----- Case - Corsair 900D


Funny, because the CPU Kuzma talked about doesn't even have an onboard GPU, while the i7-4770K does. So saying that integer performance is the only thing that matters, since other things can be offloaded to the onboard GPU, makes absolutely no sense whatsoever if you're talking about the FX-8350.

Also, I am not sure what planet you live on, but I am on Earth, and here not all programs fully utilize GPGPU, so CPU performance is still very much relevant.

 

The AIDA64 hash benchmark isn't even as much of a real-world performance test as the 7zip compression benchmark I linked (which is also very, very integer heavy).

Also, if you look at Zlib (also from AIDA64), which is another very integer-heavy benchmark, the 8350 is even within the margin of error of the i7-3770K (as well as the i7-4770K). Just because you find one or two benchmarks which point to the 8350 being better does not mean that is the ultimate truth. Don't be a fanboy, please.

V_V I hope you know what you just did - I had to clear 8 multiquotes for this ._.

 

The AMD 8350 is completely relevant because it shows that AMD have been focusing on integer performance since they started working on their APUs, and it seems many believe they have "thrown in the towel". But here's the thing: if AMD made a larger die, they could simply put a relatively powerful GPU on the 8350 and call it the A20-9900K. That's not my point, though. The point is that if they are ahead on integer performance, and ahead on floating-point performance by the very nature of GPGPUC, that allows them to be ahead in everything. Since Steamroller is going to bring ~30% performance gains across the board, due to them fixing issues with sharing and allowing an MP ratio equivalent to the number of cores, this means that as soon as they get a full implementation of HSA, and get OpenCL and GPGPUC in general being used more often, they're ahead of everyone by a long shot. Nvidia are too far behind in terms of CPU, and Intel are too far behind in terms of GPU. The FX series CPUs were never about outperforming the i7s NOW; they were designed to create a base for their APUs. With the CPU power of an 8350 backed up by the floating-point performance of the internal graphics being used as literal FPUs, it's a win-win besides power consumption, and they will be far enough ahead to focus on forcing power consumption down, just like Intel is doing now!

 

Seriously :/ I'm the fanboy? I recently went out and bought a Xeon... who's the fanboy, please? I go for whatever fits my budget and use-case scenario, but right here I am talking about the future of computing. Good day.

 

P.S. Also next time you mention my name ^_^ please actually mention me



Funny, because the CPU Kuzma talked about doesn't even have an onboard GPU, while the i7-4770K does. So saying that integer performance is the only thing that matters, since other things can be offloaded to the onboard GPU, makes absolutely no sense whatsoever if you're talking about the FX-8350.

Also, I am not sure what planet you live on, but I am on Earth, and here not all programs fully utilize GPGPU, so CPU performance is still very much relevant.

 

The AIDA64 hash benchmark isn't even as much of a real-world performance test as the 7zip compression benchmark I linked (which is also very, very integer heavy).

Also, if you look at Zlib (also from AIDA64), which is another very integer-heavy benchmark, the 8350 is even within the margin of error of the i7-3770K (as well as the i7-4770K). Just because you find one or two benchmarks which point to the 8350 being better does not mean that is the ultimate truth. Don't be a fanboy, please.

Some of the statements you're making are highly misleading, and you're also missing the point along the way.

This is not about any sort of fanboy argument over what you or anyone else should buy as a consumer; you can do your own research and figure out what's best for you.

I'm speaking in a purely architectural, scientific sense; none of what I'm saying is an opinion, it's all fact.

Zlib is a compression algorithm based on libraries which blatantly favor Intel processors; it's practically developed by Intel, so it cannot be used to assess performance objectively.

http://software.intel.com/en-us/articles/intel-ipp-zlib-library-changed-the-default-compression-level

http://www.intel.com/content/www/us/en/intelligent-systems/wireless-infrastructure/ia-deflate-compression-paper.html

The hash rate benchmark uses a universal hash function that does not favor either manufacturer over the other. It's also 100% integer based, so we can completely isolate integer performance without having to worry about any intermediates that may cause inconsistencies. That's why I chose it, not because it's "pretty".

http://en.wikipedia.org/wiki/Hash_function
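For anyone who wants to poke at this themselves, both workloads are easy to approximate with Python's standard library (this is a rough micro-benchmark sketch, not AIDA64's actual implementation, and SHA-256 here is my stand-in for whatever hash AIDA64 uses):

```python
import hashlib
import time
import zlib

data = b"hawaii" * 200_000  # ~1.2 MB of repetitive test data

def time_it(fn, repeats=5):
    # Return the best wall-clock time over a few runs to reduce noise.
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Compression: table lookups, comparisons, bit twiddling -- all integer work.
zlib_time = time_it(lambda: zlib.compress(data, 6))

# Hashing: 32/64-bit integer adds, rotates and XORs, no floating point at all.
hash_time = time_it(lambda: hashlib.sha256(data).digest())

print(f"zlib:   {zlib_time:.4f}s")
print(f"sha256: {hash_time:.4f}s")
```

Both loops are dominated by integer adds, shifts, and XORs, which is what makes them plausible integer stress tests, whatever you make of the vendor-optimization argument above.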


(PSST, this is a thread about video cards not CPUs...)

Old shit no one cares about but me.


V_V I hope you know what you just did - I had to clear 8 multiquotes for this ._.

 

The AMD 8350 is completely relevant because it shows that AMD have been focusing on integer performance since they started working on their APUs, and it seems many believe they have "thrown in the towel". But here's the thing: if AMD made a larger die, they could simply put a relatively powerful GPU on the 8350 and call it the A20-9900K. That's not my point, though. The point is that if they are ahead on integer performance, and ahead on floating-point performance by the very nature of GPGPUC, that allows them to be ahead in everything. Since Steamroller is going to bring ~30% performance gains across the board, due to them fixing issues with sharing and allowing an MP ratio equivalent to the number of cores, this means that as soon as they get a full implementation of HSA, and get OpenCL and GPGPUC in general being used more often, they're ahead of everyone by a long shot. Nvidia are too far behind in terms of CPU, and Intel are too far behind in terms of GPU. The FX series CPUs were never about outperforming the i7s NOW; they were designed to create a base for their APUs. With the CPU power of an 8350 backed up by the floating-point performance of the internal graphics being used as literal FPUs, it's a win-win besides power consumption, and they will be far enough ahead to focus on forcing power consumption down, just like Intel is doing now!

 

Seriously :/ I'm the fanboy? I recently went out and bought a Xeon... who's the fanboy, please? I go for whatever fits my budget and use-case scenario, but right here I am talking about the future of computing. Good day.

 

P.S. Also next time you mention my name ^_^ please actually mention me

When did I say they had thrown in the towel? I still haven't seen much proof that they are ahead in integer performance (in fact, both Anandtech and Guru3D show that they seem to be on par in best-case scenarios). You can't say that they are ahead just because they can use GPGPU, though, since Intel can do that as well. In the case of the i7-4770K and the FX-8350, AMD CAN'T do GPGPU, but Intel can.

 

Yes, AMD says that there will be a ~30% increase in performance, but we will have to wait and see if that's actually true. Don't blindly trust numbers that manufacturers release, no matter which manufacturer it is. You will just end up disappointed if you do.

 

Nvidia is behind in CPU? What are you talking about? I assume you mean the Tegra parts, in which case no, they are not behind. Those are parts used in a completely different market segment, and AMD isn't even in that market at all. Intel has also shown with Iris that they can beat AMD in terms of GPU performance if they try.

 

You can go on about how "the 8350 wasn't supposed to beat the i7, it's supposed to be a base for developing APUs upon" as much as you like, but that is still completely irrelevant when comparing performance between the 4770K and the 8350. If you're really "talking about the future of computing", then why are you talking about CPUs that are already released? If you had said "APUs make more sense in the long run; make the CPU focus on certain tasks and offload a lot of stuff to a GPU", then I would have agreed. That's not what you said, though; you said that the FX-8350 wipes the floor with the 4770K, which is simply not true, not even in integer-heavy tasks (just because you find one benchmark on one particular site that agrees with you does not mean it is true).


When did I say they had thrown in the towel? I still haven't seen much proof that they are ahead in integer performance (in fact, both Anandtech and Guru3D show that they seem to be on par in best-case scenarios). You can't say that they are ahead just because they can use GPGPU, though, since Intel can do that as well. In the case of the i7-4770K and the FX-8350, AMD CAN'T do GPGPU, but Intel can.

I said "many" :P please read correctly ^_^ I never mentioned you. The hash benchmark is the best integer performance test, and if you don't want to believe it then fine :) Intel's GPGPUC (please remember the C :D ) is far weaker due to the weakness of their GPUs compared to AMD's (who currently owns the fastest single GPU?), and I didn't say the 8350 was superior to the 4770K; I simply said that it's part of a two-part plan: get good integer performance, and then floating-point performance can be settled via GPGPUC.

Yes, AMD says that there will be a ~30% increase in performance, but we will have to wait and see if that's actually true. Don't blindly trust numbers that manufacturers release, no matter which manufacturer it is. You will just end up disappointed if you do.

I'm not blindly trusting the manufacturers. There was a very simple problem that needed to be fixed: the dependency of each core within the modules. The moment they fix that, it's a 30% increase, because the MP ratio would finally scale correctly.
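The arithmetic behind that claim can be sketched like this (the 20% per-core sharing penalty below is my own assumed figure for illustration, not an official AMD number):

```python
# Back-of-the-envelope model of module sharing. The penalty figure is an
# assumption for illustration, not an official AMD number.
def mp_ratio(cores, per_core_throughput=1.0, sharing_penalty=0.2):
    # Two cores per module contend for shared front-end resources, so each
    # core loses `sharing_penalty` of its throughput under full load.
    # A perfect MP ratio would simply equal the core count.
    return cores * per_core_throughput * (1 - sharing_penalty)

before = mp_ratio(8, sharing_penalty=0.2)  # throughput with sharing contention
after  = mp_ratio(8, sharing_penalty=0.0)  # MP ratio equal to the core count
gain = after / before - 1                  # fractional multi-threaded uplift
```

The uplift only lands near AMD's quoted ~30% if the assumed penalty is in the right ballpark; with a 20% penalty the model gives 25%.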

 

Nvidia is behind in CPU? What are you talking about? I assume you mean the Tegra parts, in which case no, they are not behind. Those are parts used in a completely different market segment, and AMD isn't even in that market at all. Intel has also shown with Iris that they can beat AMD in terms of GPU performance if they try.

Nvidia have their ARM segment, yes, and also have Project Denver coming up - but let's face it, it's inferior to anything AMD or Intel can throw at them at the high end. Also, I don't know where you've been :/ but I'm pretty sure that AMD are the superior GPU company - that's not even worth arguing about, because at this point I'm struggling to take you seriously.

 

You can go on about how "the 8350 wasn't supposed to beat the i7, it's supposed to be a base for developing APUs upon" as much as you like, but that is still completely irrelevant when comparing performance between the 4770K and the 8350. If you're really "talking about the future of computing", then why are you talking about CPUs that are already released? If you had said "APUs make more sense in the long run; make the CPU focus on certain tasks and offload a lot of stuff to a GPU", then I would have agreed. That's not what you said, though; you said that the FX-8350 wipes the floor with the 4770K, which is simply not true, not even in integer-heavy tasks (just because you find one benchmark on one particular site that agrees with you does not mean it is true).

I don't think you're quite understanding :/ HSA (Heterogeneous System Architecture) would allow floating-point operations, in which CPUs pale in comparison to GPUs, to be run on the internal GPU. I'm talking about the 8350, a modern CPU, because it's part of the future - step 1 was to announce that APUs exist, step 2 was to focus on integer performance while improving APUs a bit, and step 3 is to combine the APU with the amazing integer performance.

 

You've repeated yourself twice now; you're beginning to waffle on about the lack of an internal GPU in the 8350, and you seem to be missing the point. If you cannot understand that it is a step in a plan, after I have explained it multiple times in multiple ways and after I have written an entire thread on it, then :D I give up and this argument is over.
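The split being argued about here (integer bookkeeping on the CPU cores, data-parallel floating-point math shipped to an accelerator) can be sketched in plain Python. `offload_fp` is a pure-Python stand-in of my own; under a real HSA/OpenCL runtime, that same data-parallel map is the part that would be dispatched to the integrated GPU:

```python
import math

def cpu_integer_work(records):
    # Branchy, integer bookkeeping: the kind of work that stays on CPU cores.
    return sum(1 for r in records if r % 3 == 0)

def offload_fp(values):
    # Stand-in for a data-parallel FP kernel. Under HSA/OpenCL this uniform
    # map over a big array would run on the GPU; here it's just plain Python.
    return [math.sqrt(v) * 1.5 for v in values]

records = list(range(10))
count = cpu_integer_work(records)                   # integer path (CPU)
results = offload_fp([float(r) for r in records])   # "offloaded" FP path
```

The design point is that only the uniform, branch-free FP map is a good offload candidate; the branchy integer pass is exactly what both sides of this argument agree belongs on the CPU.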



Some of the statements you're making are highly misleading, and you're also missing the point along the way.

This is not about any sort of fanboy argument over what you or anyone else should buy as a consumer; you can do your own research and figure out what's best for you.

I'm speaking in a purely architectural, scientific sense; none of what I'm saying is an opinion, it's all fact.

Zlib is a compression algorithm based on libraries which blatantly favor Intel processors; it's practically developed by Intel, so it cannot be used to assess performance objectively.

http://software.intel.com/en-us/articles/intel-ipp-zlib-library-changed-the-default-compression-level

http://www.intel.com/content/www/us/en/intelligent-systems/wireless-infrastructure/ia-deflate-compression-paper.html

The hash rate benchmark uses a universal hash function that does not favor either manufacturer over the other. It's also 100% integer based, so we can completely isolate integer performance without having to worry about any intermediates that may cause inconsistencies. That's why I chose it, not because it's "pretty".

http://en.wikipedia.org/wiki/Hash_function

You can't just ignore benchmarks because they don't fit your predetermined conclusion. I posted two different integer-heavy benchmarks and you just went "nope, those don't count, the only one that counts is this one". Even IF what you said were true, it would still only apply to integer-heavy workloads (and as you can see in, for example, the 7zip benchmark Anandtech ran, the 8350 is not really "wiping the floor" with the 4770K there). In a perfect world GPGPU would be used as much as possible, but we don't live in that world (yet), so CPUs still have to take care of more things, such as floating-point work, and most of all, general single-threaded performance is absolutely terrible in AMD chips (hopefully that will get fixed with Steamroller and/or Excavator).

Not sure why you're linking me to a Wikipedia article about hashes. I know what a hash is. I get the feeling that if Intel had won the hash benchmark, you would have complained that SSSE3 (which that benchmark uses) was developed by Intel (which it was).

 

Look, I am not saying that the 8350 is bad. What I am saying is that it does not "wipe the floor" with the 4770K. In a best-case scenario it's slightly ahead; in most scenarios it's far behind.


I said "many" :P please read correctly ^_^ I never mentioned you. The hash benchmark is the best integer performance test, and if you don't want to believe it then fine :) Intel's GPGPUC (please remember the C :D ) is far weaker due to the weakness of their GPUs compared to AMD's (who currently owns the fastest single GPU?), and I didn't say the 8350 was superior to the 4770K; I simply said that it's part of a two-part plan: get good integer performance, and then floating-point performance can be settled via GPGPUC.

I'm not blindly trusting the manufacturers. There was a very simple problem that needed to be fixed: the dependency of each core within the modules. The moment they fix that, it's a 30% increase, because the MP ratio would finally scale correctly.

Why should I put a C after GPGPU? GPGPU stands for "general-purpose computing on graphics processing units"; there is no C in there. What makes you think Intel has worse GPUs on their chips? If you look at Intel Iris (aka the HD 5200), it's actually better than anything AMD offers in an APU.

No, you said, and I quote:

fanboy is obvious - in integer-based operations the 8350 wipes the floor with the 4770k and I cbb to type a wall of text so please read my thread on APUs to know why that's pretty much all that matters.

So far we have seen two benchmarks that disagree with that statement, and one that agrees. Even if it were true, it's still only one kind of task, and the 8350 usually trails behind in almost everything else. You never said anything about a "2 part plan"; if you had, I would have agreed. Do you have any proof that fixing it will result in a 30% increase? If not, please stop citing that number, because it's not based on facts.

 

 

Nvidia have their ARM segment, yes, and also have Project Denver coming up - but let's face it, it's inferior to anything AMD or Intel can throw at them at the high end. Also, I don't know where you've been :/ but I'm pretty sure that AMD are the superior GPU company - that's not even worth arguing about, because at this point I'm struggling to take you seriously.

You are comparing apples and oranges. Nvidia isn't trying to compete with AMD in the ARM space, and AMD isn't trying to compete with Nvidia in the ARM space either. You can't compare phone CPUs against desktop CPUs and go "well AMD wins because their CPU is more powerful". They are made for completely different devices.

AMD better than Intel in the GPU segment? Well not on the highest end but I will agree that in the mid/high, mid and low end they are far better than Intel when it comes to GPU performance in APUs. I don't think anyone would argue against that. However, you keep going "ohh but it will be good in the future" as soon as I post proof that AMD is behind. Intel showed with Iris that they can outperform what AMD is offering in terms of GPU performance in APUs if they want to. If you go "in the future" you also have to take into consideration what Intel could do in the future.

 

 

I don't think you're quite understanding :/ HSA (Heterogeneous System Architecture) would allow floating-point operations, in which CPUs pale in comparison to GPUs, to be run on the internal GPU. I'm talking about the 8350, a modern CPU, because it's part of the future - step 1 was to announce that APUs exist, step 2 was to focus on integer performance while improving APUs a bit, and step 3 is to combine the APU with the amazing integer performance.

No, I don't think YOU understand. I've already said that in a perfect world we would offload such things to the GPU, but we are not there yet. What will happen in the future is pretty irrelevant when comparing two CPUs that are already on the market. Also, since we were talking about the i7-4770K vs the 8350, it's silly to bring up GPGPU, since the 8350 doesn't even have a GPU. You can go "ohh but it will in the future" all you want, but it still doesn't change what it is as of today, which was what we were talking about.

 

 

You've repeated yourself twice now; you're beginning to waffle on about the lack of an internal GPU in the 8350, and you seem to be missing the point. If you cannot understand that it is a step in a plan, after I have explained it multiple times in multiple ways and after I have written an entire thread on it, then :D I give up and this argument is over.

I do understand your point (I thought I made that clear before). What you don't get is that you were comparing the 8350 vs the 4770K, not "a future version of the 8350 which has a GPU on it" vs the i7-4770K. Again, you are talking about unreleased products and trying to shoehorn them into a discussion about chips already on the market, in an attempt to make one of the chips on the market seem better. At this point in time, you can't say that integer performance is the only thing that matters for a CPU, which is what you implied in the post that started this debate. Will it be like that in the future? Maybe, but it's not like that today, so that's why I think it is irrelevant to bring it up when comparing the 8350 vs the 4770K.

