
Why does Vishera suck?

15 hours ago, DildorTheDecent said:

That's what most people overlook. One FPU per module was not the best idea. 

 

Those FX 4100s would not have been an enjoyable time, I imagine.

IKR?
I actually run my FX 8320 with the "one core per module" setting, with all C-states enabled and the voltage dropped by 75 mV, and this thing is literally 31°C idle and 40-50°C max temp on the old Wraith xD
While in some apps it actually performs a tiny bit better. xD
Also - it goes EASY on my VRMs :P


5 hours ago, Blebekblebek said:

Instead of asking why it sucks, the question should be: why was the IPC gain so low?

 

I think we all agree, or at least understand, that AM3+ processors had low IPC, arguably even lower than the AM3 lineup at the same clock speed.

Because it's always been this way. I think people are cherry-picking generational jumps in performance, such as when Intel went from Prescott/Presler to Core and when AMD went from the Bulldozer family to Zen. Those were coming from architectures that were bad from the start. Everything else, compared clock for clock, has only had what people call incremental improvements.

 

But if I were to guess as to why incremental improvements have always been the thing, it's probably engineering trade-offs. Do you try to be smart with your instructions and do more with less, or do you build the ability to execute a bunch of instructions at once and hope everyone designs their applications that way? People have tried the latter with VLIW designs, but that didn't really take off (mostly because few people knew how to make existing applications work with it, and designing for VLIW isn't intuitive). A toy illustration of the problem follows below.

 

And you can only refine a design so much before the effort outweighs the gains.
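
To make that trade-off concrete, here's a minimal C sketch (my own illustration, not anything from the post) of why a wide core sits partly idle unless the code exposes instruction-level parallelism. The iteration count is an arbitrary placeholder, and it assumes a modest optimization level (e.g. -O1) without -ffast-math, so the floating-point adds aren't reassociated or vectorized away:

#include <stdio.h>
#include <time.h>

#define ITERS 400000000L

int main(void) {
    /* One dependent chain: every add must wait for the previous result,
       so throughput is limited by FP-add latency, not by execution width. */
    clock_t t0 = clock();
    double a = 0.0;
    for (long i = 0; i < ITERS; i++)
        a += 1.0;
    double chain = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Four independent chains: an out-of-order core can overlap these,
       keeping more execution units busy for the same number of adds. */
    t0 = clock();
    double b0 = 0.0, b1 = 0.0, b2 = 0.0, b3 = 0.0;
    for (long i = 0; i < ITERS; i += 4) {
        b0 += 1.0; b1 += 1.0; b2 += 1.0; b3 += 1.0;
    }
    double split = (double)(clock() - t0) / CLOCKS_PER_SEC;

    printf("dependent chain: %.2fs, independent chains: %.2fs (%.0f, %.0f)\n",
           chain, split, a, b0 + b1 + b2 + b3);
    return 0;
}

On most CPUs the second loop finishes noticeably faster, which is exactly the gamble: hardware that is wider than the typical dependency chain does nothing with the extra width.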


There are a lot of reasons why FX sucked big time when it comes to single-core performance, or what others refer to as IPC and frequency.

Number one: they went from 3 ALUs + 3 AGUs per integer core to 2 ALUs + 2 AGUs. Many people like me were on AMD forums at the time questioning how IPC would go up after this change. JF-AMD said at the time that AMD's engineers claimed they couldn't keep all the pipelines filled on the K10 architecture, so some of them were basically doing nothing.

 

Number two: an FX module, composed of two integer cores, can decode up to four instructions per cycle, BUT that decode width is shared between the two cores, whereas the Phenom II could decode up to three per core. So yet again this takes away from single-core performance.

 

Number three is cache. Why on earth did AMD share the L1 instruction cache between two integer cores and not expect massive latency issues? Who knows why, but the AMD engineers themselves. They also shared the L2 cache between the two integer cores in a module. When I say share, I basically mean the two integer cores were fighting over resources.

 

Number four: FX also shared the fetch and decode stages, which AMD later changed with Steamroller, where each core was given its own instruction decoder.

 

Number five: FX also used a longer pipeline, which increases the cost of a misprediction and lowers performance per cycle.

 

Number six: the shared FP unit. See, AMD was hoping that future software would move away from CPU floating point and push all that work over to the GPU, which simply didn't happen. This means an 8-core FX has four floating-point units, not eight (see the sketch after this list).

 

Number seven: AMD really expected Bulldozer to run at 4+ GHz out of the box at release, which didn't happen because GlobalFoundries simply couldn't do it.
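
As a rough illustration of point six, here's a toy C model (my own sketch, not any AMD data) of what sharing does to throughput when FP-heavy threads outnumber FPUs. The 50% FP fraction is an arbitrary assumption, not a measured workload:

#include <stdio.h>

/* Toy model: 'cores' integer cores, 'fpus' floating-point units.
   Integer work scales with core count; FP work is capped by FPU count. */
static double throughput(int cores, int fpus, double fp_fraction) {
    double int_rate = cores * (1.0 - fp_fraction);
    double fp_rate  = (cores < fpus ? cores : fpus) * fp_fraction;
    return int_rate + fp_rate;
}

int main(void) {
    /* An FX-style 8-core with 4 shared FPUs vs. a hypothetical design
       with 8 dedicated FPUs, on a workload that is half floating point. */
    printf("4 shared FPUs:    %.1f\n", throughput(8, 4, 0.5)); /* 6.0 */
    printf("8 dedicated FPUs: %.1f\n", throughput(8, 8, 0.5)); /* 8.0 */
    return 0;
}

Crude as it is, it shows why the "8-core" label flattered FX on anything FP-heavy: the GPU offload AMD bet on never materialized.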

 

The only time Bulldozer was actually faster than Phenom was when software used some of the newer instruction sets that weren't on K10.

 

Also, it's not like no one was questioning Bulldozer from the start, as many were wondering how on earth it would be better when AMD kept taking resources away and sharing the resources that remained. FX was also a modular design, which was supposed to save AMD manufacturing money, kind of like how Ryzen works today with the core complex.


18 hours ago, M.Yurizaki said:

I would argue the two biggest issues with that architecture are:

  • Too many stages in the pipeline, which means if you have to branch (and there's a lot of branching) and the prediction is wrong, the processor basically has to dump what it was working on because it's no longer valid
  • Each "core" is weaker than in the previous generation, so the design relied heavily on multithreaded applications, and most day-to-day programs don't scale well across threads

Deep pipelines let you increase the clock speed, which would make up for the weaker per-core performance. But you can only increase it so much before your TDP goes through the roof.

Not only that, but when you lengthen the pipeline you must ensure nothing goes wrong with the work in flight, or the whole process has to be repeated. That means the branch predictor must be a beast at guessing what comes next: the longer the pipeline, the better the branch predictor had better be, and that's something AMD historically hasn't been as good at as Intel. The classic demo below shows how much a predictable branch matters.
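
Here's the classic sorted-vs-unsorted sketch of that misprediction cost in C. The array size and pass count are arbitrary choices, exact timings vary by CPU, and it assumes a modest optimization level (e.g. -O1) so the compiler doesn't turn the branch into a branchless select:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 20)

static int cmp(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Sum elements >= 128; the if() is the branch under test. */
static double timed_sum(const int *data) {
    clock_t t0 = clock();
    long long sum = 0;
    for (int pass = 0; pass < 100; pass++)
        for (int i = 0; i < N; i++)
            if (data[i] >= 128)
                sum += data[i];
    volatile long long sink = sum;   /* keep the loop from being discarded */
    (void)sink;
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    if (!data) return 1;
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;      /* roughly 50/50 branch: hard to predict */
    printf("random: %.2fs\n", timed_sum(data));
    qsort(data, N, sizeof *data, cmp);
    printf("sorted: %.2fs\n", timed_sum(data));  /* same work, predictable branch */
    free(data);
    return 0;
}

On random data the branch flips unpredictably and a deep pipeline pays the flush penalty over and over; on sorted data the very same branch is nearly free.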


2 hours ago, Strike105X said:

I'm alone on this (or will get hated for it), but Vishera didn't suck, and in a way it was ahead of its time. One gripe I do have is how it was advertised; it was not a gaming CPU, that's for sure. Don't get me wrong, for 60 FPS gaming it could hold its ground, especially when overclocked, and if a game could use all of its cores properly it delivered nice results. But because games usually focused on roughly 2 to 4 cores, its low IPC meant it just didn't hold up. The same goes for emulation (which is one of the reasons I was forced to part with mine).

But despite some inconveniences I won't get into at the moment, if configured right and used for the right tasks, boy could it deliver (again, tweaks and settings I won't get into now). In video editing/rendering and general tasks that properly use its 8 cores, it was amazing for a fraction of the price you would need to get the same results with an Intel build, at least after it got cheaper; I admit the launch price was way too high. When I switched from my overclocked FX 8300 to my i7 6700 (because my needs had changed), I did notice a big improvement in things that require high IPC (gaming and emulation, for example), but I can't really say I noticed a difference in things that rely on multi-threading (like Handbrake). So to sum it up: used right, it could be a great CPU, and given its good points I can't see it as a fail. Generation upon generation of Celerons, however... now there's your failed CPUs.

Even in a world where multi-core performance matters, single-core performance will always matter as well.

 

Take per-core performance times core count: a chip with 25% faster cores but six of them, against a stock eight:

1.25 × 6 = 7.5

1.00 × 8 = 8.0

Nine times out of ten in gaming, the 6-core will be faster despite having 25% fewer cores, even in games that can use all the cores.

 

 

Software in general does not scale evenly with core count, per Amdahl's law, and in the world of DirectX 9/10/11 the main thread in a game is the rendering thread. Once that thread bottlenecks, it doesn't matter if you have 100 cores to back it up: you're going to get stuttering or low GPU usage because one core is pushed to its limit, and with FX that didn't take much. A rough illustration is sketched below.
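
Here's a minimal C sketch of that trade-off using Amdahl's law. The 80% parallel fraction is a hypothetical number, and the 1.25× per-core factor just mirrors the comparison above, not any measured chips:

#include <stdio.h>

/* Amdahl's law with a per-core speed factor 's':
   effective speedup = s / ((1 - p) + p / n),
   where p is the parallel fraction and n the core count. */
static double speedup(double s, double p, int n) {
    return s / ((1.0 - p) + p / (double)n);
}

int main(void) {
    double p = 0.8;  /* assume 80% of the frame work parallelizes */
    printf("6 cores, 1.25x per core: %.2f\n", speedup(1.25, p, 6)); /* 3.75 */
    printf("8 cores, 1.00x per core: %.2f\n", speedup(1.00, p, 8)); /* 3.33 */
    return 0;
}

Even with two fewer cores, the faster-per-core chip comes out ahead, because the serial 20% shrinks with per-core speed but not with core count.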

 

The FX series was a server-first design. AMD wanted it to be cheaper to manufacture, which is why they came up with the module design, and they aimed for throughput rather than latency-sensitive operations, planning to ship at 4+ GHz out of the box to make up for the lackluster IPC. As Tom's reported during its review of the FX-8150, the phrase "hold the line" came up regarding IPC, meaning they were trying to make the core smaller while keeping IPC the same as K10.

 

What's sad for some of the slightly older-timers here is that FX used to mean something; it used to make Intel cry in the corner and cheat in benchmarks, but with Bulldozer that all changed and the FX brand name turned into a massive joke. I owned the FX 8350 personally, and even at 4.4 GHz it was slower in emulation than my 1100T at 3.9 GHz, at least for a few years until emulators started to use some of the newer instruction sets the FX supported. I also switched to Intel for the time being: I sold my setup for enough money to buy an i3 4360 (meant to be a placeholder), and it was actually an upgrade. A dual core with HT was beating the crap out of my 8350 at 4.4 GHz; I was even debating keeping it instead of upgrading to a 4790K, haha. But I sold the i3 to my brother and bought the i7 until AMD had something worth buying, which is why I own an R7 1700 now. Boy did it take forever for an AMD comeback.

 

Many people have argued that an updated K10 CPU with 8 cores would have been better than making the Bulldozer class of CPUs. AMD basically knows this, which is why Ryzen returns to that approach, keeping just a tiny bit of Bulldozer in making each CCX 4 cores, but this time each core has its own dedicated fetch, decode, FPU, L1, and L2 cache.

 

Ryzen still has plenty of room to grow; we should create a thread about the kinds of improvements Ryzen 2 should have.

 

I'm personally happy that AMD came to their senses and made Zen. It's a safe design approach, and maybe in the future we might even see 6 cores in one core complex?

 


6 hours ago, Strike105X said:

The problem with the FX is that you also had to put some time into fine-tuning and tweaking it. It benefited a lot from tweaks to CPU-NB frequency and HT frequency, and 4.7 GHz is where fast RAM started to feel noticeable. It was a complicated chip, but used right, at least for me, it felt more than worth its money. Up to the moment when my priorities changed and I needed considerable IPC gains, it served its purpose quite well. You can talk about shared FPUs, mathematical calculations, etc., but the point remains that the PC felt snappy, and just as an example, for video encoding in Handbrake at 4.7 GHz it was on par with an Intel i7 4th-6th gen while costing half.

Basic tasks are fast even on a G4400 paired with an SSD and enough RAM. The FX 8350 at release did quite well for the money in tasks that could use all 8 cores all the time, but that wasn't really AMD's saving grace, as sales were very lackluster. Even when it could use all its resources it sat between a 2500K and a 2600K, and when it couldn't, because software didn't scale, it often lost to a Sandy Bridge i3, making the 2500K a much smarter buy for overall CPU performance. I'd also argue that today the Ryzen 1700 is a better buy than the 7700K for the same reasons.

 

I wish AMD had never made the Bulldozer class of CPUs. It was like K6 all over again; they were so far behind Intel that it was simply bad for the market.

 

 

