AMD Bulldozer backstory

If anyone is interested in having a nice long conversation about AMD's failed Bulldozer architecture and its history, just ask. It's a very fun subject indeed.

 

This is in no way meant to bash AMD. We all make mistakes, and companies are notorious for them; let's not forget "New Coke", the Microsoft Zune, or more recently Nintendo's Wii U. It happens, but I feel Bulldozer deserves a closer look, because from an engineering perspective it makes no sense that they ever thought it would work out well. From a marketing perspective, yes, eight cores and high frequency sound great on paper, but engineers would never sign on to that.

 

Also, if there are any tech-savvy people here with a lot of knowledge of Bulldozer's history, that would be great too, as I would love to compare our thoughts.


Engineers didn't have to. The higher ups wanted to compete with Sandy and Ivy. AMD knew their IPC wasn't keeping up with Intel at all, so the engineers came up with "six core" and "eight core" processors that were literally capable of setting the board on fire (but not capable of keeping up with an i5-3570K). The marketing guys saw "six core" and "eight core" and jumped all over it because Intel's best mainstream consumer chip was a quad. AMD released Bulldozer, predictable thing was predictable, Intel settled into upping IPC by 2-5% per generation from Ivy through Kaby and charging out the ass because no one could challenge them, and those of us building PCs in 2015 had no choice but to mortgage the house for a 4790K, burn down the house with a 9590 or live life with a G3258.

 

FWIW, that G3258 was one of the best little CPUs I've ever had, and I fucking hate myself for selling it in 2017.

Aerocool DS are the best fans you've never tried.


Despite the hate, it was still a good CPU. An 8320 and an R9 280 got me 1080p 60 in GTA V at medium settings. It was just a scam to call it an 8-core when it was really four modules. Still, it gave me more years of entertainment than I could ever have asked of it.

 

Also, years later on an MSI 990FXA motherboard, it holds a stable OC of 4.4 GHz at 1.2 V. Granted, I'm using a Wraith Prism at 100% fan speed, but I still didn't expect that, tbh.


-Moved to CPU, Motherboards and Memory-

"Put as much effort into your question as you'd expect someone to give in an answer"- @Princess Luna

Make sure to Quote posts or tag the person with @[username] so they know you responded to them!

 RGB Build Post 2019 --- Rainbow 🦆 2020 --- Velka 5 V2.0 Build 2021

Purple Build Post ---  Blue Build Post --- Blue Build Post 2018 --- Project ITNOS

CPU i7-4790k    Motherboard Gigabyte Z97N-WIFI    RAM G.Skill Sniper DDR3 1866mhz    GPU EVGA GTX1080Ti FTW3    Case Corsair 380T   

Storage Samsung EVO 250GB, Samsung EVO 1TB, WD Black 3TB, WD Black 5TB    PSU Corsair CX750M    Cooling Cryorig H7 with NF-A12x25


I'm still running an 8350 with no problem whatsoever. I've been running it for the last six or so years.


18 minutes ago, CPUguy101 said:

This is in no way meant to bash AMD. We all make mistakes, and companies are notorious for them; let's not forget "New Coke", the Microsoft Zune, or more recently Nintendo's Wii U. It happens, but I feel Bulldozer deserves a closer look, because from an engineering perspective it makes no sense that they ever thought it would work out well. From a marketing perspective, yes, eight cores and high frequency sound great on paper, but engineers would never sign on to that.

 

Also, if there are any tech-savvy people here with a lot of knowledge of Bulldozer's history, that would be great too, as I would love to compare our thoughts.

From what I've gathered from people testing Bulldozer in applications today, the major reason it failed was that nobody was really using software that could take advantage of it at the time. Yes, 8 cores sounded amazing on paper, but this was at a time when most people had around 2 cores, with 4 cores in second place.

 

However, the shortcomings mostly came down to the following:

  • A deeper pipeline, which magnifies the cost of branch mispredictions. Bulldozer had a ~20-cycle penalty, whereas Sandy Bridge's was 14-17 cycles.
  • Cores that shared the same instruction-decode front end. A module (which has two cores) had a peak decode rate of 4 instructions per cycle, whereas Sandy Bridge could decode 4 per core.
  • The individual integer cores were weaker than the previous generation's.
  • Memory latency was worse overall than either Phenom II or Sandy Bridge.
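To put the shared front end in perspective, here's a rough back-of-the-envelope sketch of the second point. This is illustrative only; it ignores decoded-op caching, fetch behavior, and everything else that shapes real-world throughput, and only shows the structural difference described above:

```python
# Peak x86 instructions decoded per cycle, per core, given a decoder
# of a certain width shared by some number of cores.

def peak_decode_per_core(decode_width, cores_sharing_frontend):
    """Best-case decode bandwidth available to each core."""
    return decode_width / cores_sharing_frontend

# Bulldozer: one 4-wide decoder shared by the two cores of a module.
bulldozer = peak_decode_per_core(decode_width=4, cores_sharing_frontend=2)

# Sandy Bridge: each core has its own 4-wide decoder.
sandy_bridge = peak_decode_per_core(decode_width=4, cores_sharing_frontend=1)

print(f"Bulldozer module:  {bulldozer} instructions/cycle per core")   # 2.0
print(f"Sandy Bridge core: {sandy_bridge} instructions/cycle per core")  # 4.0
```

So with both cores of a module busy, each Bulldozer core could only ever see half the decode bandwidth a Sandy Bridge core had to itself.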

I believe Bulldozer was designed with servers in mind first. Servers tend to run applications that are designed to scale across cores and typically aren't all that computation-heavy per thread. Now that more applications take advantage of higher core counts, Bulldozer is seeing something of a resurgence.


8 minutes ago, Mira Yurizaki said:

From what I've gathered from people testing Bulldozer in applications today, the major reason it failed was that nobody was really using software that could take advantage of it at the time. Yes, 8 cores sounded amazing on paper, but this was at a time when most people had around 2 cores, with 4 cores in second place.

From an enthusiast perspective, quad and hex cores were not unusual even before that era. The last AMD CPU I owned before Ryzen was the Phenom II X6, with 6 good cores; looking it up, that was released in 2010, going up against Nehalem, the first in the current series of Intel CPUs. The Core 2 Quads came earlier, in 2007, and I had a couple of those. Bulldozer came out later, in 2011, going up against Sandy Bridge. By that point we had the return of HT after its earlier implementation in the P4, so you had 8 threads available on a mainstream quad-core i7. I usually hate comparing threads and cores, but from a software perspective there isn't so much difference; 8 threads were already a thing in the mainstream before Bulldozer arrived. While software plays a part, I don't think it was the main limitation.

 

I do recall that when Bulldozer launched, AMD's story was about optimizing it for integer workloads, and they nerfed the FPU, saying people didn't need it. That was something they didn't catch up to Intel on until Zen 2. This is a part of AMD I dislike: they hype up specific "typical uses" and let less common cases fall by the wayside. It gives the illusion that they are at parity with or better than the competition, when in practice there are major areas of weakness, which continued through Zen and Zen+. It's not unusual in business to pick flattering scenarios, but falling so far behind in other areas was worse than average.

 

The lack of FPU performance was in large part why I never owned anything from the Bulldozer family. I did get people who owned one to run Prime95 benchmarks on it, and it was about as bad as expected. I could beat those numbers with a dual-core i3 of the time.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


The Bulldozers, however, are faster in modern games than comparable Intel CPUs from the time. AMD banked on more developers shifting FP operations onto GPUs, since that's largely what GPUs do.


Quote

I do recall that when Bulldozer launched, AMD's story was about optimizing it for integer workloads, and they nerfed the FPU, saying people didn't need it. That was something they didn't catch up to Intel on until Zen 2. This is a part of AMD I dislike: they hype up specific "typical uses" and let less common cases fall by the wayside. It gives the illusion that they are at parity with or better than the competition, when in practice there are major areas of weakness, which continued through Zen and Zen+. It's not unusual in business to pick flattering scenarios, but falling so far behind in other areas was worse than average.

I think this harkens back to the point that they seemingly designed Bulldozer primarily for server workloads and hoped what they did there would trickle down to the rest of the market stack. Even then, Intel was still winning, if not tying, in standard tests.

 

15 minutes ago, Curious Pineapple said:

The Bulldozers, however, are faster in modern games than comparable Intel CPUs from the time. AMD banked on more developers shifting FP operations onto GPUs, since that's largely what GPUs do.

Maybe in games where DirectCompute was a thing, but most non-3D applications use the CPU for FP operations because there aren't enough of them to justify running them on the GPU.

 

GPUs are good at crunching a metric ton of data at once. They're not good at crunching the single 1.0 + 1.1 you entered in Calc.


1 minute ago, Mira Yurizaki said:

GPUs are good at crunching a metric ton of data at once. They're not good at crunching the single 1.0 + 1.1 you entered in Calc.

Like game physics ;)


1 hour ago, CPUguy101 said:

If anyone is interested in having a nice long conversation about AMD's failed Bulldozer architecture and its history, just ask. It's a very fun subject indeed.

 

This is in no way meant to bash AMD. We all make mistakes, and companies are notorious for them; let's not forget "New Coke", the Microsoft Zune, or more recently Nintendo's Wii U. It happens, but I feel Bulldozer deserves a closer look, because from an engineering perspective it makes no sense that they ever thought it would work out well. From a marketing perspective, yes, eight cores and high frequency sound great on paper, but engineers would never sign on to that.

 

Also, if there are any tech-savvy people here with a lot of knowledge of Bulldozer's history, that would be great too, as I would love to compare our thoughts.

Whatcha wanna talk about? I've had a lot of FX processors and know the ins and outs pretty well. Phenom CPUs as well.

 

How can I assist the conversation of the old architecture?

The lawsuit??

The 5ghz 9590?

Where to start where to finish?


Been running and abusing BD since the very beginning here.

I've run these on everything from stock air cooling right up to liquid nitrogen itself.

So.... Waddaya wanna know?

 

Believe it or not, once you learn them these chips aren't that complicated; they're kinda simple, in fact, in what they want and need. Certain things differ depending on whether it's a Zambezi or a Vishera, but aside from core count they're largely all the same thing in the end.


 

"If you ever need anything please don't hesitate to ask someone else first"..... Nirvana
"Whadda ya mean I ain't kind? Just not your kind"..... Megadeth
Speaking of things being "All Inclusive", Hell itself is too.

 


@Beerzerker 

I'm sure you have this beaten. I was gaming CSS/HL2DM on this setup (in between benching, of course), full LN2 pot :D

 

For those wondering, that board has 4+1 VRMs. I was told it couldn't be done. lol

 

[image attachment]


2 hours ago, Curious Pineapple said:

The Bulldozers, however, are faster in modern games than comparable Intel CPUs from the time. AMD banked on more developers shifting FP operations onto GPUs, since that's largely what GPUs do.

They would have been going up against the 2600K, I think. In a quick search I found this:

It still doesn't look so great... unless you have better data? Note the video compares both overclocked; at stock, the 8350 would have more of a clock advantage, but I don't think that would be enough to save it.

 

2 hours ago, Mira Yurizaki said:

Maybe in games where DirectCompute was a thing, but most non-3D applications use the CPU for FP operations because there aren't enough of them to justify running them on the GPU.

I had to look it up to work out what timescale we're looking at. Nvidia's 500 series was current then, which is interesting as it was the last generation before FP64 compute got cut in consumer cards, although FP32 performance continued to grow going forward. GPU compute was still relatively young at the time, and GPUs weren't as flexible in what they could do compared to today. What they were good at, they were very good at; everything else, you might as well run on the CPU.

 

The FP64 nerf on GPUs is also why I'm more interested in CPU FP performance, as FP64 rates remain decent there.



29 minutes ago, porina said:

I had to look it up to work out what timescale we're looking at. Nvidia's 500 series was current then, which is interesting as it was the last generation before FP64 compute got cut in consumer cards, although FP32 performance continued to grow going forward. GPU compute was still relatively young at the time, and GPUs weren't as flexible in what they could do compared to today. What they were good at, they were very good at; everything else, you might as well run on the CPU.

 

The FP64 nerf on GPUs is also why I'm more interested in CPU FP performance, as FP64 rates remain decent there.

I was more alluding to what GPUs were designed to do: crunch a bunch of data at once using the same instructions, at the cost of higher latency. So doing a small number of operations on a GPU, especially ones that aren't running the same instructions, is an incredibly inefficient way of doing things.

 

It's like spinning up a mass-production facility to build a one-off prototype.
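That latency-versus-throughput point can be sketched with a toy cost model. The numbers below are made up purely for illustration, not measurements of any real hardware:

```python
# Toy model: total time = fixed overhead + per-element cost.
# The GPU pays a big fixed cost (kernel launch + data transfer) but
# has a tiny per-element cost; the CPU is the opposite.
# All numbers are invented for illustration.

CPU_LATENCY_US = 0.01     # near-zero call overhead
CPU_PER_ELEM_US = 0.01    # modest per-element cost
GPU_LATENCY_US = 50.0     # launch + PCIe transfer overhead
GPU_PER_ELEM_US = 0.0001  # massively parallel per-element cost

def cpu_time(n):
    return CPU_LATENCY_US + n * CPU_PER_ELEM_US

def gpu_time(n):
    return GPU_LATENCY_US + n * GPU_PER_ELEM_US

for n in (1, 1_000, 1_000_000):
    winner = "CPU" if cpu_time(n) < gpu_time(n) else "GPU"
    print(f"n={n:>9}: CPU {cpu_time(n):>11.2f} us, "
          f"GPU {gpu_time(n):>11.2f} us -> {winner} wins")
```

With these made-up constants, the CPU wins for one element and even a thousand; the GPU only pulls ahead once the batch is large enough to amortize its fixed overhead, which is exactly the "one-off prototype" point above.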


3 minutes ago, Mira Yurizaki said:

I was more alluding to what GPUs were designed to do: crunch a bunch of data at once using the same instructions, at the cost of higher latency. So doing a small number of operations on a GPU, especially ones that aren't running the same instructions, is an incredibly inefficient way of doing things.

Fair enough. The same problem applies to more cores on CPUs too; it's difficult to fill up wide execution potential efficiently.
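That scaling difficulty is what Amdahl's law captures: if only a fraction p of a workload is parallelizable, n cores give a speedup of 1 / ((1 - p) + p/n). A quick sketch of why extra cores alone didn't help typical 2011-era desktop software:

```python
# Amdahl's law: speedup from n cores when only fraction p of the
# work can run in parallel. The serial fraction (1 - p) dominates
# quickly as core counts rise.

def amdahl_speedup(p, n):
    """Ideal speedup for parallel fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.95):
    print(f"p={p:.2f}: 4 cores -> {amdahl_speedup(p, 4):.2f}x, "
          f"8 cores -> {amdahl_speedup(p, 8):.2f}x")
```

Even at 90% parallel code, doubling from 4 to 8 cores gains well under 2x, which is why eight weak cores struggled against four strong ones in the software of the day.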



I apologize for the delay (I had to go to the hospital). I plan on making a nice write-up, and a good amount of it will be old posts from AMD themselves. I love the feedback so far. Again, this isn't about hating on this or that; if anything I'm hard on AMD because I expect better. Even then, Zen for example proves that AMD is an amazing company, as do their older CPUs from '99-'05.


1 hour ago, CPUguy101 said:

I apologize for the delay (I had to go to the hospital). I plan on making a nice write-up, and a good amount of it will be old posts from AMD themselves. I love the feedback so far. Again, this isn't about hating on this or that; if anything I'm hard on AMD because I expect better. Even then, Zen for example proves that AMD is an amazing company, as do their older CPUs from '99-'05.

 

Still have plenty of those too.
Here's the list of what I either have or have had; I do have multiple examples of a few of the items listed: https://hwbot.org/user/bones/#Hardware_Library

I have more that isn't on the list too. It's not much, but I've got it, and most of what's shown I still have in working order.

"If you ever need anything please don't hesitate to ask someone else first"..... Nirvana
"Whadda ya mean I ain't kind? Just not your kind"..... Megadeth
Speaking of things being "All Inclusive", Hell itself is too.

 


Pretty sure my old Phenom II X4 965 was faster than my FX-6300.

Before you reply to my post, REFRESH. 99.99% chance I edited my post. 

 

My System: i7-13700KF // Corsair iCUE H150i Elite Capellix // MSI MPG Z690 Edge Wifi // 32GB DDR5 G. SKILL RIPJAWS S5 6000 CL32 // Nvidia RTX 4070 Super FE // Corsair 5000D Airflow // Corsair SP120 RGB Pro x7 // Seasonic Focus Plus Gold 850w //1TB ADATA XPG SX8200 Pro/1TB Teamgroup MP33/2TB Seagate 7200RPM Hard Drive // Displays: LG Ultragear 32GP83B x2 // Royal Kludge RK100 // Logitech G Pro X Superlight // Sennheiser DROP PC38x


1 minute ago, Mister Woof said:

Pretty sure my old Phenom II X4 965 was faster than my FX-6300.

Probably was. My 1090T was faster than my FX-8350, 6 cores vs 8.

