What's Wrong with the FX Series CPUs?


Hey guys!

So I've been helping a friend plan a budget build and have been comparing a slew of CPUs. The top two I was looking at were the R3 1200 and the G4560.

However, recently I've noticed the older AMD FX series of CPUs. Why aren't these discussed as much as Ryzen? A 3.5 GHz 6-core processor for $80 (FX-6300) and a 4.7 GHz 8-core processor for only $120 (FX-9590) seem fairly promising.

Why aren't these chips used more often? Would an FX chip be a better budget CPU, since I can get more physical cores and higher frequencies for the same price or cheaper?

Thanks for the help!


They are ancient chips from 2011-2012. They don't have the performance of modern-day stuff like the G4560 and R3 1200, and they consume much more power and put out more heat as well. Cores and frequency do not equal performance.


1 minute ago, Lurick said:

They are ancient chips from 2011-2012. They don't have the performance of modern-day stuff like the G4560 and R3 1200, and they consume much more power and put out more heat as well. Cores and frequency do not equal performance.

What specifically makes the G4560 more powerful?

Sorry, I'm pretty new to the PC building process, so I really only know to look at cores and frequency.


They are old and just overall terrible in every way possible.


Just now, jaysangwan32 said:

What specifically makes the G4560 more powerful?

Sorry, I'm pretty new to the PC building process, so I really only know to look at cores and frequency.

IPC, or Instructions Per Clock, is what makes a chip more powerful.

It's the number of instructions that can be executed each clock cycle on the chip, so think of it like this: if I can run really fast around the racetrack but can only do 1 thing per lap, while my buddy runs a little slower but does 3 things per lap, he'll get more done overall. Obviously there's more to it than that, but it's the easiest way I can think of to explain it.
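To put rough numbers on that racetrack picture, here's a quick back-of-the-envelope Python sketch. The IPC values are made up purely for illustration, not measured figures for any of these chips:

# Per-core throughput is roughly IPC * clock speed.
# The IPC numbers here are invented for illustration only.
chips = {
    "high clock, low IPC (FX-style)": {"ipc": 1.0, "clock_ghz": 4.7},
    "lower clock, high IPC (modern)": {"ipc": 2.0, "clock_ghz": 3.5},
}

for name, spec in chips.items():
    # billions of instructions per second, per core
    throughput = spec["ipc"] * spec["clock_ghz"]
    print(f"{name}: ~{throughput:.1f} billion instructions/s")

Even with a 1.2 GHz clock deficit, the higher-IPC chip comes out well ahead.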


1 minute ago, jaysangwan32 said:

What specifically makes the G4560 more powerful?

Sorry, I'm pretty new to the PC building process, so I really only know to look at cores and frequency.

Right now Ryzen and Intel have a similar number of instructions per clock, while those older CPUs had much lower IPC.


Because FX has about half the IPC of anything modern, you have to divide that 3.6 GHz by roughly 2 to get a comparable figure.
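Taking that rule of thumb at face value: 3.6 GHz ÷ 2 ≈ 1.8 GHz of modern-equivalent per-core speed, which is why the big FX clock numbers are so misleading.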


1 minute ago, Lurick said:

IPC, or Instructions Per Clock, is what makes a chip more powerful.

It's the number of instructions that can be executed each clock cycle on the chip, so think of it like this: if I can run really fast around the racetrack but can only do 1 thing per lap, while my buddy runs a little slower but does 3 things per lap, he'll get more done overall. Obviously there's more to it than that, but it's the easiest way I can think of to explain it.

Ohh, thank you for the simple explanation, it makes a ton of sense now!


As someone who owns two computers using FX chips: they're terrible at single-threaded operations and their multi-threaded scaling is also bad. I have an 8-year-old laptop with a dual-core CPU that has a better single-core Cinebench score than an overclocked FX CPU.

 

Just don't bother with them, trust me.


They're old, and the FX chips have relatively poor IPC in comparison to their Intel rivals.

 

That's why IPC was a critical point of discussion with the release of Ryzen.


1 minute ago, jaysangwan32 said:

Ohh, thank you for the simple explanation, it makes a ton of sense now!

As for your original question: if you can get the R3 1200 + B350 motherboard for about the same price as a G4560 and B250 board, I would pick the Ryzen chip and board. You can overclock it to get more life out of it down the road, and the stock cooler is pretty good as well.


1 minute ago, Lurick said:

As for your original question: if you can get the R3 1200 + B350 motherboard for about the same price as a G4560 and B250 board, I would pick the Ryzen chip and board. You can overclock it to get more life out of it down the road, and the stock cooler is pretty good as well.

Yeah, I've been researching both significantly and I'll probably advise he go with the 1200, even if it's slightly more.

RIP Kaby Lake, so it'd be nice if he can get an upgrade path and not have to invest in an aftermarket cooler.

Thanks for the suggestion!


11 minutes ago, jaysangwan32 said:

Hey guys!

So I've been helping a friend plan a budget build and have been comparing a slew of CPUs. The top two I was looking at were the R3 1200 and the G4560.

However, recently I've noticed the older AMD FX series of CPUs. Why aren't these discussed as much as Ryzen? A 3.5 GHz 6-core processor for $80 (FX-6300) and a 4.7 GHz 8-core processor for only $120 (FX-9590) seem fairly promising.

Why aren't these chips used more often? Would an FX chip be a better budget CPU, since I can get more physical cores and higher frequencies for the same price or cheaper?

Thanks for the help!

From my personal use, I can tell you two things. 1) Clock speeds alone aren't an accurate representation of speed; the jump I had from an FX-6350 to a Ryzen 1600 is insane. 2) They are known for creating quite a lot of heat and being power hogs.


3 minutes ago, jaysangwan32 said:

Yeah, I've been researching both significantly and I'll probably advise he go with the 1200, even if it's slightly more.

RIP Kaby Lake, so it'd be nice if he can get an upgrade path and not have to invest in an aftermarket cooler.

Thanks for the suggestion!

What is the budget? If you were considering an aftermarket cooler, skipping it may put you in R5 1400 range.


7 minutes ago, Damascus said:

What is the budget? If you were considering an aftermarket cooler, skipping it may put you in R5 1400 range.

$600-650ish including a monitor. Unfortunately, no room for a 1400.

Thank you for the suggestion though!


Well, as others said, they're an older generation that's inferior to Ryzen in pretty much every way. While they offered some decent value for certain workloads, in most things they fell behind, since they relied too much on software taking advantage of all their cores. They were also less power-efficient than Intel's offerings. It's not that there's anything "wrong" with them; they just weren't very good compared to the competition. They can still be worth buying in certain niche scenarios, since their price has dropped dramatically since Ryzen's launch. In most cases a Ryzen 3 is just better, though.


26 minutes ago, jaysangwan32 said:

Why aren't these chips used more often? Would an FX chip be a better budget CPU, since I can get more physical cores and higher frequencies for the same price or cheaper?

 


5 minutes ago, KenjiUmino said:

 

Definitely will check it out.

Thank you for the link!


Part of the reason why Bulldozer wasn't really as great as Sandy Bridge at the time:

  • Bulldozer relied too much on a deep instruction pipeline. This meant there were more stages to get one instruction out the door. While it's hard to say how long each stage took on Bulldozer vs. Sandy Bridge, it basically meant that if the branch prediction was wrong (and it can be), there was a longer delay in getting instructions flowing again than on Sandy Bridge.
  • Bulldozer had fewer execution resources per core than K10. Setting the shared FPU aside (since a lot of tasks most people run don't touch the FPU much), each core had only two execution units compared to three on K10. This is why, in some tests, Bulldozer had worse performance than a similarly clocked Phenom II chip.

I think it would've been better if AMD hadn't pursued a deep pipeline, because every time someone pursues that (well, there's really only one other time I can remember someone doing it), it doesn't end well.
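To make the pipeline point concrete, here's a minimal Python sketch using the standard textbook model for misprediction cost (all numbers are invented for illustration, not real Bulldozer or Sandy Bridge figures):

# Effective cycles-per-instruction with branch mispredictions:
#   CPI_eff = CPI_base + branch_freq * mispredict_rate * flush_penalty
# A deeper pipeline means a bigger flush_penalty every time the
# branch predictor guesses wrong.

def effective_cpi(cpi_base, branch_freq, mispredict_rate, flush_penalty):
    return cpi_base + branch_freq * mispredict_rate * flush_penalty

# Illustrative numbers: ~20% of instructions are branches, 5% of those mispredicted.
shallow = effective_cpi(1.0, 0.20, 0.05, 14)  # shorter pipeline, smaller flush
deep = effective_cpi(1.0, 0.20, 0.05, 20)     # deeper pipeline, bigger flush

print(f"shorter pipeline: {shallow:.2f} cycles/instruction")
print(f"deeper pipeline:  {deep:.2f} cycles/instruction")

The gap widens as the misprediction rate goes up, which is why a deep pipeline leans so heavily on accurate branch prediction.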


4 minutes ago, M.Yurizaki said:

Part of the reason why Bulldozer wasn't really as great as Sandy Bridge at the time:

  • Bulldozer relied too much on a deep instruction pipeline. This meant there were more stages to get one instruction out the door. While it's hard to say how long each stage took on Bulldozer vs. Sandy Bridge, it basically meant that if the branch prediction was wrong (and it can be), there was a longer delay in getting instructions flowing again than on Sandy Bridge.
  • Bulldozer had fewer execution resources per core than K10. Setting the shared FPU aside (since a lot of tasks most people run don't touch the FPU much), each core had only two execution units compared to three on K10. This is why, in some tests, Bulldozer had worse performance than a similarly clocked Phenom II chip.

I think it would've been better if AMD hadn't pursued a deep pipeline, because every time someone pursues that (well, there's really only one other time I can remember someone doing it), it doesn't end well.

I still wonder why they didn't just die-shrink the Phenom IIs if the alternative was Bulldozer. Even if it was complicated, it couldn't have been worse than designing a brand-new architecture.


1 minute ago, Sauron said:

I still wonder why they didn't just die-shrink the Phenom IIs if the alternative was Bulldozer. Even if it was complicated, it couldn't have been worse than designing a brand-new architecture.

I thought AMD's goals were noble to some extent; they just made a bunch of boneheaded decisions getting there.

But in retrospect, most of those goals (aside from the ones that were already historically known to be a bad idea) were probably boneheaded too.


4 minutes ago, M.Yurizaki said:

I thought AMD's goals were noble to some extent; they just made a bunch of boneheaded decisions getting there.

But in retrospect, most of those goals (aside from the ones that were already historically known to be a bad idea) were probably boneheaded too.

At least now they know. As long as you learn from your mistakes, they aren't really as bad.

