AMD Zen CPUs will have PGA mounting systems

kameshss

Good. I like PGA better.

CPU: I7 3770k @4.8 ghz | GPU: GTX 1080 FE SLI | RAM: 16gb (2x8gb) gskill sniper 1866mhz | Mobo: Asus P8Z77-V LK | PSU: Rosewill Hive 1000W | Case: Corsair 750D | Cooler:Corsair H110| Boot: 2X Kingston v300 120GB RAID 0 | Storage: 1 WD 1tb green | 2 3TB seagate Barracuda|

 

2 hours ago, Misanthrope said:

Your exaggeration is extremely disingenuous: it is far more likely for you to accidentally bend pins on the CPU by simply removing it from the box, switching to another rig, etc., than it is for me to basically maneuver a mobo socket-first into the corner of a case.

Given that I store multiple AMD and pre-LGA CPUs in a box without any protection, and I have legitimately thrown the box across my room with no casualties, I think my point still stands.

 

And given just how much I handle PGA CPUs compared to your barely-if-any (more than likely), I think that only reinforces my answer.

Main rig on profile

VAULT - File Server

Spoiler

Intel Core i5 11400 w/ Shadow Rock LP, 2x16GB SP GAMING 3200MHz CL16, ASUS PRIME Z590-A, 2x LSI 9211-8i, Fractal Define 7, 256GB Team MP33, 3x 6TB WD Red Pro (general storage), 3x 1TB Seagate Barracuda (dumping ground), 3x 8TB WD White-Label (Plex) (all 3 arrays in their respective Windows Parity storage spaces), Corsair RM750x, Windows 11 Education

Sleeper HP Pavilion A6137C

Spoiler

Intel Core i7 6700K @ 4.4GHz, 4x8GB G.SKILL Ares 1800MHz CL10, ASUS Z170M-E D3, 128GB Team MP33, 1TB Seagate Barracuda, 320GB Samsung Spinpoint (for video capture), MSI GTX 970 100ME, EVGA 650G1, Windows 10 Pro

Mac Mini (Late 2020)

Spoiler

Apple M1, 8GB RAM, 256GB, macOS Sonoma

Consoles: Softmodded 1.4 Xbox w/ 500GB HDD, Xbox 360 Elite 120GB Falcon, XB1X w/2TB MX500, Xbox Series X, PS1 1001, PS2 Slim 70000 w/ FreeMcBoot, PS4 Pro 7015B 1TB (retired), PS5 Digital, Nintendo Switch OLED, Nintendo Wii RVL-001 (black)

36 minutes ago, cayphed said:

I'm just saying, that'll impress me.

Also, I'd like to see VMs in VMs completely conflict-free; now wouldn't that be nice?

And I completely agree with the RAM speeds, but I'm not sure even HBM or HMC could get us there either.

My whole reason for even entertaining the argument is simply that I just don't believe ANYONE is even trying to come up with something new, groundbreaking, or even useful...

I'm curious: can you come up with any use-case scenarios for multi-layered VM setups? Why would you need to run a VM inside a VM, instead of, say, beside it on the same host?
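One common nested-VM use case is standing up a hypervisor lab inside a VM you rent from a cloud provider. On Linux, KVM exposes whether nested virtualization is enabled through a module parameter; a minimal check might look like this (a sketch; the sysfs paths are the usual KVM locations, but whether they exist depends on the host):

```python
from pathlib import Path

def nested_virt_enabled(param_file: str) -> bool:
    """Return True if a KVM 'nested' parameter file reports nested virt as on.

    Typical locations: /sys/module/kvm_intel/parameters/nested (Intel)
    and /sys/module/kvm_amd/parameters/nested (AMD); the file holds
    "Y" or "1" when nesting is enabled.
    """
    p = Path(param_file)
    if not p.exists():
        return False  # module not loaded, or not a KVM host
    return p.read_text().strip() in {"Y", "y", "1"}
```

Calling `nested_virt_enabled("/sys/module/kvm_intel/parameters/nested")` on a host set up for VM-in-VM should return True.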

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 

1 hour ago, i_build_nanosuits said:

Can someone with a 4K monitor count how many pins please?! :P

 

1331.

 

Think I'm lying?

 

Check it yourself. :P

Intel Core i3 2100 @ 3.10GHz - Intel Stock Cooler - Zotac Geforce GT 610 2GB Synergy Edition

Intel DH61WW - Corsair® Value Select 4GBx1 DDR3 1600 MHz - Antec BP-300P PSU

WD Green 1TB - Seagate 2.5" HDD 1TB - Seagate Barracuda 500GB - Antec X1 E.

Random thought regarding pin density/count:

 

Why has no one tried to implement a multi-layered pin system to increase connections without increasing physical pins? Something like a headphone jack, where you have multiple connections on one pin by embedding pins within each other, separated by insulating material. I'm sure that would increase the price of production a bit, but you could double, triple, or quadruple the number of connections without actually increasing the number of pins. It seems like they'll eventually reach a point where they cannot realistically add more pins to a socket (just look at how many pins Skylake-E has) without exclusively selling the CPUs permanently soldered to the motherboard.
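For scale, the arithmetic behind the idea is simple: k concentric contacts per pin multiplies the signal count without shrinking the pin pitch. A toy calculation (the socket size and pitch below are purely illustrative, not any real socket's spec):

```python
def pins_in_grid(socket_mm: float, pitch_mm: float) -> int:
    """Number of pins that fit in a full square grid of side socket_mm."""
    per_side = int(socket_mm // pitch_mm) + 1
    return per_side * per_side

def connections(socket_mm: float, pitch_mm: float, layers: int) -> int:
    """Total signals if each pin carried `layers` concentric contacts."""
    return pins_in_grid(socket_mm, pitch_mm) * layers

# Illustrative ~40 mm grid at 1.27 mm pitch:
base = connections(40.0, 1.27, 1)     # conventional single-contact pins
tripled = connections(40.0, 1.27, 3)  # headphone-jack-style, 3 layers per pin
```

Tripling the layers triples the signal count while the physical grid stays identical, which is exactly the trade-off being proposed.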

Primary PC-

CPU: Intel i7-6800k @ 4.2-4.4Ghz   CPU COOLER: Bequiet Dark Rock Pro 4   MOBO: MSI X99A SLI Plus   RAM: 32GB Corsair Vengeance LPX quad-channel DDR4-2800  GPU: EVGA GTX 1080 SC2 iCX   PSU: Corsair RM1000i   CASE: Corsair 750D Obsidian   SSDs: 500GB Samsung 960 Evo + 256GB Samsung 850 Pro   HDDs: Toshiba 3TB + Seagate 1TB   Monitors: Acer Predator XB271HUC 27" 2560x1440 (165Hz G-Sync)  +  LG 29UM57 29" 2560x1080   OS: Windows 10 Pro

Album

Other Systems:

Spoiler

Home HTPC/NAS-

CPU: AMD FX-8320 @ 4.4Ghz  MOBO: Gigabyte 990FXA-UD3   RAM: 16GB dual-channel DDR3-1600  GPU: Gigabyte GTX 760 OC   PSU: Rosewill 750W   CASE: Antec Gaming One   SSD: 120GB PNY CS1311   HDDs: WD Red 3TB + WD 320GB   Monitor: Samsung SyncMaster 2693HM 26" 1920x1200 -or- Steam Link to Vizio M43C1 43" 4K TV  OS: Windows 10 Pro

 

Offsite NAS/VM Server-

CPU: 2x Xeon E5645 (12-core)  Model: Dell PowerEdge T610  RAM: 16GB DDR3-1333  PSUs: 2x 570W  SSDs: 8GB Kingston Boot FD + 32GB Sandisk Cache SSD   HDDs: WD Red 4TB + Seagate 2TB + Seagate 320GB   OS: FreeNAS 11+

 

Laptop-

CPU: Intel i7-3520M   Model: Dell Latitude E6530   RAM: 8GB dual-channel DDR3-1600  GPU: Nvidia NVS 5200M   SSD: 240GB TeamGroup L5   HDD: WD Black 320GB   Monitor: Samsung SyncMaster 2693HM 26" 1920x1200   OS: Windows 10 Pro

Having issues with a Corsair AIO? Possible fix here:

Spoiler

Are you getting weird fan behavior, speed fluctuations, and/or other issues with Link?

Are you running AIDA64, HWinfo, CAM, or HWmonitor? (ASUS suite & other monitoring software often have the same issue.)

Corsair Link has problems with some monitoring software so you may have to change some settings to get them to work smoothly.

-For AIDA64: First make sure you have the newest update installed, then, go to Preferences>Stability and make sure the "Corsair Link sensor support" box is checked and make sure the "Asetek LC sensor support" box is UNchecked.

-For HWinfo: manually disable all monitoring of the AIO sensors/components.

-For others: Disable any monitoring of Corsair AIO sensors.

That should fix the fan issue for some Corsair AIOs (H80i GT/v2, H110i GTX/H115i, H100i GTX, and others made by Asetek). The problem is bad coding in Link that fights with other programs for AIO control. You can test whether this worked by setting the fan speed in Link to 100%; if it doesn't fluctuate, you are set and can change the curve to whatever you like. If that doesn't work, or you're still having other issues, then you probably still have monitoring software interfering with the AIO/Link communications; find what it is and disable it.

12 minutes ago, pyrojoe34 said:

Random thought regarding pin density/count:

 

Why has no one tried to implement a multi-layered pin system to increase connections without increasing physical pins? Something like a headphone jack, where you have multiple connections on one pin by embedding pins within each other, separated by insulating material. I'm sure that would increase the price of production a bit, but you could double, triple, or quadruple the number of connections without actually increasing the number of pins. It seems like they'll eventually reach a point where they cannot realistically add more pins to a socket (just look at how many pins Skylake-E has) without exclusively selling the CPUs permanently soldered to the motherboard.

ANY switching mechanism within the pins would contribute to increased latency and crosstalk interference. Not things you want to deal with.

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 

11 minutes ago, Curufinwe_wins said:

ANY switching mechanism within the pins would contribute to increased latency and crosstalk interference. Not things you want to deal with.

No, it wouldn't require any switching; it would work just like a headphone jack: each layer would be constantly connected and able to transmit data. The connection would just be layered rather than a single "2D" grid.

2 minutes ago, pyrojoe34 said:

No, it wouldn't require any switching; it would work just like a headphone jack: each layer would be constantly connected and able to transmit data. The connection would just be layered rather than a single "2D" grid.

Cross talk would still be a huge issue.

9 minutes ago, Curufinwe_wins said:

Cross talk would still be a huge issue.

"could", not would. I'm assuming Intel/AMD Engineers have already examined such a possibility, but the reasons why it hasn't been tried yet may have nothing to do with crosstalk. It might simply be because it's more expensive or not a limitation yet, etc.

3 minutes ago, dalekphalm said:

"could", not would. I'm assuming Intel/AMD Engineers have already examined such a possibility, but the reasons why it hasn't been tried yet may have nothing to do with crosstalk. It might simply be because it's more expensive or not a limitation yet, etc.

No... it WOULD be a huge issue. Look at all the insulation added to Cat cables in the quest for 600 MHz coherence. Now consider transmissions 5-10x more frequent than that, in a smaller space, with thinner conductors, with ZERO tolerance for bit rot.

 

 

1 minute ago, Curufinwe_wins said:

No... it WOULD be a huge issue. Look at all the insulation added to Cat cables in the quest for 600 MHz coherence. Now consider transmissions 5-10x more frequent than that, in a smaller space, with thinner conductors, with ZERO tolerance for bit rot.

Again, since I am not a CPU engineer, I cannot just take your word for that. Ethernet cables and CPU socket pins are significantly different. Ethernet cables also need to run literally 20,000 times the distance of a socket pin (estimated length of a PGA pin = 5 mm; maximum length of an Ethernet cable = 100 m without crosstalk being an issue).
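The 20,000x figure is just the two lengths divided; both numbers come from the post itself (a ~5 mm pin estimate and the 100 m Ethernet run limit), kept in integer millimetres here to avoid float noise:

```python
pin_length_mm = 5          # estimated PGA pin length, ~5 mm
cable_length_mm = 100_000  # 100 m maximum structured-cabling Ethernet run

ratio = cable_length_mm // pin_length_mm
assert ratio == 20_000  # the "20,000 times the distance" claim checks out
```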

 

So no, I cannot say it would be an issue. If you have sources that back that up, feel free to post them. However, I'm not sure anyone has even tried a multi-barrel pin before, so I would be surprised if there was any data available on the subject.

 

As I said, crosstalk very well might be an issue - or maybe it isn't. If you do have a source to back up your claim, I would love to read it, and I don't mind being wrong when presented with evidence.

1 hour ago, tmcclelland455 said:

Given that I store multiple AMD and pre-LGA CPUs in a box without any protection, and I have legitimately thrown the box across my room with no casualties, I think my point still stands.

 

And given just how much I handle PGA CPUs compared to your barely-if-any (more than likely), I think that only reinforces my answer.

Anecdotal points never stand to rational-minded people, so bye.

-------

Current Rig

-------

1 minute ago, Misanthrope said:

Anecdotal points never stand to rational-minded people, so bye.

To be fair, no one countered his claims with a verifiable source, so the group saying "PGA Pins are durable as fuck" and the group saying "PGA Pins literally murdered my dog" are basically in the same group.

 

It's entirely anecdotal evidence all the way around. I'd be interested in seeing some RMA stats about bent CPU PGA pins vs bent Motherboard LGA pins, and the rate of occurrence... but I'm not holding my breath on those stats ever appearing.

2 minutes ago, dalekphalm said:

Again, since I am not a CPU engineer, I cannot just take your word for that. Ethernet cables and CPU socket pins are significantly different. Ethernet cables also need to run literally 20,000 times the distance of a socket pin (estimated length of a PGA pin = 5 mm; maximum length of an Ethernet cable = 100 m without crosstalk being an issue).

 

So no, I cannot say it would be an issue. If you have sources that back that up, feel free to post them. However, I'm not sure anyone has even tried a multi-barrel pin before, so I would be surprised if there was any data available on the subject.

 

As I said, crosstalk very well might be an issue - or maybe it isn't. If you do have a source to back up your claim, I would love to read it, and I don't mind being wrong when presented with evidence.

Cat 6a can't run 600 MHz at even 1 m. That was literally the point of Cat 7.

 

Additionally, we have already seen memory stability differences in DDR4 going down from ATX to mITX motherboards, purely due to the DIMM slots being closer to the socket (hence why the world-record memory speed is currently on a fairly mid-tier ITX board). Crosstalk and bit rot are already issues literally everywhere today.

 

IT WOULD NOT WORK. END OF STORY.

 

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670506

 

Here is a paper for you about it.

59 minutes ago, pyrojoe34 said:

Random thought regarding pin density/count:

 

Why has no one tried to implement a multi-layered pin system to increase connections without increasing physical pins? Something like a headphone jack, where you have multiple connections on one pin by embedding pins within each other, separated by insulating material. I'm sure that would increase the price of production a bit, but you could double, triple, or quadruple the number of connections without actually increasing the number of pins. It seems like they'll eventually reach a point where they cannot realistically add more pins to a socket (just look at how many pins Skylake-E has) without exclusively selling the CPUs permanently soldered to the motherboard.

You can't. As others have already told you, it would make the pins hard to solder onto the substrate, the insulation thickness between the segments would kill any added benefit, and the socket would be several times more complicated to manufacture. You'd also have capacitance between the segments of the pin, leakage, and so on... too messy.

 

As it stands now, pins are spaced just enough to allow a few traces to go between the socket contacts, making it easier to route all the signal wires without resorting to 8-10+ layers on the PCB. For a modern processor, we already have around 6 layers on each motherboard just to route all the wires to the PCIe connectors and memory slots, and to route the power planes to VRMs that pass tens of amps to the CPU.

 

32 minutes ago, Curufinwe_wins said:

Cat 6a can't run 600 MHz at even 1 m. That was literally the point of Cat 7.

 

Additionally, we have already seen memory stability differences in DDR4 going down from ATX to mITX motherboards, purely due to the DIMM slots being closer to the socket (hence why the world-record memory speed is currently on a fairly mid-tier ITX board). Crosstalk and bit rot are already issues literally everywhere today.

 

IT WOULD NOT WORK. END OF STORY.

 

http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670506

 

Here is a paper for you about it.

I'll have to take your word on that article, since it's behind a paywall, unless you want to quote the sections that specifically support your claims. The abstract notes that pin location and configuration are very important (no disagreements there), but obviously you need access to the full text to actually see their findings.

 

Anyway, we'll leave it at that. I didn't claim we should even be trying to use this method of pin configuration; simply that engineers may never have explored the possibility, due to other concerns that would prevent them from doing any testing on it.

 

As for your point about Cat 6a vs Cat 7, I'm really not sure what you're getting at. Cat 6a could not achieve 600 MHz, and Cat 7 did: a clear evolution in dealing with crosstalk (albeit in a way that likely has little to no relevance to CPU pins) and achieving a higher transmission frequency.
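For reference, the specified bandwidths of the twisted-pair categories being argued about (standard ISO/TIA figures), as a quick lookup:

```python
# Specified bandwidth per twisted-pair cable category, in MHz (ISO/TIA).
CATEGORY_MHZ = {
    "Cat 5e": 100,
    "Cat 6": 250,
    "Cat 6a": 500,
    "Cat 7": 600,
}

def supports(category: str, mhz: int) -> bool:
    """Whether a category is specified for the given signal bandwidth."""
    return CATEGORY_MHZ[category] >= mhz

assert not supports("Cat 6a", 600)  # Cat 6a is only specified to 500 MHz
assert supports("Cat 7", 600)       # 600 MHz was the point of Cat 7
```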

9 hours ago, Dabombinable said:

Still, they aren't actually specifying what Zen is 40% faster at, because it won't be 40% faster at everything.

40% more IPC

I think it would actually be easier to bend the tiny pins on Intel motherboards than the broader ones on AMD CPUs, and it shouldn't be an issue for either one unless you are throwing your CPU and motherboard around your room.

1 hour ago, Misanthrope said:

Anecdotal points never stand to rational-minded people, so bye.

You're just mad you don't have anything to counter with. So bye. -_-

48 minutes ago, ScootsMcgoots said:

I dunno much about this, but "my CPU is more likely to be damaged than the motherboard" (I'm referring to how the pins are on the CPU) is a less convincing argument than "LGA would possibly allow a more advanced piece of hardware." Yeah, it's cool that your CPU has pins on it and might be more resistant to shock, but that's an odd quality. That's sort of like saying "Well, your laptop may have better battery life, but mine has a larger space on top of it to place my coffee cup." I guess it's cool if the feature matters to you, but I'd think that if the feature was really important, reviewers would actually criticize that quality of hardware. I haven't heard someone reviewing a mobo complain about how fragile the pins are on LGA; I've just heard them say "be careful" etc., since it's common sense. It's not exactly something manufacturers should have to worry about whilst designing their hardware. I understand breaking the pins by accident once in a while (I dropped my CPU from a couple inches up down into the socket), but if you treat your hardware so badly that being able to be rough with it is a quality you desire, then maybe you should expect broken components.

I don't think there's any reason to believe LGA allows more advanced hardware. There seems to be speculation that LGA allows more pins. Are more pins necessarily more advanced or better? Intel's consumer platform reduces and increases the number of pins by a single-digit number from year to year with no apparent difference. Intel's prosumer/enterprise products have a lot more pins. Zen would supposedly have some 1,300 pins: more than Intel's consumer products, less than enterprise. Does it then need LGA? No. It already has more pins than the consumer products.

Are all those extra pins necessary on the enterprise stuff? Perhaps. But supposedly Zen for enterprise will have LGA boards, so it will use LGA when necessary. Problem solved. PGA won't hold the platform back.
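The contact counts in question, side by side (LGA socket names encode the count; AM4 is the 1331-pin figure counted earlier in the thread):

```python
# Socket contact counts; LGA names encode the number of lands.
SOCKET_PINS = {
    "AM4 (PGA)": 1331,   # Zen desktop, per the pin count earlier in the thread
    "LGA 1151": 1151,    # Intel consumer (Skylake)
    "LGA 2011-3": 2011,  # Intel HEDT/enterprise (Haswell-E/Broadwell-E)
}

# Zen's PGA socket already carries more contacts than Intel's consumer
# LGA parts, so pin count alone doesn't force a switch to LGA.
assert SOCKET_PINS["AM4 (PGA)"] > SOCKET_PINS["LGA 1151"]
```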

 

At best one could question why AMD would go for both approaches instead of sticking to one, and how feasible that is, but I doubt it changes much practically or financially. Both Intel and AMD already have to do BGA in addition to whatever else they're doing anyway.

 

I think we've come to the point where people obsess over the pin count and pin placement. It would seem that people idealize the LGA approach because Intel does it. Do people think Intel owes the performance of its Core processors to using LGA? Seems ridiculous but that's the sense you get in this thread.

10 hours ago, ace_cheaply said:

Um, yes they are? Zen has 40% faster IPC.

Considering a Phenom II P920 is still faster in some areas despite having the same IPC as the A8 4555M in my new laptop... AMD's stated performance never holds all across the board.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

3 hours ago, dalekphalm said:

To be fair, no one countered his claims with a verifiable source, so the group saying "PGA Pins are durable as fuck" and the group saying "PGA Pins literally murdered my dog" are basically in the same group.

 

It's entirely anecdotal evidence all the way around. I'd be interested in seeing some RMA stats about bent CPU PGA pins vs bent Motherboard LGA pins, and the rate of occurrence... but I'm not holding my breath on those stats ever appearing.

Conceding that: CPUs are generally more valuable anyway, so assuming both cases are equally likely, LGA is still preferable for a consumer.

2 hours ago, spartaman64 said:

40% more IPC

Than AMD's own lackluster chips, which are now a good 20% or worse behind Intel in IPC today?

 

Meaning that after the usual hype BS of cherry-picked tests and such, and considering they would compete not with current but with future Intel chips, suddenly it's about the same performance, if not a little worse than Intel's.

 

You can quote me as predicting Zen will, as usual, only have a price-to-performance edge, but not actually be better than or equal to high-end Intel.
