CPU with no pins, PCIe-style plug-in. Thoughts?

I know I didn't do a good job on the picture, but just bear with me.

The only reason I suggested this is that I bent a pin once and it pissed me off. Good thing I managed to fix it, though.

cpu-upgrade.jpg


This was already a thing with the old Pentium IIs and some early Pentium IIIs. The real reason I think they moved away from it was the terrible pin density, and they were also able to move the cache onto the CPU die itself.



5 minutes ago, Arkland909 said:

I know I didn't do a good job on the picture, but just bear with me.

cpu-upgrade.jpg

There were a few Pentiums that came on a card.
SL3XM (Intel Pentium III 700 MHz)

[Image: Intel Pentium II processor SL2U6 (80523PY400512PE) slot cartridge, eBay listing photo]

Link to comment
Share on other sites

Link to post
Share on other sites

1 minute ago, WickedThunder86 said:

This was already a thing with the old Pentium IIs and some early Pentium IIIs. The real reason I think they moved away from it was the terrible pin density, and they were also able to move the cache onto the CPU die itself.


I had no idea 😐


2 minutes ago, Latvian Video said:

There were a few Pentiums that came on a card.
SL3XM (Intel Pentium III 700 MHz)

[Image: Intel Pentium II processor SL2U6 (80523PY400512PE) slot cartridge, eBay listing photo]

Shock! 😲


At high frequencies, the wires between the CPU silicon die and everything else must be as short as possible.

 

In the case of some components like RAM, the connection between the CPU and the RAM involves a bunch of pairs of wires (let's say 64 pairs). Not only do the wires within a pair have to be the same length, but all the pairs also have to be the same length as each other.

So with your slot design, you'd have uneven wire lengths from where each wire comes out of the CPU silicon die to its individual pin on the edge connector, and that makes it much harder for a motherboard manufacturer to adjust the length of the traces on the motherboard so that, in the end, the sum of both segments is the same for all those pairs of wires.
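
To put rough numbers on the length-matching problem (these figures are my own back-of-the-envelope assumptions: roughly 170 ps per inch of propagation delay on FR-4 and a made-up 10 ps skew budget, not anything from a spec):

```python
# Rough illustration of why trace length matching matters at high speeds.
# Assumed values (not from any spec): ~170 ps/inch propagation delay on FR-4
# and an example 10 ps skew budget between lines of a parallel bus.

PROP_DELAY_PS_PER_INCH = 170.0   # typical FR-4 inner-layer trace, assumed
SKEW_BUDGET_PS = 10.0            # example budget; the real one depends on the interface

# How much the total lengths are allowed to differ and still stay inside the budget:
allowed_mismatch_in = SKEW_BUDGET_PS / PROP_DELAY_PS_PER_INCH
print(f"Allowed length mismatch: {allowed_mismatch_in:.3f} in "
      f"(~{allowed_mismatch_in * 25.4:.2f} mm)")

# With a slot design, the on-card segment is different for every pin, so every
# motherboard trace needs a different correction to hit the same total length:
card_segment_in = {"pin_A": 0.40, "pin_B": 0.95, "pin_C": 1.60}  # hypothetical lengths
TARGET_TOTAL_IN = 3.0                                            # hypothetical target
for pin, card_len in card_segment_in.items():
    board_len = TARGET_TOTAL_IN - card_len
    print(f"{pin}: {card_len:.2f} in on the card -> motherboard trace must be {board_len:.2f} in")
```

That works out to only about a millimeter and a half of total mismatch allowed, and the card eats into it by a different amount for every single pin.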

It also makes it much harder to route the wires so that some go to the PCI-E slots and some to the RAM. On modern CPUs, one section of the socket is dedicated to RAM and another to PCI-E; for example, the RAM wires go straight to the right towards the RAM slots, and the PCI-E section is towards the bottom so those wires can go straight down to the PCI-E slots.

 

Intel went with the slot configuration because they needed room for the cache chips near the CPU. Back then the manufacturing process wasn't advanced enough to put that much cache inside the CPU chip; there were too many manufacturing flaws because the chips ended up too big.

It made more sense to keep the actual processors small (so you get more CPUs out of a silicon wafer, and more of them fully working) and to make the cache chips separately.

The cache chips were big, but they're basically repetitive inside, so if there was a manufacturing flaw it was possible to isolate the chunk of cache memory containing the flaw and simply sell the chip as a smaller-capacity part (for example, make a 256 Kbit RAM chip, but 1 KB of it is bad, so they isolate the half that contains that 1 KB of bad memory cells and sell the chip as a 128 Kbit part).
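
As a rough illustration of that die-size vs. yield trade-off (this uses the simple Poisson yield model Y = e^(-D*A) with made-up defect densities and die areas, not real Intel numbers):

```python
import math

# Simple Poisson yield model: Y = exp(-D * A)
# D = defect density (defects per cm^2), A = die area (cm^2).
# All the numbers below are made up purely to show the trend.

DEFECT_DENSITY = 0.5  # defects per cm^2, hypothetical

def good_die_fraction(die_area_cm2: float) -> float:
    """Expected fraction of dies with no killer defect."""
    return math.exp(-DEFECT_DENSITY * die_area_cm2)

dies = [
    ("small CPU die, cache on separate chips", 1.0),  # cm^2, hypothetical
    ("big CPU die with the cache on board",    3.0),  # cm^2, hypothetical
]
for name, area in dies:
    print(f"{name}: {area:.1f} cm^2 -> ~{good_die_fraction(area) * 100:.0f}% good dies")
```

The bigger die loses a huge chunk of yield, while a flawed cache chip can still be salvaged as a smaller part, which is exactly why splitting them up made sense back then.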

Back then, we didn't have the high frequencies we currently use: back then we had 133-200 MHz SDRAM and DDR RAM, whereas nowadays memory runs at 1800 MHz and higher.
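
Putting numbers on that frequency jump (treating the modern 1800 MHz figure as a memory clock with data on both edges is my own illustrative reading, not an exact spec):

```python
# Clock period comparison: old SDRAM vs. a modern DDR-class interface.
# 166 MHz is picked from the 133-200 MHz range above; 1800 MHz with data on
# both clock edges is an illustrative modern example, not an exact spec.

def period_ns(freq_mhz: float) -> float:
    """Clock period in nanoseconds for a given frequency in MHz."""
    return 1000.0 / freq_mhz

old_clock_mhz = 166.0
new_clock_mhz = 1800.0

print(f"SDRAM @ {old_clock_mhz:.0f} MHz: clock period {period_ns(old_clock_mhz):.1f} ns")
print(f"DDR   @ {new_clock_mhz:.0f} MHz: clock period {period_ns(new_clock_mhz):.2f} ns, "
      f"bit time ~{period_ns(new_clock_mhz) / 2:.2f} ns (data on both edges)")
print(f"Timing window is roughly {period_ns(old_clock_mhz) / (period_ns(new_clock_mhz) / 2):.0f}x smaller now")
```

The window every signal has to land in is roughly twenty times smaller than it used to be.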

Everything has to be more precise and of better quality compared to how things were back then. 

 


Some of the Dell Precision workstations used processor cards for the second CPU, but the CPU is still socketed. I could never understand why they did this. Sure, it saves space (though these workstations are monsters, so space was clearly not the biggest concern), but it adds a whole host of issues.

 

IMO, the sockets and pins used for microprocessors aren't that fragile, and adding processor cards like this makes an already hard layout a hell of a lot harder. You have to keep in mind that we're dealing with 6-8 layer boards (or is it more now?) and 1000+ pin BGA packages with tight spacing. To make everything harder, we're dealing with frequencies that would traditionally fall well into the microwave range, so the layout is extremely critical to avoid timing errors and crosstalk issues. To add insult to injury, CPUs draw a lot of current (IIRC, it's in the 100 A range), so you've got that to worry about as well.
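
Just to show how nasty the ~100 A part is on its own (the 1 milliohm connector resistance below is a number I picked for illustration, not a datasheet value):

```python
# Back-of-the-envelope look at pushing ~100 A through a card-edge connector.
# The connector resistance is an assumed illustrative value, not a real spec.

CPU_CURRENT_A = 100.0              # order of magnitude mentioned above
CONNECTOR_RESISTANCE_OHM = 0.001   # assumed: 1 mOhm total across the power pins

voltage_drop_v = CPU_CURRENT_A * CONNECTOR_RESISTANCE_OHM   # V = I * R
heat_w = CPU_CURRENT_A ** 2 * CONNECTOR_RESISTANCE_OHM      # P = I^2 * R

print(f"Voltage drop across the connector: {voltage_drop_v * 1000:.0f} mV")
print(f"Heat dissipated in the connector:  {heat_w:.0f} W")
```

That's 100 mV of droop against a core rail of roughly a volt, plus 10 W of heat concentrated in the connector itself, before you even start on the signal integrity problems.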


I don't do high-speed digital work, but I've designed my fair share of high-speed amplifiers, and a couple of millimeters of trace here or there can be enough to make or break a design. A couple of picofarads of stray capacitance can be the difference between meeting spec and being an unstable mess.
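
For a sense of scale on the "couple of picofarads" point (the 50 ohm node impedance is my assumption for the example):

```python
import math

# What a couple of picofarads of stray capacitance does to a 50 ohm node.
# The 50 ohm impedance is an assumed value for illustration.

R_OHM = 50.0
STRAY_C_F = 2e-12   # 2 pF of stray capacitance

# Single-pole RC roll-off: f_3dB = 1 / (2 * pi * R * C)
f_3db_hz = 1.0 / (2.0 * math.pi * R_OHM * STRAY_C_F)

# Common rule of thumb linking bandwidth and 10-90% rise time: t_r ~ 0.35 / BW
rise_time_s = 0.35 / f_3db_hz

print(f"-3 dB bandwidth: {f_3db_hz / 1e9:.2f} GHz")
print(f"Implied rise-time limit: {rise_time_s * 1e12:.0f} ps")
```

At multi-gigahertz signalling rates, that one stray pole can eat most of your timing budget by itself.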

 

If you still think processor cards are a good idea, go read Linear Technology Application Note 47. Now keep in mind that we're dealing with rise/fall times much faster than the ones Jim Williams was working with in that app note. There are a lot of things that become a lot harder when you add processor cards like this.

 

Also, this is a ridiculous number of pins; lots of CPUs have over 1000 pins. Okay, maybe you can eliminate some on the card-to-board connector, but you're still dealing with a several-hundred-pin connector. No thank you!

 

