All-in-one motherboard

Why don't motherboard manufacturers make GPU sockets like the ones for CPUs, plus another slot for VRAM?

That way we could have very customizable computers that are easier to cool, and they would be smaller too!

You want a PGA/LGA GPU?

Not only are there more pins, but the RAM is different on different GPUs; it costs more, and it's less reliable.

Because it would take up needed space, and PCIe is already a great standard.

Current System: CPU - I5-6500 | Motherboard - ASRock H170M-ITX/ac | RAM - Mushkin Blackline 16GB DDR4 @ 2400MHz | GPU - EVGA 1060 3GB | Case - Fractal Design Nano S | Storage - 250GB 850 EVO, 3TB Barracuda | PSU - EVGA 450W 80+ Bronze | Display - AOC 22" 1080p IPS | Cooling - Phanteks PH-TC12DX_BK | Keyboard - Cooler Master QuickFire Rapid (MX Blues) | Mouse - Logitech G602 | Sound - Schiit Stack | Operating System - Windows 10

 

The OG System: I3-2370M @ 2.4 GHz, 750GB 5400 RPM HDD, 8GB RAM @ 1333MHz, Lenovo Z580 Laptop (Ubuntu 16.04 LTS).

 

Peripherals: G602, AKG 240, Sennheiser HD 6XX, Audio-Technica 2500, Oneplus 5T, Odroid C2(NAS).

The way I think of it: since PCIe is a universal standard, manufacturers could make a new standard that would be more scalable. Forget LGA/PGA, make a new one if those don't work. The same goes for memory: all processors run on DDR RAM, so there could be a universal standard for memory too.

4 minutes ago, Keybraker said:

The way I think of it: since PCIe is a universal standard, manufacturers could make a new standard that would be more scalable. Forget LGA/PGA, make a new one if those don't work. The same goes for memory: all processors run on DDR RAM, so there could be a universal standard for memory too.

But why would this "new standard" be more scalable?

 

A new one? What do you propose? How do you plan on connecting the GPU to the motherboard in a way that's superior to LGA/PGA?

 

Memory can't be standardized. CPUs have different needs than GPUs -- CPUs need low latency and GPUs need high bandwidth. Also, HBM is potentially going to become a thing, where the memory ends up on the GPU package.

 

Even in this "better" world of yours, you would still need to upgrade the GPU core and the VRAM whenever you upgraded, and you would also need separate coolers for each. How is that superior to what we have now, where everything is on a single replaceable card?

PSU Tier List | CoC

Gaming Build | FreeNAS Server


i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400MHz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core


FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.

For example, with what you say about HBM memory and everything ending up in one package, it would be as easy to change a GPU as it is a CPU.

My question is: why not create a standard for GPUs that is purely optimized for the GPU? I understand that the GPU is well served by the speed of PCIe, but it would make the motherboard's height smaller and make the cooling easier to upgrade and change.

This is just a thought I had about motherboard design.

2 minutes ago, Keybraker said:

For example, with what you say about HBM memory and everything ending up in one package, it would be as easy to change a GPU as it is a CPU.

It's already that easy. 

Quote

My question is: why not create a standard for GPUs that is purely optimized for the GPU? I understand that the GPU is well served by the speed of PCIe, but it would make the motherboard's height smaller

The height of a GPU isn't an issue for 99% of users, because other components in the case already force the case to be large enough to accommodate a big GPU (not to mention you still need to accommodate every other PCIe card). And for the other 1% of users, there are PCIe risers and half-height cards.

Quote

and make the cooling easier to upgrade and change.

To what end? You don't need to change the cooler on most GPUs -- just buy a good cooler from day one. 

You are probably right, but there is something I really like about that idea.

Because, for example, you would have an almost flat mobo, and you could buy a GPU with whatever memory you would like; if you wanted more, you could just add it. If the GPU core was the problem, you could change that and keep the memory, and if you wanted to overclock, say, and buy a new cooler, you could.

5 minutes ago, Keybraker said:

Because, for example, you would have an almost flat mobo

You can already have that; just buy a PCIe riser.

5 minutes ago, Keybraker said:

you could buy a GPU with whatever memory you would like; if you wanted more, you could just add it.

Having the memory so far away, and not directly wired to the core, would add a ton of latency, which would be bad. Plus, a lot of cards can't handle more VRAM; they're limited by design.
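
To put a rough number on the distance alone, here's a quick back-of-the-envelope sketch in Python, assuming signals travel at about half the speed of light in PCB material (~15 cm/ns, a ballpark figure):

# Extra round-trip flight time from moving VRAM farther from the GPU core.
signal_speed_cm_per_ns = 15      # ~c/2 in FR4, assumed ballpark
extra_distance_cm = 10           # memory in a slot instead of beside the die (assumption)
one_way_ns = extra_distance_cm / signal_speed_cm_per_ns
print(round(2 * one_way_ns, 2))  # ~1.33 ns added per round trip

And that's just the flight time, before the signal-integrity hit from the slot connector itself, which is what really forces the clocks down.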

5 minutes ago, Keybraker said:

If the GPU core was the problem, you could change that and keep the memory

Memory isn't the expensive part of the GPU though. 

5 minutes ago, Keybraker said:

if you wanted to overclock, say, and buy a new cooler, you could.

But you could just buy a good cooler from day one. It's not like a GPU with a good cooler is much more expensive than one with a shitty cooler. Pascal, for example, barely benefits at all from a top-tier cooler compared to a single-fan mini cooler.

If you mean "why can't someone invent a socket for GPU chips, and memory slots for video card memory, so you could plug in a new GPU chip without replacing the rest of the video card, or add more memory at a later time"... this isn't possible, for several reasons.

 

GPU chips use wide bus connections to transfer data between the memory and the chip. Most video cards have either a 128-bit or a 256-bit memory bus, which means that for each of those bits there need to be two wires between the GPU chip and the memory, plus a lot of other signal wires for each memory chip. So for a 256-bit video card, there are probably at least 600 wires connecting the 8 memory chips to the GPU chip.
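
If you want to sanity-check that wire count, here's a rough back-of-the-envelope calculation (the per-chip signal counts are assumptions for illustration, not exact GDDR5 figures):

# Estimate the total wires for a 256-bit card with 8 memory chips.
chips = 8
data_bits_per_chip = 32             # 8 chips x 32 bits = 256-bit bus
wires_per_data_bit = 2              # as estimated above
other_signals_per_chip = 40         # address/command/clock etc., assumed
total_wires = chips * (data_bits_per_chip * wires_per_data_bit + other_signals_per_chip)
print(total_wires)                  # 832, in the same ballpark as "at least 600"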

 

Compare that with your processor: it has a DDR3 or DDR4 memory controller, and each channel is 64 bits wide, so fewer than about 150 wires are needed to connect one memory channel (two memory slots) to the processor. A DDR3 memory stick has 240 pins and a DDR4 stick has 288, but a lot of those are for power.

To make memory sticks for your video card, the sanest design would be sticks that are 128 bits wide, so you would install one stick for a 128-bit video card and two sticks for a 256-bit one. However, such sticks would need around 400-600 pins, and you can imagine they'd be almost twice as wide as current memory sticks in order to fit all those pins on both sides of the circuit board.
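
Putting numbers on that hypothetical stick (the non-data pin count is an assumption):

# Pin count for a hypothetical 128-bit VRAM stick vs. a real DDR4 stick.
ddr4_pins = 288                          # a real 64-bit DDR4 stick
vram_data_pins = 128 * 2                 # two wires per bit, per the estimate above
vram_other_pins = 150                    # address/command/clock/power/ground, assumed
vram_pins = vram_data_pins + vram_other_pins
print(vram_pins)                         # ~406 pins, within the 400-600 range above
print(round(vram_pins / ddr4_pins, 2))   # ~1.41x the pins of a DDR4 stick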

And that's what complicates things.

Think about why motherboards always put the memory slots in the same spot, as close to the processor as possible, just far enough away to clear the area where the CPU cooler goes. The further the memory chips are from the processor, the harder it is to transfer data between them at very high speeds.

Length is actually so important that, if you look carefully at a motherboard, you will see that the 200+ wires going from a memory slot to the processor are routed on the circuit board so that, if you measured them, they would all have the same length. Even 1 mm extra on one wire can make a memory slot unable to reach certain speeds.
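
Here's why a single millimetre matters, as a quick sketch (the propagation speed in FR4 is an assumed typical value):

# Timing skew caused by 1 mm of extra trace length.
mm_per_ns = 150                   # ~15 cm/ns signal speed in FR4, assumed
skew_ps = 1000 / mm_per_ns        # ~6.7 ps of extra delay per mm (1000 ps per ns)
ddr4_bit_ps = 1e6 / 3200          # 312.5 ps bit period at 3200 MT/s
print(round(skew_ps / ddr4_bit_ps * 100, 1))   # ~2.1% of a bit period lost per mm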

Now think how hard it would be to arrange not 200 but 400-600 wires, spread even wider across the circuit board. Right away you can imagine that a wire routed directly from the first pin of such a wide slot would be much longer than a wire from the center of the slot to the processor, so the board designers would have to add lots of wiggles and bends to the wires closer to the center of the slot to make them longer, so that in the end all those hundreds of wires have the same length.

Now think about the frequencies used by DDR3 and DDR4: they're barely reaching 3200 MHz, maybe a bit more, and with great difficulty. In contrast, GDDR5 is already at 7 Gbps per pin (let's say 7 GHz) and GDDR5X is at about 10 Gbps per pin (let's simplify and say 10 GHz), with the aim of reaching 12 GHz.
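
Those per-pin rates translate into very different timing budgets; here's a quick comparison using the round numbers above:

# Bit periods at the quoted per-pin data rates.
for name, gbps in [("DDR4-3200", 3.2), ("GDDR5", 7), ("GDDR5X", 10), ("GDDR5X target", 12)]:
    print(name, round(1000 / gbps, 1), "ps per bit")
# DDR4-3200: 312.5 / GDDR5: 142.9 / GDDR5X: 100.0 / target: 83.3

At 10 Gbps there are only 100 picoseconds to get each bit across, so every picosecond of skew from longer traces eats directly into that window.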

It would simply be impossible to reach these very high frequencies with memory slots that spread those pins so wide across a video card. Just look at modern video cards and see how everyone arranges the memory chips around the GPU so that the wires between each chip and the GPU are as short as possible.

The shorter the wires, the less power is required to transfer data, and the higher the frequencies that can be reached.

 

Here's a 352-bit GTX 1080 Ti (a 384-bit design with one memory chip disabled and not installed). Look at how many wires are under each memory chip (those tiny silver dots); each potential memory slot would need at least four times as many pins, so imagine how wide those memory sticks would be.

Look at how the wires go from each chip to the GPU sitting in the center, how they bend and wiggle so that all the wires end up the same length, and how close the chips are to the GPU in order to reach that 10 GHz frequency. Then imagine how difficult it would be if all 3 sets of 4 memory chips had to sit on one side of the card; if not impossible, it would make the card very expensive:

 

[Photo: the front of the GTX 1080 Ti circuit board, with the GPU in the center and the memory chips arranged around it]
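
For scale, here's the memory bandwidth this layout delivers, using the simplified 10 Gbps per-pin figure from above:

# Bandwidth of a 352-bit bus at 10 Gbps per pin.
bus_bits = 352
gbps_per_pin = 10
print(bus_bits * gbps_per_pin / 8)   # 440.0 GB/s

A slotted, spread-out layout that couldn't sustain those per-pin speeds would give all of that up.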

 

 

And last but not least, slots would only add another failure point and another reason for returns. Someone doesn't push a memory stick in fully: RMA the board. Some contacts in the memory slot oxidize, making a poor connection, and you get random crashes and random errors: another RMA.

 

Later edit: also, any socket adds contact resistance, between the chip and the socket and between the memory stick and the slot. That extra resistance means more power has to be used, and it lowers the potential for high frequencies. A GPU chip soldered to a board has nearly perfect, very low-resistance connections to the other components.
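
Here's a toy example of how that contact resistance adds up (every number is assumed, purely for illustration):

# Power wasted in socket contacts: P = I^2 * R per pin, summed over the pins.
power_pins = 200            # pins assumed to carry supply current
total_current_a = 200       # total current into the GPU core, assumed
r_contact_ohm = 0.005       # ~5 milliohms per mated contact, assumed
i_per_pin = total_current_a / power_pins
print(power_pins * i_per_pin**2 * r_contact_ohm)   # ~1.0 W lost in the socket alone

On top of the wasted watt, that resistance shows up as voltage droop exactly when the chip pulls the most current.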

 

Also, GPU chips work at low voltages, roughly as low as 0.6 V and as high as 1.3-1.5 V, but can draw up to 200-250 watts for short periods of time. That's hundreds of amps, which means you'd need far more pins on the bottom of the GPU chip to safely carry that current, compared to the number of contacts soldered directly to the circuit board today. Sockets also can't handle as much heat, so it would be difficult to have the GPU chip running at 90 degrees Celsius, as some cards do now; that much heat between the chip and the circuit board would damage the socket.
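
Putting numbers to "hundreds of amps" (the per-contact current limit is an assumption):

# Supply current for a GPU core at low voltage: I = P / V.
power_w = 250
core_voltage_v = 1.0         # somewhere in the 0.6-1.5 V range above
current_a = power_w / core_voltage_v
amps_per_contact = 0.5       # safe current per socket contact, assumed
print(current_a)                       # 250.0 A
print(current_a / amps_per_contact)    # ~500 contacts needed just for power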

Really great answer, thank you for taking the time to answer the exact question I asked!
