
How Motherboards Work - Turbo Nerd Edition

AlexTheGreatish

Linus was on vacation and we went full nerd.

 

More about VRMs: https://youtu.be/oDRHV3qtSWc

More about Memory Topology: https://youtu.be/3vQwGGbW1AE

 

Buy AMD X570 Motherboards:

On Amazon (PAID LINK): https://geni.us/9Agc

On Newegg (PAID LINK): https://geni.us/XZGR

On B&H (PAID LINK): https://geni.us/5Doxt

 

Buy Intel Z590 Motherboards:

On Amazon (PAID LINK): https://geni.us/Ym2F

On Newegg (PAID LINK): https://geni.us/Pr5V

On B&H (PAID LINK): https://geni.us/9OyWS

Purchases made through some store links may provide some compensation to Linus Media Group

 

 


Very cool. Not the VAG drop I thought I'd finally caught, but definitely a solid Anthony (and Alex) video.


5 minutes ago, Dan9 said:

Very cool, not the VAG drop I thought I finally caught, but definitely a solid Anthony (and Alex) video.

You forgot James, he was in the vid too.


Largely "correct" video, but a few inaccuracies here and there sprinkled in.

 

The animation at 1:40 is a bit abhorrent, as the blips indicating current flow run back and forth like maniacs. This isn't in line with how the circuit would actually operate and could confuse viewers.

 

VRMs have some other intricacies that could be interesting to note, like why the socket has a Vsense pin: contact resistance in the socket, trace resistance across the board, and sometimes even resistance on the CPU's own PCB can introduce voltage drops that are too large for stable operation. I.e., the Vsense pin is used to measure the supplied voltage closer to the load. (Voltage sense pins are actually fairly common in high-current applications.) One could also go into transient response, and the effects of too much bulk capacitance in regard to load release, but these are more in-depth topics.
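To put rough numbers on why remote sensing matters, here's a minimal sketch in Python; the resistance and current values are assumed, plausible orders of magnitude, not measured figures for any real board:

```python
# A minimal sketch of why remote voltage sensing (Vsense) exists.
R_PARASITIC = 0.0005  # ohms: socket contact + trace resistance (assumed value)

def voltage_at_die(v_regulator: float, current_a: float,
                   r_parasitic: float = R_PARASITIC) -> float:
    """Ohm's law: the die sees the regulator's output minus the IR drop."""
    return v_regulator - current_a * r_parasitic

# At 200 A, even half a milliohm costs 100 mV, which is a huge error when the
# core voltage is ~1.2 V. Sensing at the socket lets the VRM compensate.
print(voltage_at_die(1.20, 200.0))  # -> 1.1
```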

 

The statement at 4:50 that "the signal can only go this far, so this is physically impossible. RAM makers have lied to you" conflates two different things.

It is important to note the difference between the wavelength of a signal at a given propagation speed in a medium, and the maximum distance a signal can travel in said medium.
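For a sense of scale, a quick back-of-the-envelope calculation, assuming signals propagate at roughly half the speed of light in FR4 (a common rule of thumb, not a measured value):

```python
C = 3.0e8          # speed of light in vacuum, m/s
V_PROP = 0.5 * C   # assumed propagation speed in the PCB dielectric, m/s

def wavelength_cm(freq_hz: float) -> float:
    return V_PROP / freq_hz * 100.0

# A 1.6 GHz clock has a ~9 cm wavelength in FR4, comparable to trace lengths
# on a motherboard, but the signal itself keeps propagating well beyond one
# wavelength; attenuation, not wavelength, limits its useful reach.
print(f"{wavelength_cm(1.6e9):.1f} cm")  # -> 9.4 cm
```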

And the description of "Double Data Rate" is also a bit incorrect: the data is actually clocked at the stated frequency, but the reference clock runs at half of that frequency. In some applications Quad Data Rate is a thing, where the data is clocked 4 times faster than the reference clock. There are practical reasons for running the clock at a lower frequency: one is EMI, another is crosstalk, and a third is to increase the signal integrity of the clock itself, since broken data can be recovered through various error correction schemes (something that is standard in DDR5).
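To make the clock-vs-transfer-rate relationship concrete, a tiny sketch (DDR4-3200 numbers are used purely as an illustration):

```python
def transfer_rate_mts(ref_clock_mhz: float, transfers_per_cycle: int) -> float:
    """Effective MT/s = reference clock x transfers per clock cycle."""
    return ref_clock_mhz * transfers_per_cycle

print(transfer_rate_mts(1600, 2))  # DDR: 3200 MT/s from a 1600 MHz clock
print(transfer_rate_mts(800, 4))   # QDR: 3200 MT/s from an 800 MHz clock
```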

 

The statement "DDR means that data is sent at the begging and end of each individual Hz" is almost face palm worthy. Correct thing to say is at the beginning and end of each clock Cycle. (At least DDR isn't a QAM modulated channel, since then we would be talking about Symbol rate, and this is a can of worms that only RF engineers cares to venture down.)

 

Not to mention that here it would have been more interesting to talk about trace length matching and the intricacies of matching up the lengths of 64 data lines, plus a whole bunch of address and control lines as well. And how DDR5 has moved to two sub-channels that are each only 32 bits wide, to make length matching easier and allow higher clock speeds in the future.
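For a feel of how tight length matching gets, a rough skew-budget sketch, assuming ~170 ps of delay per inch on FR4 (a rule of thumb, not a datasheet figure):

```python
PS_PER_INCH = 170.0  # assumed propagation delay per inch of FR4 trace

def skew_ps(length_mismatch_inch: float) -> float:
    """Timing skew introduced by a trace length mismatch."""
    return length_mismatch_inch * PS_PER_INCH

# At 3200 MT/s one unit interval is 1 / 3.2e9 s = ~312 ps, so a mere 0.1"
# of mismatch already eats ~17 ps of that budget, across dozens of lines.
print(f"{skew_ps(0.1):.0f} ps")  # -> 17 ps
```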

 

The video is overall of decent quality, though it leaves a lot to be desired. (I am, however, heavily biased and therefore critical, being an electrical engineer myself with a particular focus on hardware-level computer architecture design.)

 

Though I do also understand that one can't really take a deep dive into the more engineering side of things, since it rather quickly becomes boring.


Honestly, having extra PCIe x16 slots (that don't get disabled when some other slots are populated) is a decent value add, as that makes for decent future expandability (like more USB connections, faster WiFi/Ethernet, or more NVMe slots).


SLI may be dead, but there are definitely use cases for having multiple GPUs in one system, and I don't just mean mining. F@H, professional rendering, and encoding/transcoding tasks have been running on graphics hardware for years. And these aren't single-tasking applications either: you can effectively game and fold at the same time when you have a second GPU in there.

 

Consider also multi-monitor applications that need more than four outputs from your GPU; with Windows, at least, you can have a maximum of 32 display outputs. Then at the other end of the spectrum you have headless cards with no display output at all: pure compute boards meant for nothing but number crunching. GPU servers in particular can have as many as 20 GPUs installed in the same system, and with the beauty of virtualization, have them assigned in whatever number to whatever OS the job needs.
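For what it's worth, splitting workloads across specific GPUs doesn't even need a VM: CUDA's standard CUDA_VISIBLE_DEVICES environment variable restricts each process to the GPUs you choose. A minimal sketch (the worker command is a hypothetical placeholder, not a real tool):

```python
import os
import subprocess

def launch_on_gpu(cmd: list[str], gpu_index: int) -> subprocess.Popen:
    """Start a worker process that can only see the given GPU."""
    env = os.environ.copy()
    # CUDA_VISIBLE_DEVICES is a real CUDA mechanism: the process sees only
    # the listed GPU(s), renumbered starting from 0.
    env["CUDA_VISIBLE_DEVICES"] = str(gpu_index)
    return subprocess.Popen(cmd, env=env)

# Hypothetical example: run compute workers on GPUs 1-3 while GPU 0 stays
# free for gaming. "./fold_worker" is a placeholder command.
workers = [launch_on_gpu(["./fold_worker"], gpu) for gpu in (1, 2, 3)]
```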

 

In my case I just enjoy having four GPUs in my rig, and the flexibility to use them however I want. I can fold on three while I game on my main card. If I accidentally disable my main card in Device Manager, I can plug into the next one and get my display back (that actually happened to me). And if one card bites the dust I don't have to run out and get a replacement; good luck finding a GPU now if what you have suddenly dies. Never mind the fact that having four water-cooled GPUs just looks badass and screams performance.

 

Fun fact: the Toshiba X305 laptop came with THREE Nvidia GPUs inside: one dedicated to 2D work and two running in SLI for 3D/gaming.


15 hours ago, plurus said:

Honestly, having extra PCIe x16 slots is a decent value add, as that makes for decent future expandability

True, but I'd still file this under "lies we tell ourselves when we upgrade". Speaking for myself, I build a new machine every 5 years or so, and I definitely told myself this lie for the last two at the very least. In 2014 I was telling myself that "multi-gigabit LAN" or "NVMe storage" were things I might want in the next few years. But other than going from 8 to 16GB of RAM, I didn't actually upgrade my 2014 PC at all.

 

Most of the expansion I think I might want a couple of years into the future seems to be built into motherboards by the time I actually upgrade. 2.5Gbps LAN is starting to become common on motherboards, NVMe is pretty much the default for storage, and if you go ITX in a new build, odds are you'll have WiFi 6 built in. Any other expansion can be done over USB with pretty much zero downsides.

If I'm being honest with myself, the only reason not to go ITX is that mATX boards don't include WiFi, which I don't need, and are therefore cheaper. That's it.

Fools think they know everything, experts know they know nothing


On 3/29/2021 at 6:40 AM, skywake said:

True, but I'd still file this under "lies we tell ourselves when we upgrade". [...]

The extra NVMe slot is one that's not just "lies we tell ourselves". Motherboards largely come with two NVMe slots at most, and something like the Asus expansion card lets you add more NVMe drives without stepping down to slower SATA (plus it's much easier to swap SSDs in the expansion card than on the motherboard).

