
We ACTUALLY downloaded more RAM

James

Buy G.SKILL Ripjaws V 16GB (2x8GB) DDR4 3200

Amazon: https://geni.us/zpntB

Newegg: https://geni.us/NXl4lb

 

Buy G.SKILL Trident Z Neo 32GB (2x16GB) DDR4 3600

Amazon: https://geni.us/0mPIQ1

Newegg: https://geni.us/vky1v7I

 

Buy Kingston FURY Renegade 32GB (2x16GB) DDR4 3600

Amazon: https://geni.us/zAZ2P5

Newegg: https://geni.us/zAZ2P5

 

It is no longer just a joke. You can download more RAM! Join us on this foolish escapade to make a system with unholy amounts of memory.

 

 

 

 


> Used SMB as a network share to put swap on it

> A lot of people recommend disabling swap

 

LTT needs to hire some people who know what they are doing, instead of coming up with video ideas and doing poor research on them. This video is absolutely full of terrible practices.


2 minutes ago, kumicota said:

> Used SMB as a network share to put swap on it

> A lot of people recommend disabling swap

 

LTT needs to hire some people who know what they are doing, instead of coming up with video ideas and doing poor research on them. This video is absolutely full of terrible practices.

I also want to add the comparison of RAM+swap size versus just having a large amount of RAM. Treating those as equivalent, without knowing why they aren't, just adds to my point.


To be fair.

On my Windows workstation I turned off all paging on the system drives, since Windows has this abhorrent tendency to swap stuff out to slower storage if one leaves an application in the background for even a few minutes, regardless of how much of one's memory one actually uses. It becomes rather silly to have to wait seconds for an application to be read back in, and sometimes an application crashes during the process... (So far I have only managed to crash my computer twice in about 4-5 years of running at a "measly" 32 GB of RAM, and now with 64 GB I have only done it once more. Simulating fur in Blender is memory intensive!)

 

So far I haven't noticed many swap-related issues on my Ubuntu machine, so maybe Linux is less eager to throw stuff to disk just because one hasn't touched it in a bit.

 

But logically, if one has a good amount of high-speed memory, then a swap space shouldn't be a bad idea as a backup for when memory starts to overfill. If memory fills up too suddenly, though, there won't be time to shuffle anything away regardless, so a crash is more or less imminent if one gets close to 100% RAM utilization and isn't careful. And as stated in the video, RAM isn't particularly expensive (and most users don't really use a lot of RAM to start with).
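For anyone who wants that safety net on Linux, adding a swap file takes only a few commands. A sketch with an arbitrary path and a deliberately tiny 64 MB size for demonstration (real swap files are usually a few GB; running mkswap on a plain file needs no root, but swapon does):

```shell
# Create the backing file and restrict its permissions.
dd if=/dev/zero of=/tmp/demo.swap bs=1M count=64 status=none
chmod 600 /tmp/demo.swap           # swap files must not be world-readable
mkswap /tmp/demo.swap              # write the swap signature
# sudo swapon /tmp/demo.swap       # enable it (root required)
# To persist across reboots, add a line like this to /etc/fstab:
# /tmp/demo.swap none swap sw 0 0
```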


9 minutes ago, kumicota said:

> Used SMB as a network share to put swap on it

> A lot of people recommend disabling swap

 

LTT needs to hire some people who know what they are doing, instead of coming up with video ideas and doing poor research on them. This video is absolutely full of terrible practices.

I don't think they're actually recommending anyone do it. It's more of a joke video. Regardless, if you look into some details here

 

 

you can see that GDrive/SMB as swap is potentially reasonable, since the rotational latency of spinning rust is surprisingly high.

I'm actually glad they did this video as I expect it to be very informative.

PLEASE QUOTE ME IF YOU ARE REPLYING TO ME

Desktop Build: Ryzen 7 2700X @ 4.0GHz, AsRock Fatal1ty X370 Professional Gaming, 48GB Corsair DDR4 @ 3000MHz, RX5700 XT 8GB Sapphire Nitro+, Benq XL2730 1440p 144Hz FS

Retro Build: Intel Pentium III @ 500 MHz, Dell Optiplex G1 Full AT Tower, 768MB SDRAM @ 133MHz, Integrated Graphics, Generic 1024x768 60Hz Monitor


 


6 minutes ago, kumicota said:

> Used SMB as a network share to put swap on it

> A lot of people recommend disabling swap

 

LTT needs to hire some people who know what they are doing, instead of coming up with video ideas and doing poor research on them. This video is absolutely full of terrible practices.

I do think it's worth noting that they specifically mention swap isn't recommended. I am a bit curious, though, who the writer for the video was, since it's presumably someone on probation given that the writer credit says "sentient boooooger".


Swap file settings are quite an interesting discussion; I think a lot of people tend to have quite outdated views on them.

 

Back when PCs had less than ~4GB of RAM, swap was quite important for preventing out-of-memory crashes (even though thrashing on a mechanical drive made the system unresponsive anyway).

When PCs started having 8-16GB of RAM but tiny SSDs, disabling swap made sense, since a home user was unlikely to run out of RAM but might struggle for SSD space.

I ran into issues with a PC that had 32GB of RAM and a 40GB SSD, since Windows would basically eat the whole drive for swap space (previous versions of Windows assigned 50-200% of the RAM size as swap space).

Now that SSDs are much larger (and affordable), the swap setting has become a bit irrelevant; most of the time you don't encounter any issues with it disabled, but some software refuses to run without it.

 

Regarding the main theme of this topic, I would be interested to see if this would work on a really old system. A system still using SDRAM might be limited to 256MB or 512MB of RAM, but you could still fit a gigabit network adapter via PCI. Running a SATA SSD via an IDE-to-M.2-SATA adapter would probably make more sense, though.


1 hour ago, Kizoto said:

Now that SSDs are much larger (and affordable), the swap setting has become a bit irrelevant; most of the time you don't encounter any issues with it disabled, but some software refuses to run without it.

Yep. Hibernate doesn't really even exist anymore; I think it's disabled by default in favor of "Fast Startup". Why hibernate the entire system when it no longer takes long to load anything?



The video is a good refresher on how the memory hierarchy works, and it is fairly accurate for consumer systems, but things are a bit different at the top of the computing world. As always, stuff that first appears at the top tends to trickle down as time goes on.

 

IBM high-end server/mainframe hardware purchasing and licensing: you buy the hardware, and then you need a license to actually enable it for use. So download a license key and enable more of the memory. Woah, the video title came true! Bonus: these systems support hardware-accelerated memory compression and encryption. Intel has toyed with such things in the past, like enabling additional L2 cache on some Celerons via license key. Similar in concept was the VROC functionality on Skylake workstations (socket 2011/3647). While not a secondary license, Intel notoriously did artificially limit how much memory could be attached to a Skylake-SP Xeon based upon its SKU, forcing server owners to pay up more if they wanted that capacity unlocked.

 

High-end clustered servers are migrating to a flat memory space where memory in one system can be directly addressed by a remote node using RoCE. This is a bit different from paging, as RAM in the remote systems sits a bit higher up in the hierarchy, effectively another tier in a NUMA topology. While still over a network, the RoCE protocols remove much of the software stack that adds latency on both ends of the connection. Bandwidth is of course dependent on the network topology, but RoCE typically runs over 100 Gbit Ethernet, so bandwidth is not terrible and can be increased by adding additional 100 Gbit links.

 

At the very top, instead of RoCE, some networks are dropping Ethernet entirely and moving towards PCIe. This replaces the traditional network switch with large PCIe switches linked together. The advantage here is massive bandwidth and far lower latency than Ethernet. The trade-off is scalability and cost, due to the expense of PCIe switch chips and cabling and the limited number of devices that can be interlinked. While this type of topology is rarely deployed today even in the server niche, it should appear more frequently with PCIe 5.0 and the adoption of CXL (and similar technologies) that put memory directly on the PCIe bus. This would permit dedicated memory boxes to sit alongside otherwise normal servers to expand memory capacity. Other features like hot memory add and hot host processor node additions become possible to increase capacity on demand.

 

The last thing to really upset the memory hierarchy in the server space has been non-volatile memory. Intel's Optane DIMM technology is the premier version of this, allowing high-capacity Optane modules to sit on the memory bus and be addressed as an extension of DRAM. This is vastly faster than even paging to NVMe SSDs in both latency and bandwidth, though slower than DRAM. The three main advantages are higher capacity, lower cost than DRAM, and the fact that it survives a system shutdown. Personally, I'm surprised that Intel has kept this technology exclusive to the server market as a DRAM alternative, as it would have tangible benefits in the consumer space (it has only reached consumers as an NVMe drive). Longer battery life, more main memory, and instant resume are some immediate potential benefits, even if performance took a hit. A large L3 or even an eDRAM L4 cache would greatly offset the cost of using Optane as main memory on the consumer side.

 


So this is pretty much the opposite of taking RAM and setting a chunk of it up as a separate RAM disk, something that has been done in the past when super-fast storage was necessary. I actually have one working on my Windows machine right now, just to test. Except that today you can get that super-fast storage from just a spare SSD, and especially an Optane drive.

 

In my case I moved the Windows page file from my boot SSD, which is running low on space, to a second fast drive, as it was constantly filling up. On a system with 32GB of RAM that would eat up to 64GB of your SSD, but on a system with 128GB of RAM you're looking at 256GB of storage space lost. That's significant, and back when I was building my system, when 128GB RAM kits were becoming a thing, a 2TB SSD was still an eye-watering four-digit purchase.

 

Thankfully prices have gone down on both RAM and SSDs, so it probably wouldn't break the budget to have a reasonable 32GB of RAM and a 2TB SSD. The issue today is M.2, since the form factor limits how big the SSD can go compared to a 2.5-inch SATA drive. With some motherboards only giving you a single M.2 slot, you'll most definitely need a 2.5-inch SSD as well to make it all work nicely.


Back in ye olde Mac System 7 days, you could do the opposite of a swap file, and use physical RAM as a virtual disk. (Much like tmpfs on Linux.) You could also specify which physical drive your virtual memory swap file was stored on, if you chose to use virtual memory at all.

 

Some galaxy-brained genius users would try to have their cake and eat it too, by creating a RAM disk and dedicating it to virtual memory swap. You can imagine how well it worked.

 

At least nowadays, you can dedicate an NVMe SSD as a swap disk if you really want to.
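For the record, on Linux that comes down to mkswap, swapon, and an fstab entry. A sketch only: /dev/nvme1n1p1 is a hypothetical device name, so check yours with lsblk first, and note that mkswap destroys whatever is on the partition:

```shell
sudo mkswap /dev/nvme1n1p1          # write a swap signature (destroys existing data!)
sudo swapon -p 10 /dev/nvme1n1p1    # enable it; higher priority is used before lower
# /etc/fstab line to persist it across reboots:
# /dev/nvme1n1p1  none  swap  sw,pri=10  0  0
```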

I sold my soul for ProSupport.


I feel like this video is huge clickbait even by LTT standards. It's not useful, exciting, or innovative; the only purpose of this video was to generate an absurd-sounding title to lure viewers and fill a slot in the schedule.


1 hour ago, rcmaehl said:

Yep. Hibernate doesn't really even exist anymore; I think it's disabled by default in favor of "Fast Startup". Why hibernate the entire system when it no longer takes long to load anything?

Correct. For those who don't know, the difference is that hibernate dumps the entire memory to disk, while Fast Startup only dumps Windows itself and the state of some apps that were modified to support it.


1 hour ago, Luscious said:

So this is pretty much the opposite of taking RAM and setting a chunk of it up as a separate RAM disk, something that has been done in the past when super-fast storage was necessary. I actually have one working on my Windows machine right now, just to test. Except that today you can get that super-fast storage from just a spare SSD, and especially an Optane drive.

 

 

Paging/swap has been around in the PC space in some form since the '80s, as have RAM disks. Most people don't tune these features like they did decades ago. In fact, with more modern hardware it has become increasingly difficult to use a RAM disk properly.

 

1 hour ago, Luscious said:

In my case I moved the Windows page file from my boot SSD, which is running low on space, to a second fast drive, as it was constantly filling up. On a system with 32GB of RAM that would eat up to 64GB of your SSD, but on a system with 128GB of RAM you're looking at 256GB of storage space lost. That's significant, and back when I was building my system, when 128GB RAM kits were becoming a thing, a 2TB SSD was still an eye-watering four-digit purchase.

 

Thankfully prices have gone down on both RAM and SSDs, so it probably wouldn't break the budget to have a reasonable 32GB of RAM and a 2TB SSD. The issue today is M.2, since the form factor limits how big the SSD can go compared to a 2.5-inch SATA drive. With some motherboards only giving you a single M.2 slot, you'll most definitely need a 2.5-inch SSD as well to make it all work nicely.

 

The thing about paging/swapping is that it can conceptually wear out typical NAND flash quicker than traditional consumer bulk-storage usage does. The one thing that has saved SSDs is that all the major operating systems have defaulted to a dynamic scheme where they leverage disk space only as necessary. The default maximum is still twice the amount of physical memory in the system.

 

If you need more than what an M.2 NVMe drive can provide on a desktop, you can get an M.2-to-U.2 adapter and cable a U.2 drive into the system. At that point the drives are server class, with server-class capacities and prices. For laptops, the limitation is indeed just M.2.


7 hours ago, kumicota said:

> Used SMB as a network share to put swap on it

> A lot of people recommend disabling swap

 

LTT needs to hire some people who know what they are doing, instead of coming up with video ideas and doing poor research on them. This video is absolutely full of terrible practices.

Game server admin here,

 

A point that was made clear in the video is that you shouldn't use network storage as swap because of the extreme latency. LTT never actually recommended it as a viable way to increase swap.

 

However, whether or not to use swap IS a question that people should consider. In my case of game server hosting, at least, swap does cause some performance and/or stability issues in most game servers besides Minecraft. Minecraft basically requires swap, because Java processes will just keep using more and more memory until they are restarted, dumping old stuff into swap, unless they have some kind of built-in memory cleanup function like desktop Minecraft and some sketchtastic server plugins have.

 

Overall I'd say the video was entertaining while displaying the obvious downsides of the presented solutions, without getting so technical that non-tech-savvy computer users would get lost in the sauce. But you can come to whatever conclusion you wish as long as you don't watch the whole video, right kumicota?


7 hours ago, Nystemy said:

To be fair.

On my Windows workstation I turned off all paging on the system drives, since Windows has this abhorrent tendency to swap stuff out to slower storage if one leaves an application in the background for even a few minutes, regardless of how much of one's memory one actually uses. It becomes rather silly to have to wait seconds for an application to be read back in, and sometimes an application crashes during the process... (So far I have only managed to crash my computer twice in about 4-5 years of running at a "measly" 32 GB of RAM, and now with 64 GB I have only done it once more. Simulating fur in Blender is memory intensive!)

 

So far I haven't noticed many swap-related issues on my Ubuntu machine, so maybe Linux is less eager to throw stuff to disk just because one hasn't touched it in a bit.

 

But logically, if one has a good amount of high-speed memory, then a swap space shouldn't be a bad idea as a backup for when memory starts to overfill. If memory fills up too suddenly, though, there won't be time to shuffle anything away regardless, so a crash is more or less imminent if one gets close to 100% RAM utilization and isn't careful. And as stated in the video, RAM isn't particularly expensive (and most users don't really use a lot of RAM to start with).

With Linux you can change how "swappy" it is. I had my laptop, with only 8GB, tuned to be a little less swappy than stock, and by god it will fill all 8GB of RAM and then start chugging into swap on the SATA SSD noticeably and get very slow, but it hasn't crashed as long as you can be patient and close some stuff. Before altering the settings it would start swapping and slowing well before memory was exhausted, which made the whole system slow at higher memory load; now it only gets slow once memory is actually full.

It's one of the tweaks I forgot to re-tweak after a clean install.
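For reference, the knob in question is presumably vm.swappiness; reading and tuning it is one line each (the value 10 below is just an illustrative choice, not a recommendation):

```shell
# Show the current value (0-200 on recent kernels; the default is usually 60).
cat /proc/sys/vm/swappiness
# Lower it for this boot only (root required); smaller values make the kernel
# keep anonymous pages in RAM longer before swapping them out:
# sudo sysctl vm.swappiness=10
# Persist it across reboots:
# echo 'vm.swappiness=10' | sudo tee /etc/sysctl.d/99-swappiness.conf
```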


10 hours ago, Bitter said:

With Linux you can change how "swappy" it is. I had my laptop, with only 8GB, tuned to be a little less swappy than stock, and by god it will fill all 8GB of RAM and then start chugging into swap on the SATA SSD noticeably and get very slow, but it hasn't crashed as long as you can be patient and close some stuff. Before altering the settings it would start swapping and slowing well before memory was exhausted, which made the whole system slow at higher memory load; now it only gets slow once memory is actually full.

It's one of the tweaks I forgot to re-tweak after a clean install.

That does sound nice, having a threshold for when it starts swapping, unlike Windows, which just does it whenever...

 

It is honestly annoying when Windows decides to swap half of an 18 GB Blender session off to disk just because one hasn't poked at it in a couple of hours (because one might be working on another Blender project in parallel...). And I at least don't have fast enough storage to casually pull a few GB back in quickly enough for the program not to feel sluggish about it.

 

Though having no swap space is fine as long as one stays under about 70% usage. Above that, it is good to keep an eye on memory usage, and above 90% one has to be careful in most cases. Any program can sporadically allocate a bit more memory, and with potentially hundreds of processes, a central process that stalls temporarily can cause buffering in every process that uses it and a sudden spike in memory usage. So above 95%, a system crash is more or less imminent without swap space, in my experience.


12 hours ago, Jackson Barker said:

A point that was made clear in the video is that you shouldn't use network storage as swap because of the extreme latency. LTT never actually recommended it as a viable way to increase swap.

You can lower the latency a lot by using something like NVMe-oF, but it would still lag badly. My main problem is mostly with SMB: SMB on Linux and Windows is extremely slow and heavy. Like in the video, where they show using it for video ingestion and then complain that it isn't fast, even with extremely powerful hardware. They should at least have used NFS there instead of SMB.

 

12 hours ago, Jackson Barker said:

However, whether or not to use swap IS a question that people should consider

True, swap isn't bad and can help a lot, but outright saying that it is bad is wrong on so many levels.


"leaving only the question of: how will that perform?"

 

and I immediately said out loud... "not well" 😛

 

The conclusion, of course: don't bother with swap space.

 

Though Linus didn't mention that actively relying on swap space on an SSD puts unnecessary wear on the SSD.

One of the earliest tweaks people made to OS settings when SSDs came along was disabling the swap space, so the system never put undue wear on the drive.
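That old tweak is still only a couple of commands on Linux; a quick sketch, with the destructive steps left commented out:

```shell
# List active swap areas (header line plus one row per device or file).
cat /proc/swaps                 # `swapon --show` is the modern equivalent
# Turn all swap off for this boot (root required):
# sudo swapoff -a
# To keep it off permanently, also comment out the swap line(s) in /etc/fstab.
```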

 

Of course, as Linus mentioned, RAM is pretty cheap, so just buy more RAM.

 

IMO a budget (<£600) build should stick with 16GB, but above that nowadays you should aim for 32GB.

~$100 will get you a 2x16GB kit at 3200MHz, and ~$60 will get you 2x8GB at 3200MHz.

On the higher end, 2x32GB at 3600MHz can be had for ~$250, which isn't bad IMO considering such a system would likely have a total budget of around $2000.

I think 10-15% of the total budget for RAM is pretty solid.

 

CPU: Intel i7 3930k w/OC & EK Supremacy EVO Block | Motherboard: Asus P9x79 Pro  | RAM: G.Skill 4x4 1866 CL9 | PSU: Seasonic Platinum 1000w Corsair RM 750w Gold (2021)|

VDU: Panasonic 42" Plasma | GPU: Gigabyte 1080ti Gaming OC & Barrow Block (RIP)...GTX 980ti | Sound: Asus Xonar D2X - Z5500 -FiiO X3K DAP/DAC - ATH-M50S | Case: Phantek Enthoo Primo White |

Storage: Samsung 850 Pro 1TB SSD + WD Blue 1TB SSD | Cooling: XSPC D5 Photon 270 Res & Pump | 2x XSPC AX240 White Rads | NexXxos Monsta 80x240 Rad P/P | NF-A12x25 fans |


  • 2 weeks later...

What an informative video. Linux is a great operating system. Now I know what cache within a processor is.


  • 4 weeks later...

Windows REQUIRES a minimal amount of swap space because its memory manager does not support overcommit.

 

It needs to be able to promise overeager or poorly written programs more capacity than they will ever require. This is the difference between system commit and physical memory usage: commit = how much was promised; memory usage = how much of what was promised contains actual data.

 

Example: my machine currently has 27.3 GB of commit charge but only 18.0 GB of physical usage.
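Linux, by contrast, overcommits by default and exposes the equivalent accounting in /proc/meminfo; a quick, unprivileged way to see your own numbers on a Linux box:

```shell
# CommitLimit  = the total the kernel is willing to promise;
# Committed_AS = how much has been promised so far (the "commit charge" analogue).
grep -E 'CommitLimit|Committed_AS' /proc/meminfo
# The policy knob: 0 = heuristic overcommit (default), 1 = always allow, 2 = never.
cat /proc/sys/vm/overcommit_memory
```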

