
The video RAM information guide

D2ultima
18 hours ago, Sett said:

Well, like I said, it's technically correct.

The issue I have is that it might be somewhat misleading, just adding up the bandwidth - kind of like just adding up VRAM. You can do neither; that's not how it works. A card's performance is a card's performance. You can sort of add up the results in SLI, but if a card has some issue, SLI won't fix it.
Anyway, that's just nitpicking on my part. I learned quite a bit here, and will definitely be redirecting people with questions this way.

Haha, that's all fine, I'm good with the technicalities. There's often a line where you simply CANNOT explain something in more depth or people just won't get it; this would be one of those things. But yeah, it doesn't make one card faster at all.



  • 4 weeks later...

We do not need 16GB HBM2 for 1080p? O3O

 


12 hours ago, Nena360 said:

We do not need 16GB HBM2 for 1080p? O3O

Give it another two years; with the new consoles, 8GB will probably become the minimum for 1080p.



One thing I've been looking at recently is whether or not VRAM usage as reported by utilities like GPU-Z is an accurate representation of what is required. That is, if you run a game and GPU-Z says, say, 5GB is being used, the assumption is that the game requires 5GB of VRAM for whatever particular settings you're using, and that having less results in memory swapping issues. As much research as I can find on the interwebs suggests that games may not actually use all of the RAM that's reported as "used"; rather, the game requested to reserve that space, and this is merely reported as used by whatever reporting mechanism.
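As a rough illustration of where that "used" number comes from, here's a minimal sketch using NVIDIA's NVML bindings (pynvml is an assumption here; other vendors have their own equivalents). The figure the driver returns counts granted allocations, not memory the game actually touches:

```python
# Sketch: read "used" VRAM the way monitoring tools do, via NVML.
# Assumes an NVIDIA GPU and the pynvml bindings (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
info = pynvml.nvmlDeviceGetMemoryInfo(handle)

# "used" counts every allocation the driver has granted, including
# space a game reserved but may never actually touch.
print(f"used:  {info.used / 2**30:.2f} GiB")
print(f"total: {info.total / 2**30:.2f} GiB")

pynvml.nvmlShutdown()
```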

 

The thing that made me start to question the accuracy of VRAM reporting was this: https://www.techspot.com/article/1600-far-cry-5-benchmarks/

 

The tl;dr is that Far Cry 5 reported using up to 4GB of VRAM at 4K, which you'd expect to make a card with less memory perform horribly. Yet the author states that a card with 3GB of VRAM (notably the 1060 3GB, since it's easy to compare to the 6GB version) wasn't a stuttery mess at 1440p and 4K, the resolutions where the game stated it used more VRAM than the card had. What's even stranger is that on a 2GB card like the GT 1030, performance didn't suffer beyond what was expected and more or less kept a linear drop with higher resolutions.

 

The website also ran a test with the RTX 2060, a 6GB VRAM card: https://www.techspot.com/article/1785-nvidia-geforce-rtx-2060-vram-enough/

 

They picked the games that reported using more VRAM on the RTX 2070 (an 8GB card) than the RTX 2060 had. They also found that the RTX 2060 can handle those games, reporting no issue with stuttering or whatnot.

 

So there appears to be a difference between how much video RAM a game requests and how much it actually uses before a shortage negatively affects performance.


On 4/9/2019 at 5:43 PM, Mira Yurizaki said:

One thing I've been looking at recently is whether or not VRAM usage as reported by utilities like GPU-Z is an accurate representation of what is required. That is, if you run a game and GPU-Z says, say, 5GB is being used, the assumption is that the game requires 5GB of VRAM for whatever particular settings you're using, and that having less results in memory swapping issues. As much research as I can find on the interwebs suggests that games may not actually use all of the RAM that's reported as "used"; rather, the game requested to reserve that space, and this is merely reported as used by whatever reporting mechanism.

 

The thing that made me start to question the accuracy of VRAM reporting was this: https://www.techspot.com/article/1600-far-cry-5-benchmarks/

 

The tl;dr is that Far Cry 5 reported using up to 4GB of VRAM at 4K, which you'd expect to make a card with less memory perform horribly. Yet the author states that a card with 3GB of VRAM (notably the 1060 3GB, since it's easy to compare to the 6GB version) wasn't a stuttery mess at 1440p and 4K, the resolutions where the game stated it used more VRAM than the card had. What's even stranger is that on a 2GB card like the GT 1030, performance didn't suffer beyond what was expected and more or less kept a linear drop with higher resolutions.

 

The website also ran a test with the RTX 2060, a 6GB VRAM card: https://www.techspot.com/article/1785-nvidia-geforce-rtx-2060-vram-enough/

 

They picked the games that reported using more VRAM on the RTX 2070 (an 8GB card) than the RTX 2060 had. They also found that the RTX 2060 can handle those games, reporting no issue with stuttering or whatnot.

 

So there appears to be a difference between how much video RAM a game requests and how much it actually uses before a shortage negatively affects performance.

Okay, so, it's a bit convoluted. But basically, it's similar to how Windows uses more RAM the more you have in your system.

 

It's caching, really. But the problem with testing high resolutions on fairly weak video cards like the 1060 3GB is that if your GPU's fps is already lower than what the vRAM choke would bring, you'd see no change. The best way to test is with high-memory-requirement, low-resolution games: games that can eat up craptons of vRAM at low resolutions... for example, Killing Floor 2 with texture streaming disabled. That will DEVOUR vRAM like nothing else, well up to and over 7GB at 1080p. On a 2GB or 3GB card, compared to higher-vRAM variants, the difference should be visible. Try an RX 580 4GB vs 8GB, or a 980M 4GB vs 8GB, or a 1050 2GB vs 4GB, etc. See if performance diverges between them when they are otherwise similar with texture streaming enabled.

 

Basically, it's hard to test. But high resolutions are NOT the way to test properly, because the GPU needs room to flex before it can be limited by vRAM. Also, it may not be stutters all the time; it could be pop-in issues and other such things. Like if you've ever managed to drive fast enough for long enough in a GTA game, eventually it only loads a basic road texture and no people or cars or buildings take form, because they can't be streamed in quickly enough due to the way the engine is designed (GTA SA and GTA 4 do this reliably).



16 minutes ago, D2ultima said:

It's caching, really. But the problem with testing high resolutions on fairly weak video cards like the 1060 3GB is that if your GPU's fps is already lower than what the vRAM choke would bring, you'd see no change. The best way to test is with high-memory-requirement, low-resolution games: games that can eat up craptons of vRAM at low resolutions... for example, Killing Floor 2 with texture streaming disabled. That will DEVOUR vRAM like nothing else, well up to and over 7GB at 1080p. On a 2GB or 3GB card, compared to higher-vRAM variants, the difference should be visible. Try an RX 580 4GB vs 8GB, or a 980M 4GB vs 8GB, or a 1050 2GB vs 4GB, etc. See if performance diverges between them when they are otherwise similar with texture streaming enabled.

I don't see how a lower resolution would mean anything other than freeing up VRAM. Though I can see how disabling texture streaming would, except that isn't a feature you can toggle in most games these days.

 

16 minutes ago, D2ultima said:

Basically, it's hard to test. But high resolutions are NOT the way to test properly, because the GPU needs room to flex before it can be limited by vRAM. Also, it may not be stutters all the time; it could be pop-in issues and other such things. Like if you've ever managed to drive fast enough for long enough in a GTA game, eventually it only loads a basic road texture and no people or cars or buildings take form, because they can't be streamed in quickly enough due to the way the engine is designed (GTA SA and GTA 4 do this reliably).

Either way, this is telling me that whatever the game or a tool reports as VRAM consumption doesn't exactly correlate with overall performance. All this is telling me is that even if a game wants, say, 7GB and you only have 6GB, it won't result in appreciably degraded performance unless you're severely starving the game of VRAM in some other way.


23 hours ago, Mira Yurizaki said:

I don't see how a lower resolution would mean anything other than freeing up VRAM. Though I can see how disabling texture streaming would, except that isn't a feature you can toggle in most games these days.

 

Either way, this is telling me that whatever the game or a tool reports as VRAM consumption doesn't exactly correlate with overall performance. All this is telling me is that even if a game wants, say, 7GB and you only have 6GB, it won't result in appreciably degraded performance unless you're severely starving the game of VRAM in some other way.

Low render resolution (the number of pushed pixels) will reduce vRAM usage A LITTLE, but improve performance a LOT. If your GPU is producing 30fps, then a vRAM choke is probably not going to show up. If your game is getting 100fps, then it's more likely you'll notice the hitching. Raising texture resolution, shadowmap resolution, etc. can vastly increase vRAM requirements without increasing the graphical power required to run the game. This is why low resolution + high-resolution assets is the way to check what vRAM limits do to games. If you're already at a VERY hard GPU bottleneck, then vRAM downsides won't even need to show themselves, because the time between each frame is long enough to cover the lapse in readily available assets in memory.
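To put rough numbers on that, here's a back-of-the-envelope sketch (assuming uncompressed 32-bit buffers and textures; real engines compress and pad, so treat these as illustrative only):

```python
# Back-of-the-envelope: why render resolution barely moves vRAM
# while asset quality dominates it. Assumes uncompressed RGBA8 (4 B/px).
def mib(n_bytes):
    return n_bytes / 2**20

framebuffer_1080p = 1920 * 1080 * 4          # one 32-bit colour buffer
framebuffer_4k    = 3840 * 2160 * 4
texture_4k_mipped = 4096 * 4096 * 4 * 4 / 3  # one 4096x4096 texture + mip chain

print(f"1080p colour buffer: {mib(framebuffer_1080p):6.1f} MiB")  # ~7.9 MiB
print(f"4K colour buffer:    {mib(framebuffer_4k):6.1f} MiB")     # ~31.6 MiB
print(f"one 4K texture:      {mib(texture_4k_mipped):6.1f} MiB")  # ~85.3 MiB
# Going 1080p -> 4K adds tens of MiB per buffer; a few hundred
# high-res textures add GiB. Hence: test vRAM limits with heavy
# assets at low resolution, where the GPU still has headroom.
```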

 

And yes, what a game says it's using isn't what it *needs*, but letting it have more can only improve your experience. I still feel 4GB is the bare minimum though, and 6GB is a comfortable floor, with 8GB being nice and 12GB or more being a good top-out. Also remember that things like NVENC add vRAM usage, whether through OBS or ShadowPlay etc.



  • 1 month later...
On 4/15/2019 at 3:06 AM, D2ultima said:

Low render resolution (the number of pushed pixels) will reduce vRAM usage A LITTLE, but improve performance a LOT. If your GPU is producing 30fps, then a vRAM choke is probably not going to show up. If your game is getting 100fps, then it's more likely you'll notice the hitching. Raising texture resolution, shadowmap resolution, etc. can vastly increase vRAM requirements without increasing the graphical power required to run the game. This is why low resolution + high-resolution assets is the way to check what vRAM limits do to games. If you're already at a VERY hard GPU bottleneck, then vRAM downsides won't even need to show themselves, because the time between each frame is long enough to cover the lapse in readily available assets in memory.

 

And yes, what a game says it's using isn't what it *needs*, but letting it have more can only improve your experience. I still feel 4GB is the bare minimum though, and 6GB is a comfortable floor, with 8GB being nice and 12GB or more being a good top-out. Also remember that things like NVENC add vRAM usage, whether through OBS or ShadowPlay etc.

Well, I'm a bit confused about this. There's a video where both the CPU and GPU are not being utilized to the max because of RAM. Is this the fault of the RAM not being fast enough to read from the drives and display it? Or is this off topic? To put it simply, faster RAM gives more performance; what I want to know is why.


3 hours ago, Oalei said:

Well, I'm a bit confused about this. There's a video where both the CPU and GPU are not being utilized to the max because of RAM. Is this the fault of the RAM not being fast enough to read from the drives and display it? Or is this off topic? To put it simply, faster RAM gives more performance; what I want to know is why.

You're going to need to explain yourself more. I do not get what you're asking. It sounds like you're talking about system RAM which is not the point of this topic, and while I could write a fairly competent RAM guide, it would not really go anywhere.



2 hours ago, D2ultima said:

You're going to need to explain yourself more. I do not get what you're asking. It sounds like you're talking about system RAM which is not the point of this topic, and while I could write a fairly competent RAM guide, it would not really go anywhere.

OK, so there are some benchmarks with 2133MHz and 3000MHz RAM. The 2133MHz setup only shows about 70% CPU and 60% GPU utilization, while the 3000MHz one shows 90-100% CPU and 90% GPU. Well, maybe the game itself is the problem there.


8 hours ago, Oalei said:

OK, so there are some benchmarks with 2133MHz and 3000MHz RAM. The 2133MHz setup only shows about 70% CPU and 60% GPU utilization, while the 3000MHz one shows 90-100% CPU and 90% GPU. Well, maybe the game itself is the problem there.

This is the video RAM guide, as in memory on your video card.

 

You are talking about system RAM. And yes, system RAM is a big factor in CPU performance and can be a massive bottleneck, but this isn't the thread for it.



3 minutes ago, D2ultima said:

This is the video RAM guide, as in memory on your video card.

 

You are talking about system RAM. And yes, system RAM is a big factor in CPU performance and can be a massive bottleneck, but this isn't the thread for it.

Well, sorry for wasting your time, honestly.

 

Well, for vRAM I think I've got a grasp of it; I literally read all of the comments and they gave me more knowledge (I'm just too lazy to read the stuff you made). I hope you will update it if you find something that needs to be changed.


57 minutes ago, Oalei said:

Well, sorry for wasting your time, honestly.

 

Well, for vRAM I think I've got a grasp of it; I literally read all of the comments and they gave me more knowledge (I'm just too lazy to read the stuff you made). I hope you will update it if you find something that needs to be changed.

I have things I could change and add, mainly adding GDDR5X/GDDR6/etc., but this thread is broken because it was written on IPB v3 and the IPB v4 transition permanently broke the article. I would have to start an entirely new thread and copy everything over manually for it to save correctly.



19 minutes ago, D2ultima said:

I have things I could change and add, mainly adding GDDR5X/GDDR6/etc., but this thread is broken because it was written on IPB v3 and the IPB v4 transition permanently broke the article. I would have to start an entirely new thread and copy everything over manually for it to save correctly.

Yeah, I've seen that you complained about the new update. It would be very annoying to redo it all; who would, if they didn't need or want to? Well, I'll still wait for a new topic, maybe a V2, since this one just broke.


  • 4 months later...
2 hours ago, ALIBENWA said:

I started writing this guide mainly for the top section, to debunk misinformation people seem to have regarding vRAM and its relation to memory bandwidth, but I figured I might as well just go the full mile and explain as best I can what most people need to know about GPU memory anyway. If I've somehow screwed up somewhere, let me know. I probably have. I'll fix whatever I get wrong. And thank you to everyone who has contributed and corrected things I didn't get right! Unlike my SLI guide, much of the information here was confirmed post-writing.

Why do my posts always get picked for necros



  • 2 weeks later...

Looking for some advice here related to GPUs.

I need to assemble several PCs as part of an education program, and I'm bound to a low budget, like 300~450 bucks each.

I'm focused on FM2/FM2+ iGPUs (2 and 4 cores) (corporate donations).

I've already got 8GB of 1600MHz RAM for each (donated).

The PCs will be used for office, web, intranet, and 1080p/30 streaming like YouTube.

Do I need to look for a dedicated GPU, even a cheap one?


This is a good guide, OP. You should do another one on video card hardware: the GPU itself, shaders, the supposed multiple cores (which are just shader units?), compute abilities.

For example, why do so many video cards have around the same clock speed but a huge difference in performance?


6 hours ago, blaze909 said:

Looking for some advice here related to GPUs.

I need to assemble several PCs as part of an education program, and I'm bound to a low budget, like 300~450 bucks each.

I'm focused on FM2/FM2+ iGPUs (2 and 4 cores) (corporate donations).

I've already got 8GB of 1600MHz RAM for each (donated).

The PCs will be used for office, web, intranet, and 1080p/30 streaming like YouTube.

Do I need to look for a dedicated GPU, even a cheap one?

iGPUs should be perfectly fine

5 hours ago, masethekiller said:

This is a good guide, OP. You should do another one on video card hardware: the GPU itself, shaders, the supposed multiple cores (which are just shader units?), compute abilities.

For example, why do so many video cards have around the same clock speed but a huge difference in performance?

I'm not sure what specifically I could do; I don't know enough about that, since each architecture is very different and it's hard to quantify changes or specify when they will make improvements. I could make general statements, like how the backend barely changed between Maxwell and Pascal but Turing is significantly faster in a lot of aspects that don't show up in tests like Fire Strike, and how as more games make use of that hardware they'll pull far ahead of their Pascal counterparts (like how the 2070 Super > 1080 Ti in COD: MW, Superposition, and Apex Legends but sticks near a 1080 in Sekiro or something), but that's just going to be too many things I can't prove.



On 8/16/2014 at 3:53 AM, jmaster299 said:

So yes, you can stick 8192MB of VRAM on a 128-bit bus, but you will never in a hundred years be capable of achieving a memory clock rate that will be capable of using all that VRAM on such a small bus size.

A bit off topic I will be, but this comment is rapidly approaching being proven false with the RX 5500. (It will have a 128-bit bus and a version with 8GB of GDDR6; I do believe there will be games that will be able to use it all, and it's only been 5 years.)
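For reference, the back-of-the-envelope bandwidth math (the 14 Gbps GDDR6 figure is an assumption based on the reported RX 5500 specs):

```python
# Rough GDDR bandwidth math: bus width in bytes times effective
# per-pin data rate. The 14 Gbps GDDR6 figure for the RX 5500 is
# an assumption from its reported specs.
def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps  # GB/s

print(bandwidth_gbs(128, 14))  # 224.0 GB/s - RX 5500-class GDDR6
print(bandwidth_gbs(128,  7))  # 112.0 GB/s - a typical 128-bit GDDR5 card
print(bandwidth_gbs(256, 14))  # 448.0 GB/s - e.g. a 256-bit GDDR6 card
```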



On 10/13/2019 at 11:36 AM, BTGbullseye said:

A bit off topic I will be, but this comment is rapidly approaching being proven false with the RX 5500. (It will have a 128-bit bus and a version with 8GB of GDDR6; I do believe there will be games that will be able to use it all, and it's only been 5 years.)

That comment was false from the start



On 10/14/2019 at 11:41 AM, D2ultima said:

That comment was false from the start

Not entirely...



On 10/15/2019 at 6:26 PM, BTGbullseye said:

Not entirely...

Nope, it was always false. 100%. It was never not false. Filling a memory buffer is never impossible. Whether the card's memory bandwidth was useful for the situations which were likely to fill that buffer (large shadowmap resolutions, high resolutions and multisampling techniques, etc) is another story. But filling it was never impossible.
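A crude sketch of that point (assuming PyTorch on a CUDA-capable card; the allocation loop is just an illustration, any allocator would behave the same):

```python
# Sketch: filling a GPU memory buffer is an allocation problem, not a
# bandwidth problem - a narrow bus slows transfers, it doesn't stop
# the space from being used. Assumes PyTorch with a CUDA GPU.
import torch

chunks = []
try:
    while True:
        # grab 256 MiB at a time (float32 = 4 bytes per element)
        chunks.append(torch.empty(256 * 2**20 // 4, device="cuda"))
except RuntimeError:  # CUDA out-of-memory surfaces as a RuntimeError
    used_gib = torch.cuda.memory_allocated() / 2**30
    print(f"filled ~{used_gib:.1f} GiB of vRAM before running out")
```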



1 hour ago, D2ultima said:

Whether the card's memory bandwidth was useful for the situations which were likely to fill that buffer (large shadowmap resolutions, high resolutions and multisampling techniques, etc) is another story.

That is the implied situation if you read all the previous comments. Context is king.



17 hours ago, BTGbullseye said:

That is the implied situation if you read all the previous comments. Context is king.

That is not what was said. The statement read "you are never going to get a memory clock fast enough to fill that vRAM buffer on 128-bit".

 

This statement, as a whole, is false. I said whether it was useful, but that was also a statement I shouldn't have made. What I should have said was "whether the card is useful", because higher resolutions and multisample AA types etc. hurt core performance far more than memory, but shadowmaps are still a case that can eat up a vRAM buffer without particularly hurting the core or memory bandwidth except for initial loading. In other words, using the kind of low-end card where you would find a 128-bit memory bus in a situation that needs 8GB of vRAM is ill-advised, and there are very few cases where it makes sense; but if you want to throw a 128-bit, 8GB card at something like Alien: Isolation, where 16k x 16k shadowmap resolution was possible via cfg edits and only your vRAM buffer size mattered, it would use it perfectly fine.
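For scale, the arithmetic on that shadowmap example (assuming a 32-bit depth format):

```python
# A 16k x 16k shadowmap at 32 bits per texel occupies a full GiB of
# vRAM on its own, while barely touching core or bandwidth after the
# initial render.
res = 16384
bytes_per_texel = 4  # assuming a 32-bit depth format
size_gib = res * res * bytes_per_texel / 2**30
print(f"{res} x {res} shadowmap: {size_gib:.0f} GiB")  # exactly 1 GiB
# Four cascades at that resolution would be 4 GiB of vRAM alone.
```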

 

Edit: To clarify my statement about core performance... vRAM chokes don't usually cause hitching longer than the inherent delay of low-fps gameplay around 30fps. It's why the low-vRAM-buffer R9 Fury and R9 Fury X never showed low-vRAM hitching/stallouts in testing: the tests ran at 4K and aimed to keep performance around 30fps, give or take, actually adjusting game settings down or up to stay there.

There have also been tests between 2GB and 4GB vRAM cards at higher resolutions where the 2GB cards don't show much variance in frametime, but the games were running between 25 and 32fps anyway, so any stutter is masked by the already-low FPS.




  • 4 weeks later...
On 10/12/2019 at 10:50 AM, masethekiller said:

For example, why do so many video cards have around the same clock speed but a huge difference in performance?

If you're talking about core clock speed in general, it's because the higher-performing GPUs have more processing units. Graphics processing as a whole is what's known as embarrassingly parallel. The best way to increase graphics rendering performance is literally to throw more processing units at it. The only reason I can think of why GPUs appear to have floated around the same clock speed over time is that clock speed likely has a much stronger effect on power consumption than adding more processing units does, and hardware has to stay within a certain power envelope.
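A toy sketch of what "embarrassingly parallel" means here (the shade function is a hypothetical stand-in for real shading work):

```python
# Toy model of embarrassingly parallel rendering: every pixel's colour
# depends only on its own inputs, so the work divides cleanly across
# however many processing units you have.
from multiprocessing import Pool

def shade(pixel):          # hypothetical per-pixel job
    x, y = pixel
    return (x ^ y) & 0xFF  # stand-in for real shading math

if __name__ == "__main__":
    pixels = [(x, y) for y in range(1080) for x in range(1920)]
    with Pool() as pool:   # one worker per CPU core
        frame = pool.map(shade, pixels)
    # Runtime scales roughly 1/N with worker count, because no pixel
    # waits on any other - exactly why GPUs add units rather than clocks.
    print(len(frame), "pixels shaded")
```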

