
NVIDIA releases CMP lineup and reduces hashing rates on GeForce cards

18 minutes ago, porina said:

Random thoughts: "Solutions" don't have to be perfect. They just have to work well enough for long enough to have an impact. Too often some here seem to think if the solution isn't 100% then it isn't worth doing at all.

It doesn't have to be perfect, but it does have to work, which it may not for the targets of these changes, and it may well impact other things. It's not an issue of needing a perfect solution; it's a matter of introducing something that risks inadvertent impact, with little to no recourse to fix it.

 

The one claiming to have a perfect solution is Nvidia; they are the ones saying it only affects Eth while giving no evidence to back that statement.

 

Something is not worth doing if the risk/reward is not there to merit it, which I would say is a very easy argument to make here. First, it's the only GPU with this, so it helps nobody wanting faster gaming cards; second, you can mine something else; third, wholesale bulk buyers still have a fairly decent shot at getting around it if they want to.

 

18 minutes ago, porina said:

Implicitly it will be a bigger risk to those workloads that are memory heavy.

That's most compute workloads nowadays.


I have, however, thought of a potential solution to get "gaming GPUs" to "gamers": sell the GPU packages to AIBs at above 'cost' with an MDF (market development funds) rebate program attached, so the AIB can claim money back with each sale. If Nvidia sees any funny business, there's no rebate and the AIB loses money. The rebate needs to sit with the AIB so retailers can't just raise prices and ignore it, or pass it on to the customer; the AIB then has to create its own rebate program, or an equivalent retail sales information feedback, which puts all the risk on the AIB, so they will actually want every sales record they can get in order to claim back their MDF.

 

It's actually one of the few times Nvidia being jerks to AIBs is the better solution lol.

 

Supply and demand may be the core of the issue, but if large-scale mining purchases are thought to be the cause, then the only solution with any chance of working is within the supply chain itself, not artificial performance limitations further down the chain in the hope of making the cards less appealing. If miners are buying laptops to mine on, then they are just as likely to buy RTX 3060s anyway, even with the lower performance, so long as they turn a net profit.


26 minutes ago, leadeater said:

It doesn't have to be perfect, but it does have to work, which it may not for the targets of these changes, and it may well impact other things. It's not an issue of needing a perfect solution; it's a matter of introducing something that risks inadvertent impact, with little to no recourse to fix it.

The way I see it, the push for the 3060 is very heavily towards gaming. That's not to say it can't be used for compute, and we won't know the impact until test results are released. I'd guess cards are already in the hands of reviewers so they can get their reviews out around the launch a few days away, and this will no doubt be part of that testing. But looking at it from a gaming-first perspective, what other kinds of workloads are likely within that audience?

 

In the worst case (for nvidia), let's say the nerf is bypassed and significant use cases are identified that were unintentionally and noticeably impacted. They could simply issue a revised driver without the nerf. They get a little egg on their face, but it's no big deal.

 

26 minutes ago, leadeater said:

The one claiming to have a perfect solution is Nvidia; they are the ones saying it only affects Eth while giving no evidence to back that statement.

Solution, yes; I don't see them saying it is perfect. I don't for a moment think they want to have to do this, but it is something they can try in order to help.

 

26 minutes ago, leadeater said:

Something is not worth doing if the risk/reward is not there to merit it, which I would say is a very easy argument to make here. First, it's the only GPU with this, so it helps nobody wanting faster gaming cards; second, you can mine something else; third, wholesale bulk buyers still have a fairly decent shot at getting around it if they want to.

Risk/reward of course depends on what you want to do. I don't see a high risk in this approach. Again, we have to await real-world test results, but I'd imagine nvidia would consider this a success if it goes some way towards satisfying gaming demand and is not hacked or otherwise bypassed within, say, a month of launch. I don't for a minute think that even if miners don't buy a single 3060 there will be enough supply to fill the gaming GPU hole we have now. Also, we all know nothing is uncrackable: in cryptography the goal is "good enough", such that a break is unlikely within a meaningful timescale. Don't confuse that with the marketing message. We'll have to wait and see how it really goes.

 

If this ends up working well on the 3060, I can see them applying it to future higher-end refreshes, such as the long-rumoured high-VRAM models.

 

26 minutes ago, leadeater said:

That's most compute workloads nowadays.

Again, we have to look at the workloads a gamer might plausibly run. RAM performance is important, but I'm not so sure ethereum's level of memory access is representative of many other cases.

 

 

Overall I feel that many are attacking this because they don't like the idea of a nerf, but they're viewing it from a position that isn't that of the majority of the actual target market. Chances are people actively participating on this forum are more advanced than average and look at things in a different way. I am curious how I arrived at my viewpoint, which seems to be in the minority compared to posts around here. I can't blame all of it on AMD fanboyism, which is about the only thing in the PC tech community that isn't in short supply right now. And I don't think I've disagreed as strongly with Linus at any point in the past. I've stuck around because, unlike with many others in the tech media community, I generally agree with his interpretation of events, but the gap on this one seems too big to close. He has admitted to saying things before to provoke a reaction, and I don't know if this might be another case of that.



On 2/18/2021 at 3:49 PM, porina said:

That's a very interesting strategy. My first thought was: can you get around it simply by rolling back to older drivers? Since they're only applying this to the 3060, which hasn't launched yet, I guess it can work, as there's no "wild" 3060 driver to roll back to, but this approach wouldn't work for existing released models. If the press has a pre-release driver, is the nerf baked into that as well?

 

There's also another concern with this approach. I'm not a fan of crippling in principle. I get the reasons why in this case, but could there be unintended effects for people who use the GPU for compute but not mining? What if it interferes with your video editing performance, for example? Or folding@home? They have to make it targeted enough towards Eth mining that it can't be worked around by changing the mining code, but not so broad that other uses are impacted too. Unless nvidia wants to make the 3060 strictly a gaming-only GPU.

 

It'll be interesting to see how this plays out.

Another reason might be that people could start a lawsuit if they tampered with the performance of existing cards bought before a driver nerf was implemented, but I totally agree about folding@home and other apps... UwU


 

Link to comment
Share on other sites

Link to post
Share on other sites

32 minutes ago, porina said:

Again, we have to look at the workloads a gamer might plausibly run. RAM performance is important, but I'm not so sure ethereum's level of memory access is representative of many other cases.

Almost every workload under BOINC would be; they are all compute workloads that depend on memory performance, either latency or bandwidth, some more than others. The same is true of F@H. Games may not be, but basically every workload that is not a game is, to varying degrees, and to be honest even games depend on memory performance too.

 

Here's the list of all the projects; within each project there are many different job types (or sub-projects; they all tend to use similar but slightly different names):

https://boinc.berkeley.edu/wiki/Project_list

 

I think that's ~39 projects; conservatively, let's say there are 3 different job types/projects within each of those, so 117 different computation types, all processing different parts of, or entirely different, source data.

 

So yeah, sure, gamers might not be doing much BOINC/F@H, but that doesn't mean others are not, and it doesn't mean putting systems in place to limit performance to appease one set of users is a good idea in the first place.

 

If you want to sell graphics cards to gamers, here's a crazy idea: sell graphics cards to gamers. If that's not working, then figure it out; it's a point-of-sale issue.

 

32 minutes ago, porina said:

Overall I feel that many are attacking this because they don't like the idea of a nerf, but they're viewing it from a position that isn't that of the majority of the actual target market.

I don't like it because I'm extremely confident it will do nothing to solve the supply problems or get graphics cards into the hands of gamers, and the given "solution" brings only unexplained potential problems.

 

People buy GeForce cards because Tesla/Quadro cards are prohibitively expensive; they aren't just used for gaming or mining, and these CMP cards are unlikely to get proper support from BOINC or F@H because of what they are.

Link to comment
Share on other sites

Link to post
Share on other sites

17 hours ago, Moonzy said:

i understand why some people are against it, and i respect their decision to do whatever they wish with their life

 

the only valid reason i can agree with is the power consumption

and that's a very important point, so i can see why linus doesn't support it.

 

but if your local power is from green sources, or you simply don't care about global warming (you should), i don't see a reason not to mine

Human impact on global warming is much less than they lead people to believe.  There are galactic cycles that are way out of "observable" parameters.

 

Still, I provide more power than I use, which has its other benefits. They probably would have cut power last week if I didn't...



22 minutes ago, leadeater said:

Almost every workload under BOINC would be; they are all compute workloads that depend on memory performance, either latency or bandwidth, some more than others. The same is true of F@H. Games may not be, but basically every workload that is not a game is, to varying degrees, and to be honest even games depend on memory performance too.

It remains to be seen how the detection works, but I think they'd do something more than just "if memory bandwidth is high for more than 30 seconds, cut bandwidth in half" or similar.
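
Purely to illustrate why that naive rule would be too blunt, here's a toy Python sketch (thresholds made up by me, and obviously not nvidia's actual driver logic):

```python
# Toy sketch of the naive rule quoted above -- made-up thresholds, not
# nvidia's driver logic. Plain utilisation can't tell ethereum apart from
# any other bandwidth-heavy compute job.
HIGH_BW_FRACTION = 0.8   # assumed "high" fraction of peak bandwidth
WINDOW_SECONDS = 30      # assumed sustained-load window

def naive_throttle(bw_samples_gbs, peak_gbs):
    """One bandwidth sample per second; returns a speed multiplier."""
    recent = bw_samples_gbs[-WINDOW_SECONDS:]
    if len(recent) == WINDOW_SECONDS and \
            all(s > HIGH_BW_FRACTION * peak_gbs for s in recent):
        return 0.5   # "cut bandwidth in half"
    return 1.0

# A sustained folding/BOINC job would trip this exactly like a miner:
print(naive_throttle([330.0] * 40, peak_gbs=360.0))  # 3060-ish peak -> 0.5
```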

 

Since I'm not big into folding I never seriously optimised for it: does it scale that much with RAM bandwidth? Ethereum is strongly memory bound; you can reduce the core clock a lot and it makes little difference. The opposite (in the CPU world) would be something like Cinebench: sure, it must use data in RAM, but RAM performance hardly makes any impact on it. The spectrum of workloads is wide, and I am really not aware of anything that is as heavily memory bound as mining.
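
To make that spectrum concrete in the CPU world, here's a rough numpy sketch; the absolute numbers are machine-dependent, and this only illustrates which resource each end leans on:

```python
# Memory-bound vs compute-bound, illustrated with numpy on the CPU.
import time
import numpy as np

a = np.random.rand(100_000_000)      # ~800 MB, far larger than any cache
b = np.empty_like(a)

t0 = time.perf_counter()
np.copyto(b, a)                      # streaming copy: limited by RAM bandwidth
t_copy = time.perf_counter() - t0
print(f"copy:   {2 * a.nbytes / t_copy / 1e9:6.1f} GB/s moved (read + write)")

m = np.random.rand(2000, 2000)
t0 = time.perf_counter()
m @ m                                # dense matmul: ~2n^3 FLOPs, cache-friendly
t_mm = time.perf_counter() - t0
print(f"matmul: {2 * 2000**3 / t_mm / 1e9:6.1f} GFLOP/s")
```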

 

22 minutes ago, leadeater said:

If you want to sell graphics cards to gamers, here's a crazy idea: sell graphics cards to gamers. If that's not working, then figure it out; it's a point-of-sale issue.

That's less in nvidia's control than the product is.

 

22 minutes ago, leadeater said:

I don't like it because I'm extremely confident it will do nothing to solve the supply problems or get graphics cards into the hands of gamers, and the given "solution" brings only unexplained potential problems.

I look forward to arguing about how successful it was after launch 😄 I still imagine a scenario where supply is improved but still far from sufficient; I'd consider that a success. Others might not agree. It will have failed if we get proof that large numbers of people are still mining profitably on them, hack or no hack.

 

22 minutes ago, leadeater said:

People buy GeForce cards because Tesla/Quadro cards are prohibitively expensive; they aren't just used for gaming or mining, and these CMP cards are unlikely to get proper support from BOINC or F@H because of what they are.

As long as the mining cards get CUDA, or whatever the other, less used API is, there shouldn't be any problem with BOINC/folding. Mining cards will still have to run the same code at the end of the day, unless nvidia nerfs gaming performance on them 😄



57 minutes ago, porina said:

 

Overall I feel that many are attacking this because they don't like the idea of a nerf, but they're viewing it from a position that isn't that of the majority of the actual target market. Chances are people actively participating on this forum are more advanced than average and look at things in a different way. I am curious how I arrived at my viewpoint, which seems to be in the minority compared to posts around here. I can't blame all of it on AMD fanboyism, which is about the only thing in the PC tech community that isn't in short supply right now. And I don't think I've disagreed as strongly with Linus at any point in the past. I've stuck around because, unlike with many others in the tech media community, I generally agree with his interpretation of events, but the gap on this one seems too big to close. He has admitted to saying things before to provoke a reaction, and I don't know if this might be another case of that.

The entire point of going PC over console is the flexibility in what we can dedicate our compute power towards. If GPU vendors start nerfing performance for specific applications outside gaming, that rather defeats the purpose, even if only as a precedent.



57 minutes ago, porina said:

As long as the mining cards get CUDA, or whatever the other, less used API is, there shouldn't be any problem with BOINC/folding. Mining cards will still have to run the same code at the end of the day, unless nvidia nerfs gaming performance on them 😄

Yes, there would be a problem: each GPU actually goes through optimisation work to be supported, which is why Navi was not supported on F@H at all for a very long time. These CMP cards are being sold as ones that failed to meet the GeForce silicon quality standard, so who knows what problems there are with them. I don't think any scientific users are going to want potential computational errors introduced by using GPUs of known inferior quality.

 

Who even knows what the drivers are like, what actual CUDA feature support there is within them, and whether it's been verified to function correctly. The CMP lineup is sold specifically for mining workloads, nothing else. It's unlikely anyone will actually buy them for distributed computing, so it's also unlikely they'll get added to the whitelists of supported GPUs; they are a non-starter from that alone.

 

57 minutes ago, porina said:

It remains to be seen how the detection works, but I think they'd do something more than just "if memory bandwidth is high for more than 30 seconds, cut bandwidth in half" or similar.

I don't think it's that either. What I fear is that they are using application signing (most applications are already signed, games included) together with trusted publisher lists, so anything not trusted goes slow. That is a very simple way of targeting the workloads you do want to support without having to monitor every Eth mining tool and algorithm change. Nvidia can easily update such a list with each driver update, and these public keys rarely change anyway, so managing it this way wouldn't be a burden on Nvidia.

 

That means only the blessed would be able to go fast; the rest, not. Is it actually this? No idea. Is it not this? Also no idea.
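
As a rough illustration of how little driver-side logic that would need, here's a minimal Python sketch; the structure and fingerprint values are hypothetical, not a known Nvidia mechanism:

```python
# Hypothetical trusted-publisher check -- illustrative only. The driver
# would track publisher certificate fingerprints, not individual titles.
import hashlib
from typing import Optional

TRUSTED_PUBLISHERS = {
    hashlib.sha256(b"valve-cert-der-bytes").hexdigest(),  # stand-in values
    hashlib.sha256(b"ea-cert-der-bytes").hexdigest(),
}

def speed_policy(publisher_cert_der: Optional[bytes]) -> str:
    """Pick the fast or slow path from the signer of the launching binary."""
    if publisher_cert_der is None:
        return "slow"  # unsigned binary: treated like an unknown miner
    fp = hashlib.sha256(publisher_cert_der).hexdigest()
    return "fast" if fp in TRUSTED_PUBLISHERS else "slow"

print(speed_policy(b"valve-cert-der-bytes"))  # fast
print(speed_policy(None))                     # slow
```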

 

57 minutes ago, porina said:

The spectrum of workloads is wide, and I am really not aware of anything that is as heavily memory bound as mining.

Again, that depends on the project; some absolutely will be, and the variety is huge. Mining is just an algorithm like anything else; any workload that is mostly memory operations, or reliant on them, will have its performance bound by memory bandwidth or latency. Watch any HPC presentation from the last 10 years and everyone is saying performance is memory bound. BOINC is HPC, as in it actually runs those types of workloads, just without any use of MPI and optimised for single-node runs.

 

I couldn't name a specific project or its jobs because I don't look at that too closely; changing memory and core clocks while running BOINC is heavily ill-advised, as it leads to computation errors and your computer getting temporarily blacklisted from new jobs. I do know of some that are definitely not memory bound and are GPU core performance bound instead.


32 minutes ago, SGT-AMD said:

You might be able to work on the "Generic" driver that Microsoft supplies after the card is detected, unless they change that one as well.

 

The GPU isn't identified as a "GPU" until drivers are installed, afaik.

 

Meaning they don't appear in the system as an "rtx 3090" until the drivers are installed, so applications can't really use them, to my knowledge.

Even task manager doesn't detect it until drivers are installed



I do wonder if we have a glass half full/half empty situation here. I'm leaning towards the positive: how might these moves work out? That's not to say it's the right or best decision; on the other hand, we have those looking to pick at every possible case where it might not work. nvidia bad, mmkay?

 

2 hours ago, leadeater said:

Yes, there would be a problem: each GPU actually goes through optimisation work to be supported, which is why Navi was not supported on F@H at all for a very long time. These CMP cards are being sold as ones that failed to meet the GeForce silicon quality standard, so who knows what problems there are with them. I don't think any scientific users are going to want potential computational errors introduced by using GPUs of known inferior quality.

I can't speak for f@h specifically as it is not my special interest area, but in my limited experience, generally speaking, CUDA code interrogates the GPU and the software adapts itself as best it can to the hardware level returned. I can't see a mining GPU doing that differently enough to break things.
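
For what it's worth, the interrogation step is usually no more exotic than this sketch (assumes PyCUDA is installed; the branching on compute capability is my own illustration):

```python
# Sketch of "interrogating the GPU" -- assumes PyCUDA and a CUDA driver.
import pycuda.driver as cuda

cuda.init()
dev = cuda.Device(0)
major, minor = dev.compute_capability()
print(dev.name(), f"compute capability {major}.{minor},",
      f"{dev.total_memory() / 2**30:.1f} GiB")

# Software typically branches on compute capability, so a CMP die reporting
# the same capability as its GeForce sibling would take the same code path.
kernel_variant = "ampere_optimised" if (major, minor) >= (8, 6) else "generic"
```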

 

Also, I don't see anything suggesting these are reject graphics GPUs. It might be a logical conclusion that some of them are dies with defects in the video output hardware, but I'd expect them to be otherwise functionally complete.

 

Errors in mining might not be as bad as in other workloads, but there's certainly no desire for unstable hardware here either. From the limited information we have so far, I believe nvidia will release the cards configured similarly to gaming GPUs, and it is up to the miner to optimise them as far as they dare.

 

2 hours ago, leadeater said:

It's unlikely anyone will actually buy them for distributed computing, so it's also unlikely they'll get added to the whitelists of supported GPUs; they are a non-starter from that alone.

I can only say that most, if not all, projects I run don't use a hardware whitelist. If a "new" and unknown GPU appears, it generally works well enough with existing code, until such time as further updates are applied for even better performance.

 

Since they're not selling the mining cards in the open, I also don't expect anyone to buy them for this initially, but they could be interesting once the bubble bursts and mining operations sell off their hardware. Assuming they work as I expect, and they're at the right price, that would be an ideal secondary market for them.

 

2 hours ago, leadeater said:

I don't think it's that either. What I fear is that they are using application signing (most applications are already signed, games included) together with trusted publisher lists, so anything not trusted goes slow. That is a very simple way of targeting the workloads you do want to support without having to monitor every Eth mining tool and algorithm change. Nvidia can easily update such a list with each driver update, and these public keys rarely change anyway, so managing it this way wouldn't be a burden on Nvidia.

This method seems highly unlikely to me. There is no way nvidia would be able to keep up with all the games and variations possible. It makes more sense to detect how the hardware is utilised, with the risks and benefits that come with that.

 

2 hours ago, leadeater said:

Watch any HPC presentation from the last 10 years and everyone is saying performance is memory bound. BOINC is HPC, as in it actually runs those types of workloads, just without any use of MPI and optimised for single-node runs.

My personal pet project is prime number finding, and that is more HPC-like than most workloads. You might have seen in other threads that I consider Prime95 representative of a load I optimise all my systems to run continuously, and that I consider pretty much every Intel quad-core or higher CPU released since Sandy Bridge to be massively RAM bandwidth limited (and AMD since Zen 2, when they finally caught up). I also care a lot about FP64 performance, which is traditional HPC territory. I don't suffer so much from this on the GPU side, since the work available on GPU is more limited than on CPU due to the difficulty of scaling software for that task. One of the problems there is lack of bandwidth relative to the execution potential. But even then, I know the balance in that use case is nowhere near as extreme as it is for mining.

 

On the recent Navi GPUs, there was much talk about whether the bigger on-GPU cache would help boost compute applications. I'm not aware of anyone having reached a conclusion on that yet.



12 minutes ago, porina said:

Also, I don't see anything suggesting these are reject graphics GPUs.

nvidia themselves said these are reject dies, so it won't affect geforce card production



15 minutes ago, Moonzy said:

nvidia themselves said these are reject dies, so it won't affect geforce card production

Got a link to that? I must have missed it.



2 minutes ago, porina said:

Got a link to that? I must have missed it.

around 4:30

 

pretty sure it's been mentioned before this video too, but i don't think i've seen sources for it



2 minutes ago, Moonzy said:

around 4:30

 

pretty sure it's been mentioned before this video too, but i don't think i've seen sources for it

I'm not giving that video another view. Where does nvidia say it? Not Linus saying nvidia said it?



1 minute ago, porina said:

I'm not giving that video another view. Where does nvidia say it? Not Linus saying nvidia said it?

https://blogs.nvidia.com/blog/2021/02/18/geforce-cmp/

 

Quote

CMP products — which don’t do graphics — are sold through authorized partners and optimized for the best mining performance and efficiency. They don’t meet the specifications required of a GeForce GPU and, thus, don’t impact the availability of GeForce GPUs to gamers.

 



4 minutes ago, Moonzy said:

Thanks, I did indeed miss that line, even though I had seen that page before.

 

I still think it doesn't necessarily mean what some take it to mean, but given we're days away from the 3060 launch, let's see how it goes. At this point I'm curious enough to buy one, if it's available and at the price I think it should be relative to the 3060 Ti.



1 minute ago, porina said:

Thanks, I did indeed miss that line, even though I had seen that page before.

 

I still think it doesn't necessarily mean what some take it to mean, but given we're days away from the 3060 launch, let's see how it goes. At this point I'm curious enough to buy one, if it's available and at the price I think it should be relative to the 3060 Ti.

i have low hopes that i can obtain one that is remotely near MSRP since retailers in my region like to price things however they wish

but, i might grab one and have fun with it still



2 minutes ago, Moonzy said:

i have low hopes that i can obtain one that is remotely near MSRP since retailers in my region like to price things however they wish

but, i might grab one and have fun with it still

Fortunately for me it is not a "need" so I can afford to pass if the pricing is above expectations. Good luck to any who go for it, as long as it doesn't get in my way 😄 



1 hour ago, porina said:

 

Also, I don't see anything suggesting these are reject graphics GPUs.

Nvidia literally said CMP cards will be sourced from silicon that fails to meet their GeForce quality standards; that's their claim as to how these CMP cards will not affect GeForce supply.

 

1 hour ago, porina said:

CUDA code interrogates the GPU and the software adapts itself as best it can to the hardware level returned.

It really is a lot more complicated than that; CUDA itself is huge and there's a lot to it.

 

1 hour ago, porina said:

I can only say most if not all projects I run don't run on a hardware whitelist. If a "new" and unknown GPU appears it generally works well enough with existing code, until such time further updates might be applied for even better performance.

Well, BOINC and F@H are not like that, because they are tools for science and need to ensure correct computational output. A quick search on F@H about GPU overclocking shows just how easily GPUs start producing errors and get kicked off the network. It's very sensitive to that.

 

1 hour ago, porina said:

On the recent Navi GPUs, there was much talk about whether the bigger on-GPU cache would help boost compute applications. I'm not aware of anyone having reached a conclusion on that yet.

Only Navi 2 has that; Navi doesn't. And when Navi released, even though the GPUs supported OpenCL like any other AMD GPU, it wasn't stable enough in practice to be used, so F@H outright did not support Navi GPUs for a long-ass time.

 

1 hour ago, porina said:

This method seems highly unlikely to me. There is no way nvidia would be able to keep up with all the games and variations possible. It makes more sense to detect how the hardware is utilised, with the risks and benefits that come with that.

It's actually extremely simple to do it this way. Applications are signed with a publisher key; that key doesn't change game to game, and it won't change much year to year either. This is already in place and used by AV software.

 

Here's an example: the Civilization V executable.

[screenshot: digital signature details for the Civilization V executable]

 

It's signed by Valve, and the certificate is valid for two years.

 

Granted, I don't think Nvidia would do it with exactly this method, as not every game is, or will get, signed, but it's a mechanism already in place and in use, and it requires very little effort.


20 minutes ago, leadeater said:

Well, BOINC and F@H are not like that, because they are tools for science and need to ensure correct computational output. A quick search on F@H about GPU overclocking shows just how easily GPUs start producing errors and get kicked off the network. It's very sensitive to that.

Agreed, in most of these cases results have to be correct to be useful. Actually, I can see some mining-like parallels in the distributed computing world, at least for those competing on points: more hardware means more work done, and you can try to optimise your hardware. The only difference is you don't get paid in cryptocurrency but in some credit score.

 

Anyway, that's beside the point. I can't speak for f@h, but certainly most BOINC projects I participate in, or have in the past, don't care much about the generation of the product. Assuming the new product works correctly, old code will work on it. To draw a parallel with gaming: we don't need to wait for each game to be updated to specifically support new hardware (assuming you don't use the new features), beyond getting the driver for it. That's not to say there aren't edge cases where new hardware does things differently enough that the expected output is not obtained, but that's the exception rather than the rule. The expectation is that it will work correctly, though not necessarily optimally. Similarly for CPUs.

 

One piece of software in a project I participate in has also implemented an in-code check to ensure results are computed correctly, so it can even tolerate hardware errors. I don't claim to understand it, but there is a mathematical proof that a calculation was done correctly, and if not, it can retry from the last valid checkpoint. That's not to say people should run unstable hardware, but at least it can still do useful work, and it gives the opportunity to flag the user to do something about it. I also recognise this won't apply to all code out there, but it happens to be possible in that case.
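
The control flow is roughly like this toy sketch; verification here is just redundant recomputation standing in for the real, much cheaper mathematical check, which I won't pretend to reproduce:

```python
# Toy checkpoint-verify-rollback loop, not any project's actual code.
def iterate(x, steps):
    for _ in range(steps):
        x = (x * x + 1) % (2**61 - 1)  # stand-in for the real computation
    return x

def run_with_checkpoints(x0, total_steps, interval=1000):
    x, done = x0, 0                    # (x, done) is the last good checkpoint
    while done < total_steps:
        step = min(interval, total_steps - done)
        y = iterate(x, step)
        if iterate(x, step) != y:      # mismatch would indicate a hardware error
            continue                   # retry from the last good checkpoint
        x, done = y, done + step       # commit a new checkpoint
    return x

print(run_with_checkpoints(3, 10_000))
```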

 

20 minutes ago, leadeater said:

Only Navi 2 has that

I did say recent. 

 

20 minutes ago, leadeater said:

It's actually extremely simple to do it this way.

You can also bet someone will set up a shell developer/publisher and hide mining code inside a game. Or do you want nvidia to vet every publisher and title produced?



kinda annoying tbh, it's a piece of hardware; they shouldn't put up invisible walls. instead they could just send almost all the supply to physical stores and make them enforce a limit of 1 per customer. like, what if i buy one then want to mine in the future? it hurts the value of the card. then again, even with this artificial limitation it will immediately sell out.

i think nvidia sees this as an excuse to start locking down gaming gpus, which could hurt consumers in the future when there is no shortage anymore. miners and gamers using the same gpus is not always bad; remember those really cheap rx 570s and 470s on ebay that were used for mining? when crypto crashes it helps gamers, because there's an oversupply of cards and gamers/people who do editing or whatever get really cheap used gpus.

nvidia is playing the long game and using this as an excuse to separate their customer base so used products aren't as popular and you have to buy new. think about it: the mining gpus will have no afterlife. you can't reuse them for other stuff, so when they are too inefficient to compete they get tossed instead of being used by someone on a budget. i'm no environmentalist or whatever, but it seems like it would create more e-waste and force people on a budget to buy new cards from nvidia because there aren't as many used gpus. apple does the same thing with right to repair; they act like they're so eco-friendly but won't let you fix your phone. nvidia is playing 4d chess, just wait a few years and you will see: controlling the market while looking like the good guy for helping gamers get gpus. they have probably wanted to do this for a while, and now they see the perfect opportunity to hurt the used market.


28 minutes ago, porina said:

You can also bet someone will set up a shell developer/publisher and hide mining code inside a game. Or do you want nvidia to vet every publisher and title produced?

They won't need to vet every title; that's not how signing works. You'd never have to do that. Any new publishing cert would be submitted to Nvidia and then included in the next driver update, or Nvidia could implement automatic signature updates the way AV engines do.

 

You understand application signing is already in use, right? You also understand it's very similar to SSL? Though with less stringent rules and processes in place: nothing blocks or warns when the code signature expires, as it has for my Civ 5.

 

Abuse is simple to deal with: blacklist that publisher's public key and job done.
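
It has the same shape as a certificate revocation list; here's a tiny sketch with made-up fingerprints, not an actual Nvidia interface:

```python
# Illustrative publisher blacklist, updated like AV definitions.
TRUSTED = {"valve_fp", "ea_fp", "shellco_fp"}
REVOKED = set()

def driver_update(newly_revoked):
    """A driver (or definitions) update only has to ship revoked keys."""
    REVOKED.update(newly_revoked)

def goes_fast(fp):
    return fp in TRUSTED and fp not in REVOKED

driver_update({"shellco_fp"})    # shell publisher caught hiding mining code
print(goes_fast("shellco_fp"))   # False: all its signed titles now go slow
```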


8 minutes ago, leadeater said:

Abuse is simple to deal with: blacklist that publisher's public key and job done.

So kinda like HDCP? I get the process, just not the value in this application. I still think it would be too easy for someone to sign up and hide mining code that might go undiscovered unless it is actively looked for.



10 minutes ago, porina said:

So kinda like HDCP? I get the process, just not the value in this application. I still think it would be too easy for someone to sign up and hide mining code that might go undiscovered unless it is actively looked for.

No, like SSL: it's the same architecture and uses PKI. Abuse is not widespread in SSL/PKI because there are only a few trusted signers, who have to maintain their reputation, and the trusted CAs here would be the same. And yeah, sure, it can be abused, like anything can be, but I don't think that's the biggest failure point; the real failure point is that not every game actually gets signed, and most older games are not, in the same way that not every website uses HTTPS.

 

But you could hide mining code in anything. How would these RTX 3060 cards handle mixed workloads? Is whatever ends up in place as easy to defeat as simply running another workload at the same time?

 

These issues are fundamentally why what Nvidia is trying to do is simply a bad idea; it's a very poor solution that attacks the problem indirectly. The way to get GPUs to gamers is to sell GPUs to gamers. It really is that simple, and it carries no risk to the functioning of the GPU at all.

