
NVIDIA Fires Shots at AMD’s 7nm Tech - Claims "Can Create Most Energy-efficient GPU in the World Anytime"

The reason they'll never do this is that they're too busy spending billions chasing pipe dreams like ray tracing, which no one asked for and, based on current implementation rates in software, no one wants either.

 

The phrase "the bigger you are the harder you fall" applies to Nvidia in this scenario. Just like Intel they got complacent with being market leader and instead of focusing on what mattered they decided to focus on a personal dream of their CEO. Guess what Nvidia, while you were wasting time developing RTX AMD caught you up and might just overtake you in the market where it really matters.

 

This is what happens when you chase the top 1% and ignore the other 99%. The top 1% might buy the most expensive cards but the other 99% are where the real money lies.

Main Rig:-

Ryzen 7 3800X | Asus ROG Strix X570-F Gaming | 16GB Team Group Dark Pro 3600MHz | Corsair MP600 1TB PCIe Gen 4 | Sapphire 5700 XT Pulse | Corsair H115i Platinum | WD Black 1TB | WD Green 4TB | EVGA SuperNOVA G3 650W | Asus TUF GT501 | Samsung C27HG70 1440p 144Hz HDR FreeSync 2 | Ubuntu 20.04.2 LTS |

 

Server:-

Intel NUC running Server 2019 + Synology DSM218+ with 2 x 4TB Toshiba NAS Ready HDDs (RAID0)


4 hours ago, ARikozuM said:

If Huang isn't going to do it, he shouldn't bring it up. Unless he's trying to be an American politician... 

 

[Image: "Mother of God" meme]

 

Spoiler:

Give us a single-slot, low-profile, fanless 750 Ti with the power to RTX everything at 2160p@60 Ultra. - The People

 

Many people are looking at this from too narrow a perspective. Sure, the Radeon VII might only trade blows with the RTX 2080 in gaming performance, but have you seen its performance in professional/data centre tasks? It can compete with Nvidia's highest end, and do so while costing significantly less and even consuming less power. The data centre is already a large part of Nvidia's revenue and is only going to keep growing. The only reason a CEO would ever speak this way about a competitor is as a last, frankly pathetic attempt to prevent customers from switching over to their products. Sure, Nvidia has its niche, but with the overall disappointment that ray tracing was, and the ever-increasing news that AMD GPUs can actually ray trace as well, Nvidia is shitting their pants. This is not to say that AMD will completely annihilate Nvidia, because that simply won't happen. Nvidia has way too many resources and, as stated, is still ahead of AMD in certain workloads at the current time. But change is coming. Otherwise you most definitely would not hear statements like this from the CEO of a multi-billion-dollar company.


32 minutes ago, Master Disaster said:

The reason they'll never do this is that they're too busy spending billions chasing pipe dreams like ray tracing, which no one asked for and, based on current implementation rates in software, no one wants either.

 

The phrase "the bigger you are the harder you fall" applies to Nvidia in this scenario. Just like Intel they got complacent with being market leader and instead of focusing on what mattered they decided to focus on a personal dream of their CEO. Guess what Nvidia, while you were wasting time developing RTX AMD caught you up and might just overtake you in the market where it really matters.

 

This is what happens when you chase the top 1% and ignore the other 99%. The top 1% might buy the most expensive cards but the other 99% are where the real money lies.

If ray tracing weren't worth chasing, Nvidia wouldn't be investing in it, and apparently AMD wants to add ray tracing to their cards as well.

The implementation is slow because it's new, and game publishers care more about money than about implementing a feature that significantly improves visual quality. The real money is in the top 1%, in the top-tier cards and in datacenter applications.


Let's see: in gaming performance, the 7nm Radeon VII is comparable to the 1080 Ti released nearly two years earlier, and still doesn't beat it on power consumption. Am I missing something here?

 

I'll grant that the VII has the potential to do really well in some other specific tasks, and in those cases you can certainly follow the performance. On a similar note, for compute applications I did see Turing take a good step ahead of Pascal: relative to gaming performance, the compute is both faster and less power-hungry.
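To make the "performance per watt" framing concrete, here's a rough back-of-the-envelope sketch in Python. The card names and figures below are purely illustrative placeholders, not benchmark data:

# Toy perf-per-watt comparison with made-up numbers (not benchmarks).
cards = {
    "hypothetical_card_A": {"avg_fps": 100, "board_power_w": 250},
    "hypothetical_card_B": {"avg_fps": 100, "board_power_w": 300},
}

for name, c in cards.items():
    efficiency = c["avg_fps"] / c["board_power_w"]  # frames per second per watt
    print(f"{name}: {efficiency:.3f} fps/W")

# Equal performance at higher power draw means lower efficiency,
# even though neither card is "low power" in absolute terms.

The point is simply that efficiency is a ratio, so a card can lose on it even while matching the competition's raw performance.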

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


28 minutes ago, Master Disaster said:

The reason they'll never do this is that they're too busy spending billions chasing pipe dreams like ray tracing, which no one asked for and, based on current implementation rates in software, no one wants either.

 

The phrase "the bigger you are the harder you fall" applies to Nvidia in this scenario. Just like Intel they got complacent with being market leader and instead of focusing on what mattered they decided to focus on a personal dream of their CEO. Guess what Nvidia, while you were wasting time developing RTX AMD caught you up and might just overtake you in the market where it really matters.

 

This is what happens when you chase the top 1% and ignore the other 99%. The top 1% might buy the most expensive cards but the other 99% are where the real money lies.

Everyone wants ray tracing; it has been the "holy grail" of graphics and effects design for decades. The adoption rate is slow because the performance isn't there right now, nor are the sales. Also, you do realize that both companies have developed professional products that handle ray tracing for movie studios, right?


3 minutes ago, porina said:

Let's see: in gaming performance, the 7nm Radeon VII is comparable to the 1080 Ti released nearly two years earlier, and still doesn't beat it on power consumption. Am I missing something here?

 

I'll grant that the VII has the potential to do really well in some other specific tasks, and in those cases you can certainly follow the performance. On a similar note, for compute applications I did see Turing take a good step ahead of Pascal: relative to gaming performance, the compute is both faster and less power-hungry.

I still have to wonder how much difference there is between an architecture shrunk to 7nm and an architecture designed for 7nm. How much difference, if any at all, I don't know, but that's the main area I'm interested in.


11 minutes ago, Blademaster91 said:

If ray tracing weren't worth chasing, Nvidia wouldn't be investing in it, and apparently AMD wants to add ray tracing to their cards as well.

The implementation is slow because it's new, and game publishers care more about money than about implementing a feature that significantly improves visual quality. The real money is in the top 1%, in the top-tier cards and in datacenter applications.

No, the real money is in the mid-range cards that the majority buys, not the top 1% of cards.


2 hours ago, RejZoR said:

There was one point in my ownership of a GTX 1080 Ti where I had so many dumb problems with it that I was "this" close to selling the damn thing and buying a Vega 64. NVIDIA, stfu and sit down, you annoying, bragging little shit. This is one of the reasons I don't like NVIDIA, and especially its founder, who acts like a little kid bragging about how his dad or brother can kick anyone's ass.

 

Great, so you have the most power-efficient GPU and the fastest GPU at that. Has anyone looked at the NVIDIA Control Panel recently and said with an honest face, "yup, this is modern"? The damn thing looks the same as it did back in 2004 and it's absolute shit. Organized like a turd, all flickering and twitching, selections jerking around when you change anything, and you need the fat, useless NVIDIA "Experience" to get some extra, mostly useless functionality that doesn't even work in 99% of stuff (like that custom shading thing that I couldn't get to work in any damn game). How about you shut up and improve your garbage software instead, eh, NVIDIA?

 

I was setting up an AMD system for a relative, and boy, the AMD Crimson control panel is such a sublime experience. Beautiful, fast, responsive, well arranged, and the new OSD feature is freaking amazing: we could monitor the GPU clock under heavy load, see thermals and the framerate, all without having to touch the MSI Afterburner installer. No fiddling with extra bloated software; it's part of the drivers.

I've never had that problem with the Nvidia Control Panel. It could definitely use an update, although the only time I need to use it is when cleaning out drivers with DDU and reapplying the resolution and color settings. If AMD were the best, they would be constantly bragging too; they already overhype everything, then at release it's the second-best product, plus the 7nm marketing despite higher power consumption than Nvidia's 12nm parts. Yeah, the AMD control panel looks nice, but the ability to tweak easily is almost a necessity, especially undervolting if you got a card with a bad cooler. I personally don't touch apps like MSI Afterburner unless I'm benchmarking a card, since Nvidia took most of the fun out of overclocking, and it would be nice if AMD's drivers were actually optimized at release instead of buying a card and then waiting for improvements.


 

18 minutes ago, Blademaster91 said:

If ray tracing weren't worth chasing, Nvidia wouldn't be investing in it, and apparently AMD wants to add ray tracing to their cards as well.

The implementation is slow because it's new, and game publishers care more about money than about implementing a feature that significantly improves visual quality. The real money is in the top 1%, in the top-tier cards and in datacenter applications.

When it comes to the consumer market, no. The top-end cards likely produce the most profit per unit, but the vast majority of sales are in the mid-range, making that the biggest money maker on the consumer side. Datacenter is a good money earner, but the consumer sector has routinely brought in more revenue and profit for Nvidia.


53 minutes ago, Master Disaster said:

The reason they'll never do this is that they're too busy spending billions chasing pipe dreams like ray tracing, which no one asked for and, based on current implementation rates in software, no one wants either.

 

The phrase "the bigger you are the harder you fall" applies to Nvidia in this scenario. Just like Intel they got complacent with being market leader and instead of focusing on what mattered they decided to focus on a personal dream of their CEO. Guess what Nvidia, while you were wasting time developing RTX AMD caught you up and might just overtake you in the market where it really matters.

 

This is what happens when you chase the top 1% and ignore the other 99%. The top 1% might buy the most expensive cards but the other 99% are where the real money lies.

 

Ray tracing has better uptake at the moment than DX12 did in the same timeframe after release. Five months after DX12 hit, there were two games that supported it. One was an indie title, the other was developed by Microsoft. There were no titles from professional dev studios with DX12 support that weren't associated with Microsoft until seven months after DX12 released.

 

Three third-party pro studio titles after five months is a massive level of uptake by comparison.


Turing also has some major issues though, NVIDIA. Here are just a few... The 2080 Ti, 2080 and 2070 (and even the 2060 to some extent) are like $100 more expensive than they should be. The 2060 having 6GB of VRAM and the 2080 having 8GB is a joke. The Titan RTX makes no sense whatsoever to anybody. The GTX 1660 gets beaten by Polaris, a LAST GEN card, in some instances. Ray tracing performs awfully. DLSS looks like Vaseline has been smeared on the display.

 

Other than the NVENC encoder, I can't think of anything in Turing that you could say is actually any good.

 

30% more performance for 60% more price. Anyone interested?
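For what it's worth, you can put a number on that value proposition; a quick Python back-of-the-envelope using the round figures above (the poster's numbers, not measured data):

perf_gain = 1.30   # "30% more performance"
price_gain = 1.60  # "60% more price"
value_ratio = perf_gain / price_gain
print(f"Performance per dollar vs. last gen: {value_ratio:.2f}x")  # ~0.81x, i.e. roughly 19% worse value

So on those numbers, buyers would be getting about a fifth less performance per dollar than the previous generation.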


So why don't they?

Intel i7 5820K (4.5 GHz) | MSI X99A MPower | 32GB Kingston HyperX Fury 2666MHz | Asus ROG STRIX GTX 1080 Ti OC | Samsung 951 M.2 NVMe 512GB | Crucial MX200 1000GB | Western Digital Caviar Black 2000GB | Noctua NH-D15 | Fractal Define R5 | Seasonic 860 Platinum | Logitech G910 | Sennheiser 599 | Blue Yeti | Logitech G502

 

Nikon D500 | Nikon 300mm f/4 PF  | Nikon 200-500 f/5.6 | Nikon 50mm f/1.8 | Tamron 70-210 f/4 VCII | Sigma 10-20 f/3.5 | Nikon 17-55 f/2.8 | Tamron 90mm F2.8 SP Di VC USD Macro | Neewer 750II


Someone's going to jail one of these days for stock manipulation with bold claims that can't be backed up. All it takes is a journalist challenging him a bit and he'll say stupid things he shouldn't.


3 hours ago, Humbug said:

LOL true. That would not be efficiency though.

 

Efficiency does not mean low power. It means performance per watt, getting work done for less power...

Fine, it runs on wind power from your case fan, and so draws 0W from the outlet. Will clarify.


27 minutes ago, GoodBytes said:

Fine, it runs on wind power from your case fan, and so draws 0W from the outlet. Will clarify.

I'm not shelling out for an RTX card until it runs off quantum field fluctuations; dat fan power doesn't come for free, and wind power will impede cooling.


This ain't really news to me, honestly. Both Nvidia and Intel had years of barely any competition from AMD to fine-tune their products.

There's also the fact that AMD is competing on two fronts, CPUs and GPUs, while Nvidia only has to compete on a single side of the hardware "war".

 

Funny how this sort of news from Nvidia, bashing AMD, always comes around whenever AMD is about to release a new product, eh...

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


I would love a 1060-like card that draws like 50 watts.

 

SFF enthusiasts unite!

Fan Comparisons          F@H          PCPartPicker         Analysis of Market Trends (Coming soon? Never? Who knows!)

Designing a mITX case. Working on aluminum prototypes.

Open for intern / part-time. Good at maths, CAD and airflow stuff. Dabbled with Python.

Please fill out this form! It helps a ton! https://linustechtips.com/main/topic/841400-the-poll-to-end-all-polls-poll/


8 hours ago, BiG StroOnZ said:

"What makes us special is we can create the most energy-efficient GPU in the world at anytime. And we should use the most affordable technology. Look at Turing. The energy efficiency is so good even compared to 'somebody else’s' 7nm."

Bring me a worthy upgrade from my GTX 1060 3GB that doesn't require more than a 6-pin.

 


mechanical keyboard switches aficionado & hi-fi audio enthusiast

switch reviews  how i lube mx-style keyboard switches


3 hours ago, leadeater said:

I still have to wonder how much difference there is between an architecture shrunk to 7nm and an architecture designed for 7nm. How much difference, if any at all, I don't know, but that's the main area I'm interested in.

Considering how much cooperation it took to get 7nm parts out, my guess is that the differences are quite small, as they would have had to change quite a bit to get it to work. That said, Vega 20 was the first 7nm part made by AMD, so there's a good chance that had they done it later it would have turned out better, thanks to the experience they gained from making it.

3 hours ago, CarlBar said:

 

Ray tracing has better uptake at the moment than DX12 did in the same timeframe after release. Five months after DX12 hit, there were two games that supported it. One was an indie title, the other was developed by Microsoft. There were no titles from professional dev studios with DX12 support that weren't associated with Microsoft until seven months after DX12 released.

 

Three third-party pro studio titles after five months is a massive level of uptake by comparison.

You really can't compare the two. DX12 takes many times more work to implement: they need to change the base of the render engine for the game to take advantage of it, while ray tracing is just another effect on top of the render engine.
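A toy sketch of that structural point, in Python; this is purely illustrative pseudo-structure (no real engine or API code), just to show why an opt-in effect is less invasive than an API port:

def rasterize_frame(scene):
    # Existing render path stays untouched.
    return f"rasterized {scene}"

def ray_traced_reflections(frame):
    # Optional extra pass layered on top of the existing output.
    return frame + " + RT reflections"

def render(scene, rt_enabled=False):
    frame = rasterize_frame(scene)             # unchanged base engine
    if rt_enabled:
        frame = ray_traced_reflections(frame)  # opt-in effect on top
    return frame

print(render("test_scene", rt_enabled=True))

Porting to DX12, by contrast, means rewriting the equivalent of rasterize_frame() itself (resource management, command submission, synchronization), which is why the uptake comparison is apples to oranges.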


Now I'm really curious as to what both companies are cooking up. 

The Workhorse (AMD-powered custom desktop)

CPU: AMD Ryzen 7 3700X | GPU: MSI X Trio GeForce RTX 2070S | RAM: XPG Spectrix D60G 32GB DDR4-3200 | Storage: 512GB XPG SX8200P + 2TB 7200RPM Seagate Barracuda Compute | OS: Microsoft Windows 10 Pro

 

The Portable Workstation (Apple MacBook Pro 16" 2021)

SoC: Apple M1 Max (8+2 core CPU w/ 32-core GPU) | RAM: 32GB unified LPDDR5 | Storage: 1TB PCIe Gen4 SSD | OS: macOS Monterey

 

The Communicator (Apple iPhone 13 Pro)

SoC: Apple A15 Bionic | RAM: 6GB LPDDR4X | Storage: 128GB internal w/ NVMe controller | Display: 6.1" 2532x1170 "Super Retina XDR" OLED with VRR at up to 120Hz | OS: iOS 15.1


9 hours ago, fasauceome said:

Huang knows what AMD has up their sleeves about as much as we do, so these lofty claims are easy to ignore.

Yes, and thus he is afraid of that and takes shots at AMD preemptively.

 

The "nVidia believers" will eat it and go to Forums, starting flamewars because of that.

 

 

In other words, it means that AMD concentrated totally on Navi, as they saw that they were behind, and only did what they had to in order to somewhat compete.

 

And Jensen probably knows a good bit more about Navi, is shitting his pants, and so he's rambling about AMD, because if AMD is good, that means less money for him.

 

But hey, once Intel brings out their GPU technology, it's pretty much over for them...

"Hell is full of good meanings, but Heaven is full of good works"


1 minute ago, Jack_of_all_Trades said:

This last sentence makes me wonder if the whole post was sarcastic.

No, it is not.

If Intel doesn't totally botch their GPU lineup and has a somewhat reasonable performance/price ratio, they will enter the GPU market very strongly because they have "brand recognition", which is what prevented AMD from getting a decent share in recent years, even though their products were pretty good.

 

Oh, and also the media bashing from Tomshardware and co.

 

That won't happen with Intel, so Intel will get hyped by the media, as some of them are Intel fans anyway.

 

So the conclusion is that nVidia has a problem in the consumer and workstation markets.

 

Especially since AMD already has implemented Infinity Fabric in their GPUs and will continue to do so.

Intel will have something Similar.

 

For us consumers that's not important, but for the data centre it is. So for best performance, you combine an Intel CPU with an Intel GPU, and an AMD CPU with an AMD GPU, due to the non-standard interfaces.

 

The Conclusion is:

If AMD gets a foot in the market with their products (now and going forward) and Intel also comes in with a very strong product, it's over for them.


There is no real market left for nVidia.

 

ESPECIALLY since they fucked over all their partners!

They are sick of them; if you treat your partners the way nVidia does, it's safe to assume that everyone would love to drop them like a hot potato given the chance.

 

And Intel entering the market with a strong card, plus AMD getting strong and competitive again, is exactly such a chance for nVidia's partners.

Especially if AMD continues to gain brand recognition with Ryzen...

"Hell is full of good meanings, but Heaven is full of good works"


4 hours ago, Sampsy said:

Hah this is a great way for Nvidia to piss off their datacenter customers who are presumably asking for such a product. 

Is there anyone in the world that nVidia hasn't pissed off yet??


I mean, they already pissed off Sony and Microsoft. Those are two big companies that might not touch them any time soon (in consoles), or ever again...

"Hell is full of good meanings, but Heaven is full of good works"


This topic is now closed to further replies.

