
Nvidia CEO Says Intel's Test Chip Results For Next-Gen Process Are Good

3 hours ago, Fasterthannothing said:

They aren't really in it for the gamers anymore. 

I see this sentence repeated everywhere, but I don't understand why.

Why did so many people suddenly start saying the same thing?

Was it because they recently announced their earnings and the data center segment has grown a lot? Because their gaming segment is still massive (about 33% of their revenue), which is actually a larger share of total revenue than gaming is for AMD (around 29%).


47 minutes ago, LAwLz said:

I see this sentence repeated everywhere, but I don't understand why.

Why did so many people suddenly start saying the same thing?

Was it because they recently announced their earnings and the data center segment has grown a lot? Because their gaming segment is still massive (about 33% of their revenue), which is actually a larger share of total revenue than gaming is for AMD (around 29%).

Maybe because everyone can see the writing on the wall. They don't need our money anymore. Yes, we still give them 33%, but in reality it's shrinking every week. They still don't have competition from AMD in the high-end card space, and they're barely trying anymore. They literally just release cards for gamers at this point to subsidize bad yields on their business-grade stuff.


14 hours ago, Fasterthannothing said:

Or maybe they aren't actually going to be competition at all. I actually foresee Intel doing gaming GPUs and Nvidia dropping them completely in the future. They aren't really in it for the gamers anymore. 

I doubt Nvidia would drop the consumer market entirely; they have way too much market share to just give that up. I would, however, expect Nvidia to move to full vertical integration and force out all the AIBs, only making their own FE GPUs, because they seem to be trying really hard to be like Apple: marketing themselves as the "premium" brand at premium prices and insisting you need the software features they charge extra for.

9 hours ago, Fasterthannothing said:

Maybe because everyone can see the writing on the wall. They don't need our money anymore. Yes, we still give them 33%, but in reality it's shrinking every week. They still don't have competition from AMD in the high-end card space, and they're barely trying anymore. They literally just release cards for gamers at this point to subsidize bad yields on their business-grade stuff.

I'm not surprised people don't see it given how strong the Nvidia mindshare is, but it's pretty clear Nvidia hasn't focused on the gaming market since the RTX 20 series. They pushed crypto hard with the 30 series, didn't drop 30 series pricing enough after the crypto crash, and increased prices for every RTX 40 series card while dropping performance-per-dollar a whole tier, except for the 4090. Nvidia clearly doesn't care when they do things like trying to pass off the 4070 Ti as a "4080 12GB", or selling a 4060 Ti for $400 that doesn't even beat the 3060 Ti by enough to be worth buying.

As for AMD, I think their cards are fine, although they need to stop screwing up launches, especially with pricing. AMD needs to launch their midrange cards, and I think it's stupid they don't launch those first since midrange is what usually sells the most.


7 hours ago, LAwLz said:

I see this sentence repeated everywhere, but I don't understand why.

Why did so many people suddenly start saying the same thing?

Was it because they recently announced their earnings and the data center segment has grown a lot? Because their gaming segment is still massive (about 33% of their revenue), which is actually a larger share of total revenue than gaming is for AMD (around 29%).

Not that they'd quit, but more that they could maybe hand-fist the gamer market and raise prices, like they do when pushing DLSS 2-3 and then going back on their big resolution targets.


On 5/30/2023 at 10:23 PM, Spotty said:

That's probably a good point. With Apple, though, they're doing a lot more than just designing SoCs, and I would argue that's actually a pretty small part of what Apple does and something they've only been doing very recently, whereas with Nvidia, designing processors is (almost) entirely what they do. If Nvidia ever tries to acquire ARM again, they could also help offset the cost by fabbing their own ARM CPUs as well as their GPUs.

 

With the USA trying to get more chip fabs built within its borders, it wouldn't surprise me if Nvidia took the chance at some fat grant money to build their own fab. Though obviously we're talking several years and generations down the line for any new fabrication facility to be up and running, so it clearly doesn't solve Nvidia's problem of finding a fab partner for their next GPU. Maybe it's something we'll see in 10 or 20 years' time.

 

I guess the biggest benefit of Nvidia not having their own fab is simply being able to switch to whoever is the most competitive at the time, both in terms of cost and the latest process nodes, and not getting stuck for possibly multiple product generations with whatever their own fab can deliver. TSMC 12nm -> Samsung 8nm -> TSMC 5nm -> Intel???

 


Wouldn't be surprised if the next Nvidia Jetson ends up using SMIC fabs.



7 hours ago, Quackers101 said:

Not that they'd quit, but more that they could maybe hand-fist the gamer market and raise prices, like they do when pushing DLSS 2-3 and then going back on their big resolution targets.

I don't really understand what you mean. Can you please explain in more detail?


9 hours ago, williamcll said:

Wouldn't be surprised if the next Nvidia Jetson ends up using SMIC fabs.

And risk the wrath of the US government, not to mention the IP theft risk they'd face?


On 5/30/2023 at 8:24 AM, VaderCraft_ said:

It won't make it any cheaper. I have a feeling they don't care about the gamers anymore because gaming barely makes any revenue in comparison to the workstation GPU market they have. FSR 3 is possibly going to dominate.

This is a myth; a LOT of Nvidia's gross revenue is from gaming. They absolutely do care if, for example, the 4060 Ti completely flops. Just because workstation and AI are doing so well doesn't mean they don't care about gaming profits.



On 5/31/2023 at 11:08 AM, LAwLz said:

I see this sentence repeated everywhere, but I don't understand why.

Why did so many people suddenly start saying the same thing?

Was it because they recently announced their earnings and the data center segment has grown a lot? Because their gaming segment is still massive (about 33% of their revenue), which is actually a larger share of total revenue than gaming is for AMD (around 29%).

I shout this to the clouds all the time. The fact that Nvidia's gaming division is doing so badly right now actually matters to them. Next gen is not going to be as shitty, I can promise you that; they messed up big time.



19 hours ago, WolframaticAlpha said:

And risk the wrath of the US government, not to mention that the IP theft risk they face?

Tell that to the Nvidia H800.

 

Besides, Moore Threads' architecture is built on PowerVR and ONLY started supporting DX11 last week. The future is RISC-V, and Nvidia knows that too.



Wait, so they currently make their GPUs on TSMC 5nm.

Intel's next process is "Intel 4", which technically speaking is actually a 7nm process. I guess overall it's still a better process than TSMC 5nm though?

Or are they just going to be making "some" chips with Intel, not necessarily their flagship products?

 

Or is this talking about Intel 20A? But I still feel like that's a good ways away since it's such a big change.


53 minutes ago, bcredeur97 said:

I guess overall it's still a better process than TSMC 5nm though? 

We don't know.

53 minutes ago, bcredeur97 said:

Or are they just going to be making "some" chips with Intel, not necessarily their flagship products?

We don't know this either, but Nvidia isn't known to care about that; they care more about how cheap the node is, as long as it meets their expectations.

53 minutes ago, bcredeur97 said:

Or is this talking about Intel 20A?

Once again, we don't know lol

But I guess that's the case; Intel 4 is due this year, and I guess any new Nvidia product would only come next year or so.



34 minutes ago, igormp said:

We don't know this either, but Nvidia isn't known to care about that; they care more about how cheap the node is, as long as it meets their expectations.

For a while now Nvidia has been making the largest die size possible on a given node; Nvidia even worked with TSMC years ago to develop process technologies and improvements to increase the maximum size possible. So, as you said, cost is pretty much the biggest factor for them: how big can I make the die, how many transistors can I get on it, and how much will it cost to do it? They didn't actually seek out TSMC 5nm aggressively as soon as possible; they waited until it was cost viable.

 

Similarly, Nvidia likely chose Samsung for consumer chips for the same kind of reasoning.
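Since the reasoning above boils down to "how big can I make the die and what will it cost", here is a minimal back-of-the-envelope sketch of that trade-off, not Nvidia's actual math: it uses the textbook dies-per-wafer approximation and a simple Poisson yield model, and the wafer price and defect density are made-up illustrative numbers rather than any foundry's real figures.

```python
import math

# Rough sketch of the die-size vs. cost trade-off, using the textbook
# dies-per-wafer approximation and a simple Poisson yield model.
# Wafer price and defect density are hypothetical illustrative numbers.

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> float:
    """Approximate number of whole die candidates on a round wafer."""
    radius = wafer_diameter_mm / 2
    return (math.pi * radius**2) / die_area_mm2 \
        - (math.pi * wafer_diameter_mm) / math.sqrt(2 * die_area_mm2)

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Fraction of dies expected to be defect-free."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

WAFER_COST = 15_000.0   # hypothetical dollars per wafer
DEFECT_DENSITY = 0.001  # hypothetical defects per mm^2

for die_area in (300, 450, 600):  # mm^2: mid-size die up to a near-reticle-limit die
    candidates = dies_per_wafer(die_area)
    good = candidates * poisson_yield(die_area, DEFECT_DENSITY)
    print(f"{die_area} mm^2: ~{candidates:.0f} candidates, ~{good:.0f} good dies, "
          f"~${WAFER_COST / good:,.0f} per good die")
```

Under this model the cost per good die grows much faster than linearly with die area, which is why "how cheap is the node" dominates the decision for near-reticle-limit GPUs.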


4 hours ago, bcredeur97 said:

Intel's next process is "Intel 4", which technically speaking is actually a 7nm process. I guess overall it's still a better process than TSMC 5nm though?

"Intel 4" is not really a 7nm process. The naming of process nodes has very little to do with the actual measurements these days. That's why Intel decided to rename all their nodes.

 

What happened was that TSMC and Samsung updated their node names to use smaller numbers even though they didn't necessarily make a big jump in reducing transistor size. Intel, meanwhile, kept adding + signs to their names because they thought that was more accurate. However, people not really in the know started making fun of them ("lol Intel 10nm++++"). The end result was that Intel's nodes with similar nm names to TSMC's were significantly better. As a result, Intel decided to rename their nodes to more accurately reflect the naming scheme used by TSMC.

 

Intel 4 is, on paper, roughly competitive with TSMC's 4nm-class nodes. That's why it's called Intel 4.

It's just that, for marketing purposes, TSMC started calling their process "3nm" when it maybe should be called 7nm.

The same applies to basically all nodes from the last couple of years. Intel used a more traditional naming scheme and added + signs for refinements, while companies like TSMC kept lowering their numbers even though they didn't necessarily match the transistor dimensions.

 

Comparing Intel's old names (like saying Intel 4 is a 7nm process) against TSMC's names is a big mistake. Intel's new names, such as Intel 4, are a far more accurate representation of performance and density than their old names.

Intel 4 is just as much a "7nm process" as TSMC 3nm (actually called TSMC N3) is a "7nm process".

I recommend forgetting the old Intel names and sticking to the new ones, because otherwise you get an inaccurate picture when comparing them against other fabs. 


1 hour ago, LAwLz said:

The naming of process nodes has very little to do with the actual measurements these days

If my memory is correct, that died at around 22nm, maybe even sooner. It's been a very long time since a node's name reflected the size of the transistors being made with it.

 

Quote

Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch. Nevertheless, the name convention has stuck and it's what the leading foundries call their nodes.

 

Quote

At the 45 nm process, Intel reached a gate length of 25 nm on a traditional planar transistor. At that node the gate length scaling effectively stalled; any further scaling to the gate length would produce less desirable results. Following the 32 nm process node, while other aspects of the transistor shrunk, the gate length was actually increased.

 

Quote

With the introduction of FinFET by Intel in their 22 nm process, the transistor density continued to increase all while the gate length remained more or less a constant. This is due to the properties of FinFET; for example the effective channel length is a function of the new fins (Weff = 2 * Hfin + Wfin). Due to how the transistor changed dramatically from how it used to be, the current naming scheme lost any meaning.

 

https://en.wikichip.org/wiki/technology_node
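To put a number on the quoted Weff relation, here's a tiny worked example; the fin height and width are hypothetical round numbers, not any foundry's published dimensions.

```python
# Tiny worked example of the FinFET relation quoted above: Weff = 2*Hfin + Wfin.
# Hfin and Wfin below are hypothetical round numbers, not real foundry specs.

def finfet_effective_width_nm(h_fin_nm: float, w_fin_nm: float, fins: int = 1) -> float:
    """Effective channel width contributed by `fins` fins."""
    return fins * (2 * h_fin_nm + w_fin_nm)

h_fin, w_fin = 42.0, 8.0                                # hypothetical fin height/width in nm
print(finfet_effective_width_nm(h_fin, w_fin))          # 92 nm from a single fin
print(finfet_effective_width_nm(h_fin, w_fin, fins=3))  # 276 nm by adding more fins

# The point: drive strength scales with this effective width, so transistors can
# keep improving and packing denser even while the drawn gate length stays put,
# which is why node names stopped tracking any physical dimension.
```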


1 hour ago, leadeater said:

If my memory is correct, that died at around 22nm, maybe even sooner. It's been a very long time since a node's name reflected the size of the transistors being made with it.

https://en.wikichip.org/wiki/technology_node

Yeah, I wasn't really trying to say that this just happened. 

I am pretty sure things started getting really "out of sync" with the introduction of 3D gates, which happened at 22nm.

 

My point is just that things have gotten more and more out of sync, and that today you can't judge something by the name.

 

I get the impression that a lot of people think Intel just rebranded things to deceive people and look more competitive because they got stuck on 10nm. In reality, it was more like TSMC and Samsung deceiving people and Intel got made fun of for using slightly more accurate names. Then Intel one day just went "alright, we'll use the naming everyone else uses". 


I feel like comparing transistor density is a good metric to see how well a node is doing (but that doesn't take stuff like power consumption and performance into account), and by this metric Intel's new naming scheme is way more on par with the other fabs.
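As a quick illustration of that metric: density is just transistor count divided by die area. The AD102 figures below are the commonly reported public numbers and should be treated as approximate.

```python
# Transistor density (MTr/mm^2) = transistor count / die area.
# AD102 figures are the commonly reported public numbers (~76.3B transistors,
# ~608.5 mm^2 on TSMC 4N); treat them as approximate.

def density_mtr_per_mm2(transistors_billion: float, die_area_mm2: float) -> float:
    return transistors_billion * 1_000 / die_area_mm2  # billions -> millions

print(f"AD102: ~{density_mtr_per_mm2(76.3, 608.5):.0f} MTr/mm^2")  # roughly 125 MTr/mm^2

# Caveat: logic, SRAM and analog/IO scale very differently, so a whole-chip
# number like this sits well below the peak logic density a node can advertise.
```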



6 hours ago, LAwLz said:

Yeah, I wasn't really trying to say that this just happened. 

I am pretty sure things started getting really "out of sync" with the introduction of 3D gates, which happened at 22nm.

 

My point is just that things have gotten more and more out of sync, and that today you can't judge something by the name.

 

I get the impression that a lot of people think Intel just rebranded things to deceive people and look more competitive because they got stuck on 10nm. In reality, it was more like TSMC and Samsung deceiving people and Intel got made fun of for using slightly more accurate names. Then Intel one day just went "alright, we'll use the naming everyone else uses". 

Thanks for clearing that up. It did seem to get a lot more confusing lately. 

Also I hate it, lol. Things aren't what they say they are. At least chips are still getting better... for now.


9 hours ago, igormp said:

I feel like comparing transistor density is a good metric to see how well a node is doing (but that doesn't take stuff like power consumption and performance into account), and by this metric Intel's new naming scheme is way more on par with the other fabs.

If you go just by performance, and for high-power chips, then Intel is and has been kicking ass on that metric. But that's sort of expected, since it's Intel making a node for Intel products, which primarily focus on that aspect, unlike the majority of use cases for, say, TSMC nodes, which target lower power or power efficiency.

 

Also, I personally don't care if a CPU uses 300W-400W or whatever so long as it's the fastest thing around, by a good amount though. If I want something lower power, options exist, but I'm fine with the other option existing too.


22 hours ago, LAwLz said:

Yeah, I wasn't really trying to say that this just happened. 

I am pretty sure things started getting really "out of sync" with the introduction of 3D gates, which happened at 22nm.

 

My point is just that things have gotten more and more out of sync, and that today you can't judge something by the name.

 

I get the impression that a lot of people think Intel just rebranded things to deceive people and look more competitive because they got stuck on 10nm. In reality, it was more like TSMC and Samsung deceiving people and Intel got made fun of for using slightly more accurate names. Then Intel one day just went "alright, we'll use the naming everyone else uses". 

No worries, I was just making a supplementary comment. I think people have too easily forgotten just how important Intel is and has been in the fabrication industry, and just how much of a benchmark they were, and still are. Sure, Intel badly missed deadlines and technology progression for a while, which was a key problem in many ways, but they are and were more competitive than they're given public credit for. Also, let's not forget that 14nm Intel CPUs were still leading their industry sectors against products on supposedly superior nodes, and even their "failed" 10nm products weren't that bad.

 

[Chart: transistor densities of 7nm-class nodes across foundries]

 

[Chart: transistor densities of 5nm-class nodes across foundries]

 

The charts above are labelled with the pre-change Intel node names. They just show how much of a joke "7nm vs 7nm" comparisons were going to be, and also how easily Intel may become the undisputed industry benchmark again.

 

I'd like to give a shout-out and a reminder about IBM too; none of these nodes would exist without them. They may not operate any production fabrication facilities anymore, but they are still right in there in the industry doing the advanced research and development.


Fabs are a precarious investment in my mind.

Only 15 years ago some people thought that 10nm might be the limit, and already consumers are about to get their fingers on devices using 3nm.

Think about that for a moment: a single atom occupies roughly 0.1 nm of space. At a certain point the ability to build things that small with such accuracy begins to fall off, making it less viable, and at that point a major breakthrough is required.

ARM will take over eventually, as it is poised to fill the interim gap while x86 architecture becomes too unwieldy and cumbersome.

Already the heat, power, and performance trade-offs are rapidly reaching a plateau.

In today's gaming rigs, outside of high-load workstations, rendering farms, etc., we already have pieces of silicon that can consume almost as much power as the electric kettle boiling water for our morning cup of tea.

I think Apple showed the way with how it coupled two Apple Silicon chips together, now copied by Nvidia, and Elon Musk's approach of multiple chips that lock together for expansion etc. is the near future.

I expect to be using not a single-chip solution in a PC but daughterboards with multiple chips fused in a manner where they work in concert and can deliver multiple levels of performance, without the need to get forever smaller to cram more transistors into the same physical area.

Sockets today are designed to drop in a CPU; what is to prevent a socket designed to drop in a board holding multiple CPUs working in concert? Why simply follow the same old pattern of more in the same space? big.LITTLE, stacking, cores and threads, etc.

Apple has not been suitable for playing games in any meaningful manner since the early days of Bungie with Marathon and Oni, but the M1 suddenly made games possible, even if at mediocre levels.

What happens when Apple drops 4 M3s fused together? Expensive, and you'd probably need to mortgage an arm and a leg, but performance could possibly be so high that the industry has no choice but to follow suit with their design options. Pure conjecture, so don't go ape on me... just random thoughts. I don't care about Intel, Apple, AMD, or Nvidia; I simply look forward to the future.


2 hours ago, johnno23 said:

ARM will take over eventually, as it is poised to fill the interim gap while x86 architecture becomes too unwieldy and cumbersome.

Thing is, it's really not, and those who are so sure of its demise probably shouldn't be. It's always a very surface-level type of statement.

 

ARM isn't better, it's different. Many ARM designs are crap in basically every measurable metric; some ARM designs are exceptionally good. ARM itself doesn't make a chip better than x86, it makes it different.

 

Repeat the above statement and apply it to RISC-V: different, not better.

 

Outside of Apple, the industry hasn't changed in that way and isn't changing, and there are little to no signs it will. x86 is as strong as it has ever been; the ARM market is also growing, but it's not taking away the x86 market.

 

You'd do well to heed the tale of "the sky is falling"; it wasn't. 😉


2 hours ago, johnno23 said:

Fabs are a precarious investment in my mind.

Only 15 years ago some people thought that 10nm might be the limit, and already consumers are about to get their fingers on devices using 3nm.

Think about that for a moment: a single atom occupies roughly 0.1 nm of space. At a certain point the ability to build things that small with such accuracy begins to fall off, making it less viable, and at that point a major breakthrough is required.

I mean, there are some things going for different processes, if you exclude the big quantum physics problems.
While not everything can do the same, it will be fun to see new methods brought into it all to save power and maybe be faster without just going "smaller". And then you've got stacking, given that going smaller brings more problems, if not changing "everything".


Opinions differ, but I am not convinced about x86's lifespan; I think it will lose ground and fade in a few more years.

Whether my assumptions are right or wrong is secondary; regardless of what transpires, technology never stands still.


12 minutes ago, johnno23 said:

Opinions differ, but I am not convinced about x86's lifespan; I think it will lose ground and fade in a few more years.

What are you actually basing this on? A lot of people have strong sentiment around it based simply and only on the fact that x86 is old. ARM is from 1985, 16-bit x86 is from 1978, and 32-bit x86 is from 1985; they are effectively as old as each other. ARM is getting filled with fixed-pipeline instruction extensions; it is not simple and lightweight anymore. It is moving ever closer to x86, just as x86's underlying execution engines are RISC-based and have been for a really long time.

 

So the question I think you should ask yourself is "what exactly is so unwieldy and bad about x86?"

 

x86 doesn't prevent multi-chip solutions, which are already being done anyway. It doesn't prevent that type of chip-fusing technology either; choices around these differ based on many factors. Anything you can do with an ARM chip you can do with an x86 chip; instruction sets aren't related to that at all.

 

The industry just doesn't like x86 unless the company is Intel or AMD; that is the primary reason everyone other than them uses and explores non-x86 options, not because there are problems with x86 so dire that it has to be moved away from. It has far more to do with business strategy and product development than with "x86". But make no mistake, Intel and AMD aren't going to let x86 go away; they have every reason to keep it exactly where it is.

