
Apple ARM is superior?

Hawkeye27
14 minutes ago, papajo said:

  

None of this makes sense.

The μArch of each individual CPU is built around its ISA. So there is no "only": the ISA is the whole deal; the μArch is just the practical approach to making the ISA work with circuitry/logic.

 

And @A51UK's argument is not irrelevant: you need to translate the source code to make it work on a different ISA. It's not as easy and trivial as converting MP3 to MP4 or something like that.

They are 0 years ahead of anyone else

 

They just have fast RAM; the cores are slower than x86 ones.

 

The thing that makes it snappy is Apple's closed ecosystem combined with the fast RAM.

 

If they are ahead in anything, it's that they are a two-trillion-dollar company and can produce stuff like this with nice polish, whereas e.g. ASUS or Toshiba couldn't just make an ARM chip with all the software development needed to polish it, and make it in huge quantities so they could sell it at a remotely viable price tag (not that the M1 is cheap, but it is far cheaper than what other competitors would need to charge for their ARM version).

The CPU performance is most certainly genuine. There's no trickery or illusion of fast cores. The x86 emulation demonstrations from earlier in the year, and the native performance, suggest the CPU performance exhibited is the real deal. Apple's cores are fast, no question. Perhaps not enough to keep pace with a modern x86 core with a GHz+ clock advantage, but they're certainly more than competitive in similar power envelopes.
 

There's hardly any point spending so much silicon and R&D on additional cache, fast memory, and custom storage controllers unless the CPU can actually benefit. The mere fact that so much effort was expended on keeping the CPU fed should be a testament to the raw performance the Firestorm cores can put out.
 

I don't think the ISA really matters much in terms of number-crunching performance. Most current x86 cores break x86 code down into internal RISC-like micro-ops anyway.

My eyes see the past…

My camera lens sees the present…


8 minutes ago, Zodiark1593 said:

The CPU performance is most certainly genuine. There’s no trickery or illusion of fast cores.

I didn't say they are showing false results. I said that people are focusing on things (e.g. Geekbench) that do not use 100% of the CPU and are heavily influenced by RAM speed (the M1 has 4266 MHz low-latency RAM, which is very fast).

 

Take for example https://browserbench.org/Speedometer2.0/ and open your task manager: you will see that it won't use more than about 5 to 10% of your CPU. It was used in multiple YouTube videos to compare the M1.

 

Now, you may not be able to upgrade your RAM, but restart the computer, go into your BIOS, change the RAM speed to something slower, and run the test again: you will get a lower score. And if you had the money and willingness to buy some fast 3800 CL14 RAM, or even 5 GHz RAM, your score would be much higher, even higher than the M1's.

 

There is no trickery in that, other than betting that most people wouldn't know enough to tell the difference and would just focus on the shiny big numbers.

 

In any test in which the cores get stressed to 100%, in other words a CPU-heavy test, it doesn't fare well.


46 minutes ago, papajo said:

  

None of this makes sense.

The μArch of each individual CPU is built around its ISA. So there is no "only": the ISA is the whole deal; the μArch is just the practical approach to making the ISA work with circuitry/logic.

 

And @A51UK's argument is not irrelevant: you need to translate the source code to make it work on a different ISA. It's not as easy and trivial as converting MP3 to MP4 or something like that.

They are 0 years ahead of anyone else

 

They just have fast RAM; the cores are slower than x86 ones.

 

The thing that makes it snappy is Apple's closed ecosystem combined with the fast RAM.

 

If they are ahead in anything, it's that they are a two-trillion-dollar company and can produce stuff like this with nice polish, whereas e.g. ASUS or Toshiba couldn't just make an ARM chip with all the software development needed to polish it, and make it in huge quantities so they could sell it at a remotely viable price tag (not that the M1 is cheap, but it is far cheaper than what other competitors would need to charge for their ARM version).

Muh closed ecosystem. Even when it's beneficial, people refuse to admit it coz Apple hating is hip yo.


57 minutes ago, papajo said:

The μArch of each individual CPU is built around its ISA. So there is no "only": the ISA is the whole deal; the μArch is just the practical approach to making the ISA work with circuitry/logic.

The ISA itself is just a facade over the internal µArch that makes it easier for compilers to target a device. If one were to design truly thinking only about the ISA, there would be several limitations. An example of how µArches matter far more is how Apple managed to implement x86's TSO on an ARM CPU, which has a weak memory model by default.

 

57 minutes ago, papajo said:

And @A51UK's argument is not irrelevant: you need to translate the source code to make it work on a different ISA. It's not as easy and trivial as converting MP3 to MP4 or something like that.

Source code has nothing to do with CISC or RISC. After compilation, the ISA opcodes are broken down or fused inside the CPU anyway so the µArch can do its job properly.

 

57 minutes ago, papajo said:

They are 0 years ahead of anyone else

They are 1 year and 4 months ahead of others, solely because they're using TSMC's 5nm while everyone else is using 7nm.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


18 hours ago, Pc6777 said:

They optimize that CPU for their OS and programs, so a lot of it is just optimization.

It's not. There are plenty of cross-platform benchmarks that show how fast the chip is. Combined with the platform-dependent benchmarks, you can say how fast it is. The notion that it's only fast because of integration is the dumbest take on the internet, held by people who don't understand how hardware and software engineering work. Sure, you can argue that FCPX is fast because of tight integration with hardware, but not that the entire chip is fast because of software. The reality is that Apple has no ARM competitors right now.

 

I respect people wanting to use open platforms and hate to see how Apple is moving towards a more closed ecosystem, but that does not mean the chip isn't fast and the work of Apple's Silicon Division isn't monumental.

 

Apple is competing with Intel using chips that run no hotter than spikes to 70°C, while Intel can't manage to stay below 95°C even with massive cooling systems.


1 hour ago, papajo said:

They are 0 years ahead of anyone else

So why do the 8CX based devices suck so badly? And why are Intel worried that a 'lifestyle company' is now designing better chips? 


38 minutes ago, descendency said:

So why do the 8CX based devices suck so badly? And why are Intel worried that a 'lifestyle company' is now designing better chips? 

It's x86, and nothing of what you said belongs to the realm of reality 😛


13 minutes ago, papajo said:

It's x86, and nothing of what you said belongs to the realm of reality 😛

The 8CX is ARM, and Pat Gelsinger (the now-CEO of Intel) did say something along the lines of what @descendency mentioned.



6 minutes ago, igormp said:

The 8CX is ARM, and Pat Gelsinger (the now CEO of intel) did say something along the lines of what @descendency  mentioned.

Oh, I didn't know Snapdragons are labeled like that; anyway, it's not an architecture name. I thought he was speaking about Intel devices, and I doubt anybody from Intel would say they are worried that Apple is going to make better chips; maybe they called Apple a "lifestyle company".


3 minutes ago, papajo said:

Oh, I didn't know Snapdragons are labeled like that; anyway, it's not an architecture name. I thought he was speaking about Intel devices, and I doubt anybody from Intel would say they are worried that Apple is going to make better chips; maybe they called Apple a "lifestyle company".

"Intel has to be better than ‘lifestyle company’ Apple at making CPUs, says new CEO", from: https://www.theverge.com/2021/1/15/22232554/intel-ceo-apple-lifestyle-company-cpus-comment

 

A quick google wouldn't hurt.



1 hour ago, igormp said:

The ISA itself is just a facade over the internal µArch that makes it easier for compilers to target a device. If one were to design truly thinking only about the ISA, there would be several limitations. An example of how µArches matter far more is how Apple managed to implement x86's TSO on an ARM CPU, which has a weak memory model by default.

µArches are based on an ISA; that's not a debate, that's how stuff works by definition: https://en.wikipedia.org/wiki/Microarchitecture

 

As for Apple, they merely use a hardware toggle when Rosetta 2 runs to manage memory differently (i.e. following x86 TSO), which gives it that ~70% efficiency.

 

 

1 hour ago, igormp said:

Source code has nothing to do with CISC or RISC. After being compiled, the ISA opcodes are broken down or fused anyway inside the CPU so the µArch can do its job properly.

Don't think you can get away with it by slapping jargon around. "The source has nothing to do with CISC or RISC" followed by "after being compiled"? Well, the source code depends on the compiler, and the compiler depends on the architecture.

 

1 hour ago, igormp said:

They are 1 year and 4 months ahead of others solely due to the fact that they're using TSMC's 5nm instead of the 7nm everyone else is using.

The fabrication process has nothing to do with "being ahead"; it only means "smaller". If a simple thing is "smaller", that doesn't mean it is ahead! Many chips are made on 5nm, Qualcomm's Snapdragon included, and many other chips usually found in routers and such.

 

It's not about 5nm by itself; it's about whether you are able to make something as complex as an x86 CPU on 5nm...

 

 


1 hour ago, descendency said:

It's not. There are plenty of cross-platform benchmarks that show how fast the chip is. Combined with the platform-dependent benchmarks, you can say how fast it is. The notion that it's only fast because of integration is the dumbest take on the internet, held by people who don't understand how hardware and software engineering work. Sure, you can argue that FCPX is fast because of tight integration with hardware, but not that the entire chip is fast because of software. The reality is that Apple has no ARM competitors right now.

 

I respect people wanting to use open platforms and hate to see how Apple is moving towards a more closed ecosystem, but that does not mean the chip isn't fast and the work of Apple's Silicon Division isn't monumental.

 

Apple is competing with Intel using chips that run no hotter than spikes to 70°C, while Intel can't manage to stay below 95°C even with massive cooling systems.

Maybe. I still would rather have freedom over speed any day. Desktops will move on at some point, or x86 will have to make some major breakthroughs to compete. Hopefully AMD can do something good within the next few generations that blows current chips out of the water and keeps desktops on top and relevant, and hopefully whatever comes next is fully backwards compatible with current and past instruction sets.


27 minutes ago, papajo said:

I really recommend you study a bit and look at how a CPU is actually designed; here is a really nice video that shows what I meant:

 

27 minutes ago, papajo said:

As for Apple, they merely use a hardware toggle when Rosetta 2 runs to manage memory differently (i.e. following x86 TSO), which gives it that ~70% efficiency.

I wouldn't say it's "merely" a toggle, since changing context and memory model at runtime is not an easy task. If it were that easy, Qualcomm would've already done it during their partnership with MS to get ARM-based Windows devices.

 

29 minutes ago, papajo said:

Don't think you can get away with it by slapping jargon around. "The source has nothing to do with CISC or RISC" followed by "after being compiled"? Well, the source code depends on the compiler, and the compiler depends on the architecture.

Source code is more often than not ISA-independent (that's the whole point of having a high-level language), and compilers target tons of ISAs. Some compilers don't even target an ISA directly; LLVM, for example, has its own IR.

And still, your point makes no sense, since it's as easy as setting a single flag when cross-compiling to another arch. You're totally moving the goalposts here.

 

34 minutes ago, papajo said:

The fabrication process has nothing to do with "being ahead"; it only means "smaller". If a simple thing is "smaller", that doesn't mean it is ahead! Many chips are made on 5nm, Qualcomm's Snapdragon included, and many other chips usually found in routers and such.

Lithography matters a great deal when it comes to the number of transistors you can pack into a given area, achievable clock speeds, and power consumption; saying it's irrelevant is dumb. A simple example is the performance Intel used to gain when moving the same arch to a new, smaller node.

 

Qualcomm got 5nm chips done like 2 months ago, and isn't even trying to compete against Apple's latest chips.

 

The best µArch achievable on a bad node would still result in a bad CPU in the end.

 

37 minutes ago, papajo said:

It's not about 5nm by itself; it's about whether you are able to make something as complex as an x86 CPU on 5nm...

Lithography has nothing to do with the ISA. Again, I recommend you do some proper reading before spreading misinformation to others.



12 minutes ago, Pc6777 said:

Maybe. I still would rather have freedom over speed any day. Desktops will move on at some point, or x86 will have to make some major breakthroughs to compete. Hopefully AMD can do something good within the next few generations that blows current chips out of the water and keeps desktops on top and relevant, and hopefully whatever comes next is fully backwards compatible with current and past instruction sets.

There are a bunch of things people want from computers, and they can't have all of them... yet. One thing that seems to have reached "more than necessary" for 90+% of people is CPU speed. This has also nearly happened with screens, I think. There are a few things that haven't gotten there yet, though; GPUs are one area. You can get more or less everything most people want out of a CPU in a laptop. Screens will, I think, be there within a year; GPUs, likely not so much. When that happens, there may be very little reason for most people to own a desktop. There will still be reasons for some, though: some professions will continue to want faster CPUs with more memory, and for that group the desktop will always be the correct system. Hardware Unboxed just did a video showing the level of bottleneck for given GPUs: a 1600 will apparently bottleneck a 5700 XT with no trouble, a 3070 is needed for a 1600, and anything newer needs a desktop 3090. GPUs aren't even close.

Not a pro, not even very good.  I’m just old and have time currently.  Assuming I know a lot about computers can be a mistake.

 

Life is like a bowl of chocolates: there are all these little crinkly paper cups everywhere.


35 minutes ago, igormp said:

I really recommend you to study a bit and look on how a CPU is actually designed, here is a really nice video that shows what I meant:

 

M8, there is no disputing this. I am not going to watch a video you think disputes it (maybe I'll watch it later, just out of curiosity). It is like saying the design of a house is not based on the available space and real estate.

 

You have an ISA, and figuring out how you put the bits and bobs together to make that ISA work is called designing the microarchitecture. Yes, one microarchitecture can differ from another, but all of that happens with the ISA as the base of the design.

 

35 minutes ago, igormp said:

I wouldn't say that it's "merely" a toggle, since changing context and memory model during runtime is not an easy task. If it were so easy, Qualcomm would've already done so during their partnership with MS to get ARM-based windows devices.

Hardware is much easier to design than specialized software in terms of budget, manpower, and the collaboration needed.

 

That's why the PS3, despite its superior Cell CPU, couldn't entice devs to work with it.

 

As for Apple, there is nothing special happening under the hood; the software is commendable, though.

 

35 minutes ago, igormp said:

Lithography has a great deal when it comes to the amount of transistors you can pack in a specific area, achievable clock speeds and power consumption, saying that it's irrelevant is dumb

I didn't say it is irrelevant in general. I said it is irrelevant in this context; when comparing similar architectures, it is relevant.

 

When comparing an ARM chip, with its simpler architecture, it is irrelevant, because of course it is easier to make it smaller. And I already mentioned that many other chips are made on 5nm fabrication. It's like saying a Snapdragon is 1 year ahead of x86, or that this IBM chip

[image: IBM's nail-sized 5nm chip]

which was made on 5nm (in 2017, btw, so 3 years before Apple 😛) is years ahead of AMD or Intel... that is dumb imho.


20 minutes ago, Bombastinator said:

There are a bunch of things people want from computers and they can’t have all of them.. yet.  One thing that seems to have achieved “more than necessary” for 90+% of people is CPU speed.  This has also nearly  happened with screens I think.  There are a few things that haven’t done that yet though.  GPUs is one area. You can get more or less everything most people want out of a cpu in a laptop.  Screens will I think be there within a year.  GPUs likely not so much.  When it comes though there may be very little reason to own a desktop for most people.  There will still be reason for some though.  Some professions will continue to want faster CPUs with more memory.  For this group the desktop will always be the correct system. Hardware unboxed just did a video where they showed the level of bottleneck for given GPUs a 1600 will apparently bottleneck a 5700xt with no trouble. A 3070 is needed for a 1600 anything newer needs a desktop 3090.  GPUs aren’t even close.

I will always want desktops because of modularity/internal devices; you can't do a lot with laptops in that regard. If they ever make fully modular DIY laptops, maybe I would jump ship.


4 minutes ago, papajo said:

You have an ISA, and figuring out how you put the bits and bobs together to make that ISA work is called designing the microarchitecture. Yes, one microarchitecture can differ from another, but all of that happens with the ISA as the base of the design.

The ISA is not the base; it's the facade over your actual µArch that lets your CPU work with current standards (mostly meaning compilers).

 

5 minutes ago, papajo said:

Hardware is much easier to design than specialized software in terms of budget, manpower, and the collaboration needed.

Ok, now I can't tell whether you're just trolling or really misinformed. Designing hardware is hard and requires formal verification, and the associated costs are way higher than for software projects. Just take a look at the salaries of any Verilog/HDL job against any software job.

 

8 minutes ago, papajo said:

That's why the PS3, despite its superior Cell CPU, couldn't entice devs to work with it.

It was not "superior"; it was different, and had coprocessors akin to the SIMD units found in current CPUs (AVX2 and whatnot), which were (and still are) hard to program for. I can't see how this relates to your previous point, though.

 

12 minutes ago, papajo said:

As for Apple, there is nothing special happening under the hood; the software is commendable, though.

It is nothing special, but it is something no other system has achieved, both in the CPU itself and in the system as a whole. The software is the thing that has nothing special about it, to be quite honest.

 

17 minutes ago, papajo said:

When comparing an ARM chip, with its simpler architecture, it is irrelevant, because of course it is easier to make it smaller

It's also pretty easy to make x86 smaller, hence all the x86-based microcontrollers around. The ISA has nothing to do with your die size; lithography, the µArch itself (mostly whether you cram tons of EUs in there), and caches are what really matter.

 

19 minutes ago, papajo said:

and I already mentioned that many other chips are made on 5nm fabrication; it's like saying a Snapdragon is 1 year ahead of x86, or that this IBM chip, which was made on 5nm (in 2017, btw, so 3 years before Apple 😛), is years ahead of AMD or Intel... that is dumb imho

It is ahead of other x86 CPUs in the sense that it has a more advanced node available. Also, comparing against a research design that never went to mass production isn't that fair.

 

 

I won't respond any further since it seems that you're just trolling.



One thing that I could see happening is Apple not putting in the effort to make a lot of different CPUs for low volume desktop Macs.

 

i.e., I could see some desktop Macs sharing a CPU with laptops, just with different cooling, clocks, and sustained performance (basically what we're already seeing with the M1 Mac Mini).

 

Keeping the total number of CPUs per generation to 3-4 tops. 

 

That is probably the main practical limit on Apple really going crazy with desktop CPUs, except for the highest-end desktops.


1 hour ago, Pc6777 said:

I will always want desktops because of modularity/internal devices; you can't do a lot with laptops in that regard. If they ever make fully modular DIY laptops, maybe I would jump ship.

They haven't for a long, long time, and it's pretty unlikely to come back as long as thin-and-light is considered an advantage. Modularity takes space. That desktop computers even got modularity was something of an accident. There are other problems with the laptop form factor: a screen attached to a keyboard is a lousy idea ergonomically.



1 hour ago, igormp said:

It is ahead of other x86 CPUs in the sense that it has a more advanced node available. Also, comparing against a research design that never went to mass production isn't that fair.

 

M8, do you read only what you would like to read and ignore everything else?

 

Qualcomm's Snapdragon 800-series and Huawei's Kirin 9000 are on 5nm as well, and there are other chips too... The simpler the chip, the easier it is to fabricate on a smaller lithography.

 

 

Nvidia's 3060 Ti GPU is on 7nm; does that mean it is ahead of Intel's CPUs?

 

As I said, smaller lithography in and of itself (google what that means, because it seems you do not understand) isn't something to brag about, nor the target; shrinking complex architectures such as Intel's and AMD's leading designs onto smaller fabrication nodes is what makes it remarkable.

 

Having said that, IBM uses the chips they made in their AI infrastructure.

 

1 hour ago, igormp said:

The ISA is not the base; it's the facade over your actual µArch that lets your CPU work with current standards (mostly meaning compilers).

 

There is no such thing as a "facade". Why do you think architectures are defined by their instruction set and not by their microarchitecture?

 

Intel and AMD are both x86 and have different microarchitectures, yet they are both in the same category (CISC).

 

M8, have you designed any sort of microarchitecture? You base it on the ISA you are trying to implement. This is not something to debate; you are just making an elementary mistake (which of course is not a problem; there is only a problem if you insist on it). That is all that is happening here.

 

1 hour ago, igormp said:

It was not "superior"; it was different, and had coprocessors akin to the SIMD units found in current CPUs (AVX2 and whatnot), which were (and still are) hard to program for. I can't see how this relates to your previous point, though.

It was superior: it had 175 GFLOPS of FP32 performance.

 

1 hour ago, igormp said:

It is nothing special, but it is something no other system has achieved, both in the CPU itself and in the system as a whole. The software is the thing that has nothing special about it, to be quite honest.

Because on its own it does nothing; it needs the software to be of any use. That is what I meant (among other things) when I said:

 

  

5 hours ago, papajo said:

If they are ahead in anything, it's that they are a two-trillion-dollar company and can produce stuff like this with nice polish, whereas e.g. ASUS or Toshiba couldn't just make an ARM chip with all the software development needed to polish it, and make it in huge quantities so they could sell it at a remotely viable price tag (not that the M1 is cheap, but it is far cheaper than what other competitors would need to charge for their ARM version).

 

2 hours ago, igormp said:

Lithography has nothing to do with the ISA. Again, I recommend you do some proper reading before spreading misinformation to others.

And I recommend you stop getting on my nerves by failing to understand elementary things while prompting me to "study" all the time 😛

 

Infinity Fabric, CCDs, 3D NAND (in the billions), and everything else present in a modern x86 CPU from AMD or Intel is hard to shrink down, whereas the basic switches of an ARM chip are easy; that's what I meant.


4 minutes ago, papajo said:

Nvidia's 3060 Ti GPU is on 7nm; does that mean it is ahead of Intel's CPUs?

Two things wrong with this statement.

1. Ampere is on Samsung's 8nm process.

2. You're comparing a CPU with a GPU. That's not how it works.

 

elephants


1 minute ago, FakeKGB said:

Two things wrong with this statement.

1. Ampere is on Samsung's 8nm process.

2. You're comparing a CPU with a GPU. That's not how it works.

 

Ampere has been done on Samsung 8nm, but it was originally designed for TSMC 7nm. I don't know where the TSMC booking stands, but if Nvidia bought some capacity years ago, it is very possible that a 3060 Ti could come out on TSMC 7nm.



3 minutes ago, FakeKGB said:

2. You're comparing a CPU with a GPU. That's not how it works.

That's my point 

 

3 minutes ago, FakeKGB said:

1. Ampere is on Samsung's 8nm process.

It's on 7nm.


Just now, papajo said:

That's my point 

 

It's on 7nm.

It could be, anyway. The 3080 (GA102) and 3070 (GA104) are Samsung 8nm. I don't know what the 3060 (GA106) is being run on; Nvidia could do either, though.



And just to understand why it is not worth mentioning:

 

this is a Core i9-10900K:

 

[image: Core i9-10900K die shot]

 

It's 9.2 × 22.4 mm (207 mm²); notice that about 60% of the die is CPU cores.

 

This is the Apple ARM chip used in the M1:

 

[image: Apple M1 die shot]

 

119 mm² 

 

Notice that less than 1/6th of the die is CPU cores. Also notice the flat surfaces, which are basically empty spaces or cells (so no complex circuitry).

 

Now, even if we assume that half the nm means half the physical size (which is not the case; it is much more complicated than that, but let's assume this best-case scenario),

 

the Apple cores are far less complex compared to the Intel ones. That was my point in saying it is not as easy to shrink them down, and thus the fact that they are produced on a 5nm node is irrelevant in terms of "how far ahead Apple is".


This topic is now closed to further replies.