
Arm introducing their new A76, G76 & V76 processors

WMGroomAK

Arm has brought out the details on their upcoming Cortex A76, Mali G76 and Mali V76 processors at their annual TechDay.  The new Cortex A76 is a clean-sheet Arm microarchitecture that they are touting as providing 'laptop-class performance with mobile efficiency'.  It will supposedly be built on TSMC's 7nm process and is claimed to deliver 35% more performance, a 40% increase in power efficiency and four times the machine-learning performance while clocking at 3 GHz, compared to the previous A75 at 2.8 GHz on a 10nm process.
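As a rough back-of-the-envelope check (my arithmetic, not an Arm figure), the 35% claim can be split into the part that comes from the higher clock and the part that must come from the microarchitecture itself:

#include <stdio.h>

int main(void) {
    /* Arm's claim: +35% performance at 3.0 GHz on 7nm vs. an A75 at 2.8 GHz on 10nm. */
    double perf_gain  = 1.35;
    double clock_gain = 3.0 / 2.8;              /* ~1.07x from frequency alone        */
    double ipc_gain   = perf_gain / clock_gain; /* remainder attributable to the uarch */

    printf("clock contribution: %.2fx\n", clock_gain); /* ~1.07x */
    printf("implied IPC gain:   %.2fx\n", ipc_gain);   /* ~1.26x */
    return 0;
}

In other words, if the headline numbers hold, roughly a quarter more work per clock would be coming from the new microarchitecture rather than from the process and frequency bump.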

 

Cortex A76:  https://www.anandtech.com/show/12785/arm-announces-cortexa76

Quote

The Cortex A76 is important for Arm from a design perspective as it represents a new start from a clean sheet. It’s rare for IP designs to be able to do this, as it represents a great resource and time investment, and if it weren’t for the Sophia design team taking over the steering wheel for the last two generations of products it wouldn’t have been reasonable to execute. The execution of the CPU design teams should be emphasised in particular as Arm claims this is the 5th generation “annual beat” product where the company delivers a new microarchitecture every new year. Think of it as an analogue to Intel’s past Tick-Tock strategy, but rather Tock-Tock-Tock for Arm with a steady CAGR (compound annual growth rate) of 20-25% every generation coming from µarch improvements.

 

So what is the Cortex A76? In Arm’s words, it’s a “laptop-class” performance processor with mobile efficiency. The vision of the A76 as a laptop-class processor had been emphasised throughout the TechDay presentation so it seems Arm is really taking advantage of the large performance boost of the IP to cater to new market segments such as the emerging “Always connected PCs” which Qualcomm is spearheading with their SoC platforms.

 

The Cortex A76 microarchitecture has been designed with high performance while maintaining power efficiency in mind. Starting from a clean sheet allowed the designers to remove bottlenecks throughout the design and to break previous microarchitectural limitations. The focus here was again maximum performance while remaining within energy efficiency that is fit for smartphones.

 

In broad metrics, what we’re promised in actual products using the A76 is as follows: a 35% performance increase alongside 40% improved power efficiency. We’ll also see a 4x improvement in machine learning workloads thanks to new optimisations in the ASIMD pipelines and how dot products are handled. These figures are baselined on A75 configurations running at 2.8GHz on 10nm processes while the A76 is projected by Arm to come in at 3GHz on 7nm TSMC based products.
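For context on the machine-learning claim, the ASIMD dot-product additions boil down to fusing a four-way int8 multiply-accumulate into each 32-bit lane. A minimal scalar C model of that operation (purely illustrative; this is not Arm's implementation, and real code would use the SIMD instructions or a library):

#include <stdint.h>

/* One 32-bit accumulator lane: multiply four int8 pairs and add them to acc.
 * A SIMD dot-product instruction performs this for every 32-bit lane of a
 * vector register in a single operation, which is why quantised int8
 * inference (GEMM, convolutions) benefits so much.                          */
static int32_t dot4_s8(int32_t acc, const int8_t a[4], const int8_t b[4])
{
    for (int i = 0; i < 4; i++)
        acc += (int32_t)a[i] * (int32_t)b[i];
    return acc;
}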

The Mali G76 GPU will supposedly be a significant improvement over the current G72, providing 30% more performance density and energy efficiency on the same process node, or a 50% increase in performance over current G72 devices when the G76 is built on TSMC's 7nm process.

 

Mali G76:  https://www.anandtech.com/show/12834/arm-announces-the-mali-g76-scaling-up-bifrost

Quote

Today Arm announces the follow-up to the G72 and the latest offspring in the Bifrost family: the Mali G76. The targets of the GPU IP should be pretty clear: improve performance, efficiency and area, and try to catch up with the competition as much as possible.

 

Overall what Arm promises for the next generation of SoCs using the G76 on a new TSMC 7nm process is a 50% increase in performance versus current generation devices.

 

In terms of apples-to-apples comparisons, we see three key metrics that are improved: A 30% improvement in performance density is the first one. What this means is that either for the same area, the new GPU will perform 30% better, or for the same performance, the vendor can shrink the GPU space on the SoC.

 

The new GPU promises a 30% microarchitectural efficiency improvement thanks to a consolidation of the functional blocks of the unit. Efficiency is particularly something Arm needs to focus on in regards to Mali as we’ve seen a few missteps over the last year or two and the competition from Qualcomm in the GPU and 3D gaming space is particularly fierce.

 

Finally, there’s a quoted 2.7x improvement for machine learning inferencing applications thanks to the inclusion of new dedicated 8-bit dot product instructions.
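The 30% performance-density claim can be read either way; here is a tiny sanity check of what it implies for area at constant performance (my arithmetic, not AnandTech's):

#include <stdio.h>

int main(void) {
    double density = 1.30; /* "30% better performance density" */
    printf("same area        -> %.0f%% more performance\n", (density - 1.0) * 100.0);       /* 30%  */
    printf("same performance -> %.0f%% less GPU area\n", (1.0 - 1.0 / density) * 100.0);    /* ~23% */
    return 0;
}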

The last of the '76' processors that Arm announced today is the Mali V76 video encode/decode processor, which as an 8-core design is intended to decode up to 8Kp60 and encode up to 8Kp30 video, providing approximately twice the throughput per core of the previous V61.  This processor will also handle most of the standard video codecs, with AV1 being the only exception.
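Those decode targets scale linearly with core count, which is easy to sanity-check by pixel rate (my arithmetic from the quoted figures, assuming the nominal 600 MHz configuration described in the article below):

#include <stdio.h>

int main(void) {
    /* Quoted V76 decode targets: 8 cores -> 8Kp60, 4 cores -> 4Kp120. */
    double px_8k = 7680.0 * 4320.0;   /* pixels per 8K UHD frame */
    double px_4k = 3840.0 * 2160.0;   /* pixels per 4K UHD frame */

    double per_core_8k = px_8k * 60.0  / 8.0;  /* pixels/s each core must handle */
    double per_core_4k = px_4k * 120.0 / 4.0;

    /* Both come out to ~249 Mpixel/s per core, i.e. the same per-core load. */
    printf("8Kp60 on 8 cores : %.0f Mpix/s per core\n", per_core_8k / 1e6);
    printf("4Kp120 on 4 cores: %.0f Mpix/s per core\n", per_core_4k / 1e6);
    return 0;
}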

 

Mali V76:  https://www.anandtech.com/show/12835/arm-announces-maliv76-video-processor-planning-for-the-8k-video-future

Quote

Overall the V76 brings a slew of changes to Arm’s video processor ecosystem. On the performance front the new video block offers twice the decode performance of the V61, and on the encode side Arm says that encoding quality has been improved by about 25%. Meanwhile on the features front, the latest block adds support for 10-bit H.264 encoding and decoding, the one major codec/format that wasn’t already present on the V61.

 

From a hardware perspective, Arm has retained their scalable core design, and the V76 is intended for designs ranging from 2 to 8 cores. Arm’s ambitions are very forward-looking given the longer timeframe between generations, and as a result, at a nominal frequency of 600MHz, an 8 core design is slated to be able to decode videos up to 8Kp60, and encode up to 8Kp30. Or for a smaller 4 core design, that becomes 4Kp120 decoding and 4Kp60 encoding. As previously stated, this is twice the throughput per core of the V61, meaning that at least at nominal frequencies, this is the first Arm video processor block suitable for 8K video (as well as high frame rate 4K).

...

Along with the greater resolution support, the other most notable addition to the new processor is support for 10-bit H.264 video. This format was oddly absent in the V61 – the processor supported 10-bit HEVC, but not H.264 – and at the time the company didn’t think it would be needed. The slow adoption of HEVC relative to the faster adoption of HDR has changed that however, so for the V76 both encode and decode support for the format is being included.

 

On that note, however, this processor will not include any support for the upcoming AV1 codec. While the bitstream specification for the eagerly anticipated codec was released a couple of months back, the timing was unfortunately after Arm had already completed the V76 RTL (never mind the fact that the specification isn’t closed yet). So it’s going to have to be the next video block after the V76 before Arm can include AV1 decode support.

...

Finally, while not a focus of their presentation, Arm also briefly commented on HDR support in our briefing. In concert with their display processor, the new video processor is currently able to handle HDR10 and HLG formatted HDR video. Meanwhile support for HDR10+ – which is HDR10 with support for dynamic metadata – is set to arrive in the future. This is an important distinction, as Arm’s display controller can’t support Dolby Vision, meaning that HDR10+ would be the only dynamic HDR format that Arm can support.

All said, this all looks really promising for some future Arm based systems.  I would love to see these coming out in the near future as home media center boxes or even as a decent mobile platform.


No SMT on the A76 ? Surprising considering what this core is aimed at ...


"laptop-class"

 

Meanwhile, the SD835 on windows 10 is at least 50% behind real "laptop class" CPUs, and with this being only 30-40% ish faster than the SD845's A75s which are 25% ish faster than the A73s in the SD835 I don't think this is "laptop class."

 

Cool but Apple is still way far ahead guys.


10 minutes ago, DocSwag said:

"laptop-class"

 

Meanwhile, the SD835 on windows 10 is at least 50% behind real "laptop class" CPUs, and with this being only 30-40% ish faster than the SD845's A75s which are 25% ish faster than the A73s in the SD835 I don't think this is "laptop class."

 

Cool but Apple is still way far ahead guys.

Some loads actually do see massive performance boosts though. It seems the 30% boost is a very conservative bet.

[Attached image: Arm TechDay slide showing A76 performance comparison]


2 hours ago, DocSwag said:

"laptop-class"

 

Meanwhile, the SD835 on windows 10 is at least 50% behind real "laptop class" CPUs, and with this being only 30-40% ish faster than the SD845's A75s which are 25% ish faster than the A73s in the SD835 I don't think this is "laptop class."

 

Cool but Apple is still way far ahead guys.

by your own numbers, 1.25*1.4 = 1.75 (the A76 cpu should be 75% faster than the 835) bringing it pretty close to that laptop class cpu.
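For reference, here is that compounding spelled out, including the lower end of the 30-40% range (rough vendor claims, not measured results):

#include <stdio.h>

int main(void) {
    double a73_to_a75      = 1.25;  /* ~25% (SD835-class A73 -> SD845-class A75) */
    double a75_to_a76_low  = 1.30;  /* Arm's conservative claim                  */
    double a75_to_a76_high = 1.40;

    printf("A73 -> A76: %.2fx to %.2fx\n",
           a73_to_a75 * a75_to_a76_low,    /* ~1.63x */
           a73_to_a75 * a75_to_a76_high);  /* 1.75x  */
    return 0;
}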


7 hours ago, DocSwag said:

Cool but Apple is still way far ahead guys

When you realize people don't know the difference between Qualcomm and arm


5 hours ago, bobhays said:

by your own numbers, 1.25*1.4 = 1.75 (the A76 cpu should be 75% faster than the 835) bringing it pretty close to that laptop class cpu.

 

8 hours ago, DocSwag said:

"laptop-class"

 

Meanwhile, the SD835 on windows 10 is at least 50% behind real "laptop class" CPUs, and with this being only 30-40% ish faster than the SD845's A75s which are 25% ish faster than the A73s in the SD835 I don't think this is "laptop class."

 

Cool but Apple is still way far ahead guys.

The gap is noticeably greater than that, with an Ivy Bridge i7 being 68% faster in Cinebench ST (Source).

8 hours ago, Coaxialgamer said:

No SMT on the A76 ? Surprising considering what this core is aimed at ...

Most ARM-based chip designs use 6-8 cores (mainly due to the "more cores are better" marketing appeal). Even if a laptop design only had 4 A55s and 4 A76s, that would already cover how far most tasks performed on a laptop can scale. Additionally, SMT only provides benefits in certain workloads and only slightly increases efficiency, since it uses more power to obtain that boost, so ARM would be wiser to leave the implementation as it is rather than exceed the TDP envelope of a phone and risk another SD810 scenario.


8 hours ago, DocSwag said:

"laptop-class"

 

Meanwhile, the SD835 on windows 10 is at least 50% behind real "laptop class" CPUs, and with this being only 30-40% ish faster than the SD845's A75s which are 25% ish faster than the A73s in the SD835 I don't think this is "laptop class."

 

Cool but Apple is still way far ahead guys.

You're comparing real performance to emulation performance. x86 code requires emulation to run on an ARM CPU, so Windows 10 on ARM requires emulation, and that impacts performance.


Good to see GPU improvements, particularly to efficiency. Qualcomm really stepped up their game this year, so they're gonna need that to compete with the Adrenos.


Looking forward to the benchmark scores of the upcoming A12 chip from Apple. 


Still waiting for actual ARM laptops to arrive..............

How many PCIe lanes do these "laptop" class ARM CPUs have?


1 hour ago, ScratchCat said:

 

Most ARM-based chip designs use 6-8 cores (mainly due to the "more cores are better" marketing appeal). Even if a laptop design only had 4 A55s and 4 A76s, that would already cover how far most tasks performed on a laptop can scale. Additionally, SMT only provides benefits in certain workloads and only slightly increases efficiency, since it uses more power to obtain that boost, so ARM would be wiser to leave the implementation as it is rather than exceed the TDP envelope of a phone and risk another SD810 scenario.

That's actually not what I meant. From what I've seen the A76 is a wide design: it can issue up to 8 instructions per clock. Granted, those slots aren't as versatile as Skylake's (some of them are limited to AGU), but that's still quite a few slots to fill. SMT increases the utilisation of the execution engine with minimal transistor overhead, which is why I'm surprised. It seems it would benefit from it.


11 hours ago, Coaxialgamer said:

No SMT on the A76 ? Surprising considering what this core is aimed at ...

Not really necessary just yet and would probably add complexity and use more die space.

You could fit 8 of these cores into a cluster and still use less die space than 4 CFL cores.

Intel also doesn't use SMT for their low power cores. They did briefly in their catch-up phase with Saltwell (I think), which was a single-core, two-thread design - at least that was the case for phones. I think it was partially a power/complexity versus performance issue.

It may come later but it would be a pretty big paradigm shift in core design. I think perhaps after this generation of chips or perhaps it's something the Sophia team is working on.

 

TL;DR: there must be a penalty for SMT since Intel abandoned it in their low power chips. 

3 hours ago, suicidalfranco said:

When you realize people don't know the difference between Qualcomm and arm

Well, since Qualcomm uses ARM designs there isn't any difference on the core design. You could argue the SoC is very different but the core itself isn't. And it would be a huge surprise if Qualcomm launched a new custom design. From what I've heard Qualcomm pretty much shut down the custom design unit (or running a skeleton crew) and moved the engineers elsewhere.


But what about Spectre mitigations?

I'm guessing that, since they're not advertising it, performance-hitting OS mitigations are still required.


38 minutes ago, Trixanity said:

Not really necessary just yet and would probably add complexity and use more die space.

You could fit 8 of these cores into a cluster and still use less die space than 4 CFL cores.

Intel also doesn't use SMT for their low power cores. They did briefly in their catch-up phase with Saltwell (I think), which was a single-core, two-thread design - at least that was the case for phones. I think it was partially a power/complexity versus performance issue.

It may come later but it would be a pretty big paradigm shift in core design. I think perhaps after this generation of chips or perhaps it's something the Sophia team is working on.

 

TL;DR: there must be a penalty for SMT since Intel abandoned it in their low power chips. 

Well, since Qualcomm uses ARM designs there isn't any difference on the core design. You could argue the SoC is very different but the core itself isn't. And it would be a huge surprise if Qualcomm launched a new custom design. From what I've heard Qualcomm pretty much shut down the custom design unit (or running a skeleton crew) and moved the engineers elsewhere.

Intel doesn't use SMT in Atom because it's a relatively narrow architecture; it can only issue a couple of instructions at a time.

This ARM design is 8-issue, meaning it's very wide, as wide as Coffee Lake in fact. Granted, there aren't as many ALUs, and some slots are limited to address generation, but it's still very wide even by today's standards (Zen is 6-issue, Haswell and newer are 8-issue, Apple's Monsoon is 6-issue). The hardware overhead is minimal too.

SMT could do wonders to extract more parallelism and fill those slots.

I'm not saying ARM's engineers were stupid, as they probably had a very good reason not to use it, but I'm simply surprised, as basically every high-performance (wide) architecture out there uses SMT to complement limited OoOE.
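As a toy illustration of the utilisation argument (the numbers here are made up purely for intuition and are not A76 measurements):

#include <stdio.h>

int main(void) {
    /* Hypothetical 8-wide core: a single thread rarely has 8 independent
     * instructions ready every cycle, so sustained IPC sits well below the
     * issue width.                                                          */
    double issue_width     = 8.0;
    double ipc_one_thread  = 3.0;  /* assumed sustained IPC for one thread   */
    double ipc_two_threads = 5.0;  /* assumed combined IPC with SMT, where a
                                      second thread fills otherwise idle slots */

    printf("1 thread : %.0f%% of issue slots used\n", 100.0 * ipc_one_thread  / issue_width);
    printf("2 threads: %.0f%% of issue slots used\n", 100.0 * ipc_two_threads / issue_width);
    return 0;
}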


45 minutes ago, Coaxialgamer said:

Intel doesn't use SMT in Atom because it's a relatively narrow architecture; it can only issue a couple of instructions at a time.

This ARM design is 8-issue, meaning it's very wide, as wide as Coffee Lake in fact. Granted, there aren't as many ALUs, and some slots are limited to address generation, but it's still very wide even by today's standards (Zen is 6-issue, Haswell and newer are 8-issue, Apple's Monsoon is 6-issue). The hardware overhead is minimal too.

SMT could do wonders to extract more parallelism and fill those slots.

I'm not saying ARM's engineers were stupid, as they probably had a very good reason not to use it, but I'm simply surprised, as basically every high-performance (wide) architecture out there uses SMT to complement limited OoOE.

Intel has used SMT in Atom and Atom is also OOOE. In fact it was when they transitioned to OOOE on Atom that they dropped SMT. Whatever that means.

There's probably more to the story on ARM though. Apple isn't using SMT either and you'd think they'd be a prime candidate for SMT.


10 hours ago, bobhays said:

by your own numbers, 1.25*1.4 = 1.75 (the A76 cpu should be 75% faster than the 835) bringing it pretty close to that laptop class cpu.

Correction: for Windows on ARM, modern Intel CPUs are more than twice as fast. When I said 50% slower I was being conservative.

https://www.techspot.com/review/1599-windows-on-arm-performance/page2.html

5 hours ago, suicidalfranco said:

When you realize people don't know the difference between Qualcomm and arm

I was using Qualcomm as a reference point especially seeing as they're the only one I can think of off the top of my head that are using A75s already.

5 hours ago, Master Disaster said:

You're comparing real performance to emulation performance. x86 code requires emulation to run on an ARM CPU, so Windows 10 on ARM requires emulation, and that impacts performance.

That's just the problem... if they're gonna call it "laptop class" but it ends up being way slower when you actually use it in laptops, it doesn't really mean anything.

 

"Laptop class" implies it's on the same performance level as modern x86 laptop CPUs, and that's great if it's sort of true when you compare across architectures. But if you actually put it in a laptop in the real world, it's much slower.



6 hours ago, Trixanity said:

Well, since Qualcomm uses ARM designs

And so does Apple. Which makes the comment above pointless


4 hours ago, Trixanity said:

Intel has used SMT in Atom and Atom is also OOOE. In fact it was when they transitioned to OOOE on Atom that they dropped SMT. Whatever that means.

There's probably more to the story on ARM though. Apple isn't using SMT either and you'd think they'd be a prime candidate for SMT.

Granted, Apple has a wide design that doesn't use SMT... The early Intel Atoms were relatively narrow (Bonnell was 2-issue) but they were in-order. OoOE was out of the question because of the power and transistor budgets. Intel was basically forced to use SMT to have any hope of filling the execution engine, and it shows, as SMT brought a nearly 2-fold increase in performance in optimized workloads. Newer designs being out-of-order, they had a reasonable hope of filling all those ports, and because Atom wasn't getting any wider, it worked. My guess is that they just saved the extra transistors. With the more recent Goldmont cores it seems they dabbled with it, as Knights Landing has 4-way SMT support, but these newer Atoms are actually wider so it makes a bit of sense. I don't quite know why they didn't enable it in consumer Goldmont...


20 hours ago, WMGroomAK said:

Arm has brought out the details on their upcoming Cortex A76, Mali G76 and Mali V76 processors

All at the same time of the announcement of Fallout 76?  Hmm, coincidental timing or PROOF OF A REAL VAULT 76 (V76) BEING BUILT?!

Yes, I'm kidding.........maybe.


2 minutes ago, Jito463 said:

All at the same time of the announcement of Fallout 76?  Hmm, coincidental timing or PROOF OF A REAL VAULT 76 (V76) BEING BUILT?!

Yes, I'm kidding.........maybe.

Fallout 76 mobile phone game confirmed

 


1 hour ago, suicidalfranco said:

And so does Apple. Which makes the comment above pointless

No, Apple does not use ARM core designs. Otherwise Qualcomm, Apple, Huawei etc would all have similar performance and power characteristics. They share an ISA but there are vast differences between each vendor in how they design their products. So I'm not sure whether you're being pedantic in some fashion that goes beneath my notice or genuinely believe that they're doing the same thing (despite the vastly different results).

 

His comment merely says Apple is still ahead in core design. If ARM is behind so is Qualcomm since they're using ARM designs. And this core allegedly slots in just around A10 but would need 3 GHz to do so which means it won't.

