
$14 billion of profit in a single quarter for Nvidia

Kimb

Summary

Nvidia has earned an amazing $14 billion of profit in a single quarter related to its sales of AI chips. Their 'gaming' revenue was only $2.6 billion during the same period.

 

Quotes

Quote

Nvidia just made $14 billion of profit in a single quarter thanks to AI chips. Sales jumped 262 percent in Q1 2025 to hit a record $26B in revenue, of which nearly three-quarters ($19.4B) was data center compute — especially its Hopper GPUs for training LLMs and generative AI apps, says Nvidia. Gaming only accounted for $2.6 billion revenue this quarter.

 

My thoughts

14 billion dollars of ACTUAL PROFIT is just an incredible number. 

 

Sources

https://www.theverge.com/2024/5/22/24162792/nvidia-just-made-14-billion-of-profit-in-a-single-quarter-thanks-to-ai-chips

https://nvidianews.nvidia.com/news/nvidia-announces-financial-results-for-first-quarter-fiscal-2025


It's impressive indeed, but really looks like a bubble to me, not sure it'll last long


Impressive but extremely bad imo. If they made that much, that means they'll either shift focus, leading to worse GPUs at bad prices, or raise prices so they can milk the golden goose they've found.


7 minutes ago, PDifolco said:

It's impressive indeed, but really looks like a bubble to me, not sure it'll last long

Nvidia has successfully ridden every tech bubble so far so I don't think they care if it is or not. They'll just seize the moment of the next thing.


Meanwhile AMD had 22.7B in revenue for 2023. How do you compete in terms of R&D with a company whose profit in a single quarter is close to your yearly revenue 😬


The increase shouldn't be a total surprise, as it has been ramping up over the last year. Roughly a year ago, net income was about $2B. The quarters since then came in at roughly $6B, $9B, $12B, and now this $14B.

 

If I'm not mistaken, Nvidia is only playing in one part of the AI market, that of higher-end systems. AMD, Intel and others wouldn't mind a slice of that, but it isn't the only play. Outside of dGPUs, Nvidia doesn't have anything consumer tier. AMD/Intel/Qualcomm SoCs between them could take up a lot of the low-power PC AI space.


AMD makes nowhere near the same profit as Nvidia, and given the range of products Nvidia makes and sells, it's rough for AMD not being able to make as much.

But here's the thing: AMD is still making a lot of money. If AMD wants to compete, it's the old adage that you have to spend money to make money, and when you're a business that literally makes things, this is even more the case.

AMD makes good products, but they just don't seem good enough to compete with Nvidia. Still, this is Nvidia's race to lose. I saw a video from Gamers Nexus talking about how many cards AMD and Nvidia release and how often. In it he commented that he doesn't think Nvidia will rest in the top spot the way Intel did when it was market leader, and it made me wonder if AMD has to some degree given up and just accepted it will be in second place.

Intel will take years to gain enough market share, IMO, to pass AMD in any meaningful way, and I don't think they are close to passing Nvidia any time soon, but it's on AMD not to stop development.

 

In fact this could be a really good time for AMD to kick itself into gear, because Nvidia is plowing head-on into the "AI" stuff and is so focused on that hardware. I don't think they will give up on consumer cards, but you have to wonder if they will push as hard in the consumer space as they will in the "AI" hardware space.

 

If AMD were to run full steam ahead into the consumer space and really try, they could take a huge swing; they may fail, but they could also succeed.

 

The only AMD card I have been interested in was the Radeon VII with HBM, but they dropped it so fast I got whiplash. Aside from the CPUs in consumer and server, I have no interest in buying an AMD GPU. I have told a friend of mine who invests in stocks to buy AMD because their CPUs are killing it, not because the GPUs are worth buying.

 

AMD needs to spend big to get back into the fight with Nvidia; just look at the investment Nvidia was making in ray tracing before they even made the first card with it.

 

Before you start saying I'm an Nvidia fanboy and AMD hater, I'm really not. I have a 970 and a 2080 Ti and will most likely be looking at a new Nvidia card some time soon, not because I like Nvidia but because I have never had an issue with either of my cards. When I see performance graphs showing AMD either just beating or trailing Nvidia cards, but then hear about the problems AMD cards are having, it gives me a big worry about buying AMD graphics. I will also say the melting connector on Nvidia cards super worries me.

 

 


On 5/23/2024 at 4:59 AM, leadeater said:

Nvidia has successfully ridden every tech bubble so far so I don't think they care if it is or not. They'll just seize the moment of the next thing.

You really think there's a market for a Jensen Huang branded leather jacket?? 🤔

I'll never understand fads.


On 5/23/2024 at 5:38 AM, Eigenvektor said:

Meanwhile AMD had 22.7B in revenue for 2023. How do you compete in terms of R&D with a company whose profit in a single quarter is close to your yearly revenue 😬

Fix your drivers maybe? 😛 


After gouging the gaming segment to get more capital for their AI market... they now report record profits. Who could've seen it coming.


1 hour ago, da na said:

Fix your drivers maybe? 😛 

Better drivers alone aren't going to compete with DLSS, better ray tracing hardware, ray reconstruction and whatever else Nvidia will come up with to keep their feature advantage.


Not surprising. Really should have bought into Nvidia; there's no real way they can lose, tbh.

 

On 5/23/2024 at 2:53 AM, Millios said:

Impressive but extremely bad imo. If they made that much, that means they'll either shift focus, leading to worse GPUs at bad prices, or raise prices so they can milk the golden goose they've found.

Gamers are very far from a golden goose. 

On 5/23/2024 at 3:38 AM, Eigenvektor said:

Meanwhile AMD had 22.7B in revenue for 2023. How do you compete in terms of R&D with a company whose profit in a single quarter is close to your yearly revenue 😬

Funny that someone beat me to it, but I was also going to say better drivers. You don't grab a huge chunk of market share all at once; you chip away at it. Nail your driver packages, then start adding features, and after a while you'll start to build momentum. Realistically if AI and compute keep Nvidia happy they'd likely just abandon the consumer GPU market.


2 hours ago, dizmo said:

Gamers are very far from a golden goose. 

Whether we like it or not, the gaming GPU market is by sheer percentage dominated by Nvidia, with most people rocking a team green GPU for one reason or another. Can AMD remedy this if it keeps fixing its drivers and focuses on more budget GPUs instead of high-end ones like the 7900 XTX? Yes, but right now Nvidia is #1, and AMD and Intel will be scrapping for #2 when it comes to gaming.


21 hours ago, dizmo said:

Realistically if AI and compute keep Nvidia happy they'd likely just abandon the consumer GPU market. 

That would be something: iGPUs displacing dGPUs not because of technical superiority, but simply because there isn't enough profit in dGPUs, even with demand.


22 hours ago, dizmo said:

Realistically if AI and compute keep Nvidia happy they'd likely just abandon the consumer GPU market. 

AMD's latest whole-company quarterly result was $5.5B in revenue. Nvidia's gaming revenue alone was $2.6B (note the quarters may not be aligned, so this could be affected by seasonal variations). That is no small change. It still provides a safety net as AI matures and growth in that area flattens or even declines. Even if Nvidia decided to drop gaming, their IP would still be highly valuable. I'm not sure there is anyone who both would want to buy it and could afford it, which makes this feel unlikely.

 

1 hour ago, Zodiark1593 said:

That would be something, if iGPUs displace dGPUs, not because of technical superiority, but simply due to not being enough profit in dGPUs, even with demand. 

iGPUs as we currently know them will always be limited in performance as they're constrained by memory bandwidth. We realistically need to move beyond DDR/LPDDR to GDDR or HBM, with the implication that non-user-replaceable RAM will be a requirement. Alternatively, a big cache might be a way around it, but that still adds significant cost and has its limits.
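To put rough numbers on that bandwidth gap (approximate figures; comparing dual-channel DDR5-6400 against a 192-bit GDDR6X card at 21 Gbps is my own illustrative pairing):

\[
\text{DDR5-6400, dual channel: } \frac{2 \times 64\,\text{bit} \times 6400\,\text{MT/s}}{8} \approx 102\ \text{GB/s}
\]
\[
\text{GDDR6X, 192-bit at 21 Gbps: } \frac{192\,\text{bit} \times 21\,\text{Gbps}}{8} \approx 504\ \text{GB/s}
\]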

 

Even if we say the above happens and low-to-mid-range dGPUs become unviable, implying high-end dGPUs might become niche and even more expensive, Nvidia could transition their gaming focus to Arm-based SoCs for Windows. There are already pointers to them looking to enter that market. Early movers are focusing on low power, but if it gains traction, higher-performance options would be welcome.


On 5/23/2024 at 5:51 AM, PDifolco said:

It's impressive indeed, but really looks like a bubble to me, not sure it'll last long

It's a bubble the same way the internet was a bubble. This isn't crypto.


18 hours ago, starsmine said:

It's a bubble the same way the internet was a bubble. This isn't crypto.

Nvidia's part is a bubble. But Jensen always knows that. It's why he's always moving on to the next thing. Everyone will make custom hardware, but that takes a few years.

 

Still, it's wild to see Nvidia land on this after the ARM debacle and the previous Server GPU stock bubble.


9 minutes ago, Taf the Ghost said:

Nvidia's part is a bubble. But Jensen always knows that. It's why he's always moving on to the next thing. Everyone will make custom hardware, but that takes a few years.

Yes, because we've literally seen this happen TWICE, once with Bitcoin and once with Ethereum.

 

Likewise Google building their own transcoder chips for YouTube.

 

Once every "AI" thing reaches a mature state, they'll build ASICs to squeeze the last bit of performance out. Right now, GPUs are best at training, and CPUs can do inference as long as it doesn't need to be done in real time. GPU inference is "expensive" in energy, but only certain things really peak the performance. For example, ASR (Whisper) and TTS (various bleeding-edge systems) all tend to max out on 16GB cards at present, and the difference between, say, a 1050 and a 3090 is measured in seconds, versus minutes on a CPU.
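For a rough sense of that GPU-versus-CPU gap, here's a minimal timing sketch using the open-source openai-whisper package (the model size and the audio file name are assumptions for illustration, and actual times depend heavily on the hardware and clip length):

# Rough GPU-vs-CPU Whisper inference timing sketch.
# pip install openai-whisper  (ffmpeg must also be on the PATH)
import time

import torch
import whisper

def transcribe_timed(device: str, audio_path: str = "sample.mp3") -> float:
    """Load the 'small' model on the given device and time one transcription."""
    model = whisper.load_model("small", device=device)
    start = time.perf_counter()
    model.transcribe(audio_path)
    return time.perf_counter() - start

print(f"CPU: {transcribe_timed('cpu'):.1f}s")
if torch.cuda.is_available():
    print(f"GPU: {transcribe_timed('cuda'):.1f}s")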

 

9 minutes ago, Taf the Ghost said:

Still, it's wild to see Nvidia land on this after the ARM debacle and the previous Server GPU stock bubble.

I think it'll be short-lived. We've seen Nvidia chase short-term profits before. We're also approaching die-shrink limits, short of discovering some exotic material to make chips from. Everyone has to move to "chiplet"-type tech just to keep a reason to buy new hardware, because shrinking the die itself is eventually going to be too expensive to do.

 

 


20 hours ago, porina said:

AMD's latest whole-company quarterly result was $5.5B in revenue. Nvidia's gaming revenue alone was $2.6B (note the quarters may not be aligned, so this could be affected by seasonal variations). That is no small change. It still provides a safety net as AI matures and growth in that area flattens or even declines. Even if Nvidia decided to drop gaming, their IP would still be highly valuable. I'm not sure there is anyone who both would want to buy it and could afford it, which makes this feel unlikely.

 

iGPUs as we currently know them will always be limited in performance as they're constrained by memory bandwidth. We realistically need to move beyond DDR/LPDDR to GDDR or HBM, with the implication that non-user-replaceable RAM will be a requirement. Alternatively, a big cache might be a way around it, but that still adds significant cost and has its limits.

 

Even if we say the above happens and low-to-mid-range dGPUs become unviable, implying high-end dGPUs might become niche and even more expensive, Nvidia could transition their gaming focus to Arm-based SoCs for Windows. There are already pointers to them looking to enter that market. Early movers are focusing on low power, but if it gains traction, higher-performance options would be welcome.

Nvidia has wanted to get out of the consumer-facing GPU market since the mid-2000s. Jensen might have a shot at doing it now, but, hilariously, his paranoia about where the next market is should probably prevent him from doing that. Coming full circle, using the gaming revenue to try to find the next market is exactly what will keep them in gaming, because that's the space that drives the tech fast enough, with assured profits, to justify the R&D risks. My theory that the 2030 dGPU release will be their last major one is still clearly on the table, but I love that the full-circle paranoia keeps them in the space.

 

One thing to understand about Nvidia is that Jensen has wanted to become IBM for decades. Consumer-facing companies without structural markets are always in a rough business space; see the car industry if you want to understand the issue. Jensen has been trying to escape the consumer dGPU market for decades because of that. Mind you, it's going to make him something like 20 billion USD while trying to run away, but it's still funny to watch.


More proof that Nvidia has been price gouging hard over the last 4 years. They can probably easily afford to sell 4090s for <$1000 and still make a decent amount of money. On the one hand, consumers have shown that they're willing to pay >$1000 for high-end GPUs, and on the other hand, their other markets allow them to miss a gaming GPU sale every now and then. Everything is in place for them to raise prices even more.


7 minutes ago, Kisai said:

Yes, because we've literally seen this happen TWICE, once with Bitcoin and once with Ethereum.

 

Likewise Google building their own transcoder chips for YouTube.

 

Once every "AI" thing reaches a mature state, they'll build ASICs to squeeze the last bit of performance out. Right now, GPUs are best at training, and CPUs can do inference as long as it doesn't need to be done in real time. GPU inference is "expensive" in energy, but only certain things really peak the performance. For example, ASR (Whisper) and TTS (various bleeding-edge systems) all tend to max out on 16GB cards at present, and the difference between, say, a 1050 and a 3090 is measured in seconds, versus minutes on a CPU.

 

I think it'll be short-lived. We've seen Nvidia chase short-term profits before. We're also approaching die-shrink limits, short of discovering some exotic material to make chips from. Everyone has to move to "chiplet"-type tech just to keep a reason to buy new hardware, because shrinking the die itself is eventually going to be too expensive to do.

 

 

It was probably about a decade ago that I had a long chat with a friend (deep in the tech space) about the need for 128-bit memory addressing coming up in the future. He thought it would never be needed and, while I'm not completely convinced it will be, everything deep-learning researchers had been proposing for a while was going to need incredible amounts of quickly accessible data to process in any reasonable time frame. I wish I could remember what random paper spawned the conversation, but it was likely during the Big Data buzzword period. But the need for "near space" data calls has driven basically the entire industry since the Internet came about, and it keeps progressing even further. Every iteration of these models will require more and more active memory, because that's simply the nature of what the models do. It's not as if governments keep spending billions on supercomputer clusters just for fun.

 

Though I also do find it hilarious how deep into the Garbage In, Garbage Out era we now are. The adaptive-programs era is going to be funny because, while it's going to blow up quite a few industries, the all-consuming "everything will be AI" trend is going to explode so completely in a couple of years that it's going to be a grandly remembered tech event. The reasons for that are quite long, but I love the fact we get a real-time example of Cantor's diagonal.


27 minutes ago, Taf the Ghost said:

It was probably about a decade ago that I had a long chat with a friend (deep in the tech space) about the need for 128-bit memory addressing coming up in the future. He thought it would never be needed and, while I'm not completely convinced it will be, everything deep-learning researchers had been proposing for a while was going to need incredible amounts of quickly accessible data to process in any reasonable time frame.

That's actually more a side effect of the execution width in CPUs and being able to combine execution resources to do larger data operations. We don't actually need to increase to, say, 128-bit; we can just as viably (and this is already being done) allow for narrower execution at an increased number of operations/instructions per cycle.

 

This is where VNNI (AVX-VNNI-INT16 etc.) comes in. However, these instructions are executed on AVX-512 execution engines and use instruction packing to run multiple operations per cycle, which doesn't lead to higher performance if you aren't fully packing, i.e. achieving full occupancy of that execution resource. What may happen, if these usages actually stick around long term and become more critical to computing platforms, is that either dedicated low-precision execution engines will be created and put into CPUs/SoCs, or all execution resources will be made narrower, with combining used for traditional 32/64-bit. One problem is that this is likely not very efficient in terms of transistor count and density, so I wouldn't expect to see anything smaller than 16-bit.

 

AMD has already been combining 128-bit execution engines to do 256-bit AVX2.
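As a conceptual illustration of that packing idea (this is not real VNNI, which is an x86 instruction executed in hardware; it's just a NumPy sketch of how a VPDPBUSD-style step folds groups of four 8-bit products into one 32-bit accumulator lane, so the throughput only materialises when every lane is actually filled):

import numpy as np

# Sketch of a VPDPBUSD-style dot-product step: groups of four u8*s8
# products are summed into a single int32 accumulator lane.
def dpbusd_like(acc, a_u8, b_s8):
    prod = a_u8.astype(np.int32) * b_s8.astype(np.int32)  # widen, then multiply
    return acc + prod.reshape(-1, 4).sum(axis=1)           # 4 products -> 1 lane

lanes = 8  # a 256-bit register holds 8 int32 lanes
a = np.random.randint(0, 256, size=4 * lanes, dtype=np.uint8)
b = np.random.randint(-128, 128, size=4 * lanes, dtype=np.int8)
acc = np.zeros(lanes, dtype=np.int32)

print(dpbusd_like(acc, a, b))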


1 hour ago, Taf the Ghost said:

Nvidia has wanted to get out of the consumer-facing GPU market since the mid-2000s.

Can you expand on that? Just looking at financials, DC revenue first overtook gaming in summer 2020, and the AI explosion in revenue only started a few quarters ago.

 

23 minutes ago, leadeater said:

AMD has already been combining 128-bit execution engines to do 256-bit AVX2.

Note I read the post you're replying to as address size, not data size, which can be separated from each other. For consumer systems 64-bit seems sufficient for quite a while yet with an address range of 16 EB (16,000,000 TB).
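As a quick sanity check on that figure, the full 64-bit address space works out to

\[
2^{64}\,\text{B} = 16\,\text{EiB} \approx 1.8 \times 10^{19}\,\text{B}
\]

which is indeed on the order of 16 million TB.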

 

I'm going to guess data type sizes won't grow, simply because there isn't any major driver that makes it worthwhile. Use cases that exceed current sizes already have well-developed ways of breaking the work down, at lower efficiency/performance. SIMD-type use cases could scale as much as you like if you can feed them, which seems to be the bigger problem in a lot of cases. BTW on FPUs, Zen and Zen+ have two 128-bit FMA units, and only caught up with Intel with Zen 2 and newer, which have two 256-bit FMA units. Intel i-series consumer CPUs since Haswell have two 256-bit FMAs.
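For context on what those widths mean in raw throughput, a rough per-core calculation (FP32, counting the multiply and add of an FMA as two operations):

\[
\underbrace{2}_{\text{FMA units}} \times \underbrace{\tfrac{256}{32}}_{\text{FP32 lanes}} \times \underbrace{2}_{\text{ops per FMA}} = 32\ \text{FP32 FLOPs/cycle}
\]

versus 16 FP32 FLOPs/cycle for the 128-bit units in Zen/Zen+.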


2 hours ago, Kisai said:

Yes, because we've literally seen this happen TWICE, once with Bitcoin and once with Ethereum.

 

Likewise Google building their own transcoder chips for YouTube.

 

Once every "AI" thing reaches a mature state, they'll build ASICs to squeeze the last bit of performance out. Right now, GPUs are best at training, and CPUs can do inference as long as it doesn't need to be done in real time. GPU inference is "expensive" in energy, but only certain things really peak the performance. For example, ASR (Whisper) and TTS (various bleeding-edge systems) all tend to max out on 16GB cards at present, and the difference between, say, a 1050 and a 3090 is measured in seconds, versus minutes on a CPU.


Tensor cores (mixed-precision matrix multipliers) are ASICs. The biggest issue isn't optimization of the matrix multipliers so much as the von Neumann bottleneck; hence the need for HBM modules for training. For Nvidia, that's where the real money is. It's their money printing machine.
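As a minimal sketch of the kind of mixed-precision matmul those tensor cores accelerate (matrix sizes here are arbitrary, and you need a CUDA GPU with tensor cores to actually hit that hardware path; on CPU this just falls back to bfloat16 autocast):

import torch

# Mixed-precision matmul under autocast; PyTorch dispatches this to tensor
# cores on GPUs that have them. The matrix sizes are arbitrary.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

with torch.autocast(device_type=device, dtype=dtype):
    c = a @ b  # inputs are cast down, accumulation stays in FP32

print(c.dtype, c.shape)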

 

The "bubble" (or race to the bottom of commodity pricing) however will be in the inferencing chips to process edge-AI LLMs. Basically any chip with an NPU, be it a CPU, SoC, PCIe or M.2 card. NPUs eventually will be used in everything from microwave ovens, cleaning bots, to video surveillance. You can be sure that the automotive industry will use them for real-time analytics and diagnostics. If your future car is leased, I would expect the dealership would reach out to the lease holder long before the driver was aware of any potential problem. AI will focus more proactive maintenance than reactive responses (though it could triage that too). Medical field, ditto.


2 hours ago, porina said:

Note I read the post you're replying to as address size, not data size, which can be separated from each other. For consumer systems 64-bit seems sufficient for quite a while yet with an address range of 16 EB (16,000,000 TB).

Err ahh yea... good point. Ooops 🙃

 

2 hours ago, porina said:

BTW on FPUs, Zen and Zen+ have two 128-bit FMA units, and only caught up with Intel with Zen 2 and newer, which have two 256-bit FMA units. Intel i-series consumer CPUs since Haswell have two 256-bit FMAs.

Yep, that was what I was referring to. I don't think Intel combined execution units for a single operation until what, Skylake-X/SP? I forget, but I'm pretty sure they did that for one of the AVX-512 capabilities.

 

AMD was just the easier and clearer example.

