
Intel Arc A730M 12GB 3DMark TimeSpy Score Leaked, in the league of RTX 3060/3070 Laptop GPU (Update #2 ~ More Gaming Benchmarks Added)

4 hours ago, starsmine said:

rdna 1 -> rdna 2

Sure, that makes as much sense as comparing the RTX 2070 to the RTX 3080 or higher and then declaring a gen-on-gen increase of X amount; that's just bad evaluation lol.

 

3 hours ago, starsmine said:

The question was gen on gen; to not compare flagship to flagship is a choice when comparing gen on gen.

Unless you bother to have some kind of standard for the comparison, there is zero point in doing it. A good, reliable one is comparing either cost-equivalent products (allowing for inflation) or direct product-to-product replacements.

 

The one and only time in recent history when such high gains were achieved was Pascal, and even that wasn't 2x. Anyone going in expecting 2x-3x, which implies greater than 2x, is throwing out any kind of reasonable evaluation and expectation in favor of what is simply rumor hype.

 

1 hour ago, starsmine said:

You mean when Kepler came out?

Because that's what we did when the 680 came out, using the GK104 rather than the GK100.
Did it again when Maxwell came out, with the 980 using the GM204 rather than the GM200.

That's not how that works. If you compared the 680 vs the 780, then you compared the 680 vs the 780, or the 780 vs the 980. What Nvidia actually chose, silicon-wise, to deliver that product doesn't mean you should suddenly uproot logic and start comparing vastly different products within a product family, or vastly different products altogether. At that point why not compare the Titan Z to the 680...

 

Also, I would similarly advise against looking at the rumored TDP increases as a meaningful way to tell whether there is going to be a large performance uplift. Almost always these come from pushing the silicon further into the zone of exponentially decreasing efficiency via boost clocks, rather than from massively scaling out die size and transistor count, which would be the more natural source of a power increase.

 

4 hours ago, starsmine said:

The big reason we didn't get big gains on Turing and Ampere for rasterization is that all the die space got taken up by RT and Tensor cores rather than focusing on CUDA.

In Turing, the combined area used by the RT and Tensor cores represents ~10% of the die area, so it's not a big reason at all. Nvidia even reduced the amount of area taken up by these in Ampere. Architecture block diagrams are not a physical representation of the die or the die area used, and they are definitely not to scale.


5 hours ago, Brooksie359 said:

Imo high-end GPUs in laptops are just stupid. They just don't make sense for the form factor. If you compare a mid-range GPU laptop with a high-end one, the size difference is so big that tbh it's not worth it imo. Also, most of the time you would have to use an external monitor to actually make good use of the high-end GPU.

 

I think my main gripe with high-end laptop GPUs is that the price of the laptop that has them increases exponentially as you go up in GPU tier. Also, the laptops that have them (the higher-end GPUs) usually have beefed-up other specs, like more storage, a better CPU, more memory, etc., so you're looking at $2000 before you know it, when all you wanted was to go up slightly in GPU tier and not in the other specs.

 

But laptops have made great strides with displays, so often you can find a good pairing of the GPU with the screen offering, which makes an external monitor unnecessary in many instances. I'm really enjoying this 17-inch 1080p 144Hz IPS on my laptop, and it is doing quite well with the games I'm playing.


33 minutes ago, BiG StroOnZ said:

I think my main gripe with high-end laptop GPUs is that the price of the laptop that has them increases exponentially as you go up in GPU tier. Also, the laptops that have them (the higher-end GPUs) usually have beefed-up other specs, like more storage, a better CPU, more memory, etc., so you're looking at $2000 before you know it, when all you wanted was to go up slightly in GPU tier and not in the other specs.

Yea, you can't go with the equivalent of a desktop-style 12400F and RTX 3070 Ti with laptops. Want a high-end GPU but legitimately don't need a high-end i7/i9? Well then, that's just too bad.


3 minutes ago, leadeater said:

Yea, you can't go with the equivalent of a desktop-style 12400F and RTX 3070 Ti with laptops. Want a high-end GPU but legitimately don't need a high-end i7/i9? Well then, that's just too bad.

 

I know, and it sucks too, because usually you want to over-spec the GPU since it's not something you can exactly upgrade later in a laptop, while the CPU at least has more longevity and the other stuff (memory, storage) can often be upgraded easily.


Alright boys (and girls), looks like we got some gaming benchmarks of the Intel Arc A730M leaked from the same source (Weibo user “Golden Pig Upgrade” [translated]). It seems the performance is around an RTX 3060M, or faster than a desktop 3050 but slower than a desktop 3060. I will add this info to the OP and update the thread ~

 

[Attached screenshots: leaked Arc A730M gaming benchmark results]

 

Quote

The set of games tested is rather small—F1 2020, Metro Exodus, and Assassin's Creed Odyssey, but the three are fairly "mature" games (have been around for a while). The A730M is able to score 70 FPS at 1080p, and 55 FPS at 1440p in Metro Exodus. With F1 2020, we're shown 123 FPS (average) at 1080p, and 95 FPS avg at 1440p. In Assassin's Creed Odyssey, the A730M yields 38 FPS at 1080p, and 32 FPS at 1440p.

 

  • Based on these results, the GPU can run Metro Exodus at 70 FPS in 1080p and 55 FPS in 1440p when using the High quality settings. Based on tests from Notebookcheck, this is better than the RTX 2070 (66 FPS on average at 1080p) but not as good as the RTX 3060 (80 FPS).
  • Arc A730M gets an average of 123 FPS at 1080p and 95 FPS at 1440p in F1 2020. In comparison, the RTX 3050's 1080p High preset offers an average of 120 FPS.
  • The A730M achieves 38 FPS at 1080p and 32 FPS at 1440p in Assassin's Creed Odyssey. These figures suggest that the A730M is marginally faster than the desktop GeForce RTX 3050.

The data shown below in the Intel Arc Control panel software indicates that the GPU has been running at a 2050 MHz boost and 92W. Do note that this is a FurMark test, so these metrics almost certainly do not correspond to real-world use.

 

[Attached screenshot: Intel Arc Control panel readings during the FurMark run]

 

https://www.guru3d.com/news-story/intel-arc-a730m-game-tests-gaming-performance-differs-from-synthetic-performance.html

https://videocardz.com/newz/intel-arc-a730m-has-been-tested-in-games-with-performance-between-rtx-3050-and-rtx-3060

https://www.tomshardware.com/news/intel-arc-a730m-gaming-benchmarks-show-rtx-3050-mobile-level-performance

https://www.techpowerup.com/295624/intel-arc-a730m-tested-in-games-gaming-performance-differs-from-synthetic

https://wccftech.com/intel-high-end-arc-a730m-gpu-is-barely-faster-than-an-nvidia-rtx-3050-in-gaming/

https://www.pcgamer.com/first-intel-arc-alchemist-benchmarks-are-a-bit-of-a-mixed-bag/

https://hothardware.com/news/intel-arc-a730m-benchmarks-mobile-geforce-rtx-gpus

 

I think there is definitely some driver maturation to be done of course, but the performance is not too shabby. I know people were expecting more because of the synthetics, but I think this is still a great start to be honest. I'm not as gloom-and-doom as some of these news outlets are (not all of them are) because I think this is still early. I also think it's unfair to judge the GPU based solely on 3 games. Between now and when the GPUs are widely available they will definitely have some time to do some serious work.


46 minutes ago, BiG StroOnZ said:

I think there is definitely some driver maturation to be done of course, but the performance is not too shabby. I know people were expecting more because of the synthetics, but I think this is still a great start to be honest. I'm not as gloom-and-doom as some of these news outlets are (not all of them are) because I think this is still early. I also think it's unfair to judge the GPU based solely on 3 games. Between now and when the GPUs are widely available they will definitely have some time to do some serious work.

Still missing the most important information: what's the hash rate? lol


6 hours ago, BiG StroOnZ said:

 

I think my main gripe with high-end laptop GPUs is that the price of the laptop that has them increases exponentially as you go up in GPU tier. Also, the laptops that have them (the higher-end GPUs) usually have beefed-up other specs, like more storage, a better CPU, more memory, etc., so you're looking at $2000 before you know it, when all you wanted was to go up slightly in GPU tier and not in the other specs.

 

But laptops have made great strides with displays, so often you can find a good pairing of the GPU with the screen offering, which makes an external monitor unnecessary in many instances. I'm really enjoying this 17-inch 1080p 144Hz IPS on my laptop, and it is doing quite well with the games I'm playing.

Oh, that is a huge factor as well. I can find decent midrange laptops with a 3060 for like 1000 bucks, while if I want a 3070 or higher the price jump is at least 500 bucks. Also, sure, high-refresh-rate laptops are out there, but honestly if you have a 3080 laptop GPU you can easily push better monitors than the laptop screen.


On 6/7/2022 at 12:44 PM, Arika S said:

I would say 80% of people that bought an xx80 or xx90 series card don't actually need all that power anyway; they are just chasing the higher numbers, be it FPS or resolution, or both, and maybe even the clout of having an xx80 or xx90 card.

But according to them they need 280fps instead of 240fps at 1080p because they can totally feel the difference.



Looks like there are some more gaming (and workstation) benchmarks available today, and it seems the Arc A730M loses to an RTX 3060M in almost all instances (except Metro Exodus and Elden Ring). The performance is all over the place to be honest. There's a brief video from "Golden Pig Upgrade" comparing the Arc A730M and GeForce RTX 3060 Laptop, and also a full review from IT-Home. The results from IT-Home are based on an unofficial driver (30.0.101.1726), so the results may vary. But both reviews appear to show similar synthetic performance, so the numbers should be accurate. I'll update the OP and add this info to it.

 

[Attached charts: Arc A730M vs RTX 3060 Laptop results in Total War, Boundary, Civilization VI, Gears of War 5, Hitman 2, and Metro Exodus, plus overall comparison summaries]

 

Quote

Some Arc A730M results are not explained (settings and resolution), which may lead to wrong conclusions; however, just for the sake of a basic understanding, these charts should provide all the necessary data.

 

With the exception of Metro Exodus Enhanced Edition, the Arc A730M is not as fast as the RTX 3060. NVIDIA’s GPU outperforms the Intel GPU in almost every test. But worse performance is not the only problem; Intel clearly needs to work on the drivers, as reviewers report that some games do not even start or output errors. Intel's mobile Arc series seems to do just fine in 3DMark, though.

 

The GeForce RTX 3060 Mobile pulled a resounding victory over the Arc A730M. If we calculate the geometric mean for the average framerates, the GeForce RTX 3060 Mobile finished with a score of 109.58, while the Arc A730M put up 67.63. The GeForce RTX 3060 Mobile was up to 62% faster than the Arc A730M. The performance delta looks accurate, considering that the Arc A730M performed like a GeForce RTX 3050 Mobile in previous gaming benchmarks.

 

Of the eight titles, the Arc A730M only managed to score a victory over the GeForce RTX 3060 Mobile in Metro Exodus Enhanced Edition, where Alchemist delivered up to 6.59% higher average framerates than its Ampere rival. Some of the most significant performance margins in NVIDIA's favor were in Boundary and Counter-Strike: Global Offensive. However, the GeForce RTX 3060 Mobile machine was utilizing Nvidia DLSS in the former, giving it an unfair advantage.

 

The reviewer also provided some workstation GPU results in the shape of the popular SPECviewperf 2020 benchmark. The scenario didn't change, and the GeForce RTX 3060 Mobile continued to dominate the Arc A730M with performance deltas spanning from 20% up to a whopping 515%.

 

https://videocardz.com/newz/first-review-of-intel-alchemist-acm-g10-gpu-is-out-arc-a730m-is-outperformed-by-rtx-3060m-in-gaming

https://wccftech.com/intel-arc-a730m-high-end-mobility-gpu-slower-than-rtx-3060m-despite-latest-drivers/

https://www.tomshardware.com/news/geforce-rtx-3060-mobile-kicks-intel-arc-a730m-around

https://www.bilibili.com/video/BV1US4y1i7ne

https://www.ithome.com/0/623/070.htm

 

I'm guessing this might be why the lineup has been limited to the Chinese market first before a global release. It might simply be that the software is not even close to ready. I know things were looking promising with the synthetics a few days ago, and even yesterday the first gaming benchmarks weren't looking too bad. But this is quite a different scenario altogether, as even the synthetics here don't have anything beneficial going on for the A730M. This is supposed to be a relatively high-end GPU in the end and it's not really competitive with NVIDIA's mainstream offering. If this is the final performance to be expected when the product launches worldwide, the only saving grace is pricing, as the performance in this showing is pretty poor. Obviously it would be best to wait until Arc lands in the hands of respected reviewers, instead of guesstimating performance from these early reviews. Nevertheless, it's still quite disappointing.
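For anyone curious how that "62% faster" figure in the quote above falls out of the two geometric means, here's a minimal Python sketch. The per-game framerates in it are made-up placeholders purely to show the mechanics; only the two reported geomeans (109.58 vs 67.63) come from the review.

```python
from math import prod

def geomean(values):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(values) ** (1 / len(values))

# Placeholder per-game average FPS lists, only to show the mechanics
# (these are NOT the review's numbers).
rtx_3060m_fps = [140, 95, 120, 88, 110]
arc_a730m_fps = [85, 60, 75, 55, 70]
print(round(geomean(rtx_3060m_fps), 2), round(geomean(arc_a730m_fps), 2))

# Using the overall geomeans actually reported in the review:
rtx_score, arc_score = 109.58, 67.63
print(f"RTX 3060 Mobile is {rtx_score / arc_score - 1:.0%} faster")  # ~62%
```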


Also, with how Intel Arc had some errors with their translation layer (don't remember what it was for, hopefully not the graphics APIs), what happens if they try to use DLSS or some other solution, if Nvidia even lets DLSS work on it? 😛 Although there wouldn't be any point to it compared to using XeSS.

 

Hope they can fix some of the performance issues pointed out above, if some titles are doing 4-5x worse compared to the ones it can more or less keep up with.

 

card do be looking a bit nice tho, if one can get that model.

Also, I didn't know this (or maybe it was said before):

"The Intel Arc AI based highlight reel that requires no developer support"

https://youtu.be/9J2N-EbfOuc?t=627


On 6/8/2022 at 8:17 AM, leadeater said:

Still missing the most important information: what's the hash rate? lol

At least 2



I'm curious what the prices will be when they launch the desktop GPUs in the summer.


10 hours ago, BiG StroOnZ said:

This is supposed to be a relatively high-end GPU in the end and it's not really competitive with NVIDIA's mainstream offering.

How did you figure that out? The A770M is the highest-level part revealed, and this is the 2nd highest. The public info for the A730M relative to the A770M is that it has 75% of the cores, 75% of the memory width, and 2/3 the clock. That's quite a step down. It is bigger than the gap between the 3070M and 3060M, but not as big as the drop to the 3050M. While we lack detail on absolutes, it could be said that the models in the Arc range may not directly equate to models in Nvidia's range.

 

I'm not awake enough yet to go through this new set of claims, but based on the previous ones I did have a suspicion that Arc may do relatively well on titles using more modern features than older basic games.



On 6/7/2022 at 10:45 PM, Brooksie359 said:

Oh, that is a huge factor as well. I can find decent midrange laptops with a 3060 for like 1000 bucks, while if I want a 3070 or higher the price jump is at least 500 bucks. Also, sure, high-refresh-rate laptops are out there, but honestly if you have a 3080 laptop GPU you can easily push better monitors than the laptop screen.

 

Exactly, the price jump is huge, and that's what actually led me to pick out my current laptop.

 

I see your point that you can better utilize the GPU with a superior monitor compared to the laptop screen, but some of the laptop screens paired with the 3080 or 3080 Ti's are 4K 120Hz or 1440p 240Hz. Of course you're still limited to the 17.3-inch form factor usually, but not bad by any means. At least you have exceptional pixel density to look forward to in those configurations.

 

12 hours ago, porina said:

How did you figure that out? The A770M is the highest-level part revealed, and this is the 2nd highest. The public info for the A730M relative to the A770M is that it has 75% of the cores, 75% of the memory width, and 2/3 the clock. That's quite a step down. It is bigger than the gap between the 3070M and 3060M, but not as big as the drop to the 3050M. While we lack detail on absolutes, it could be said that the models in the Arc range may not directly equate to models in Nvidia's range.

 

I'm not awake enough yet to go through this new set of claims, but based on the previous ones I did have a suspicion that Arc may do relatively well on titles using more modern features than older basic games.

 

Well, I figured that since this is the second-highest part in their lineup, it would be considered relatively high-end when looking at Intel's own graphs:

 

[Attached slide: Intel's Arc A-series mobile lineup]

 

There are three parts below it and only one above it. The A730M features 24 Xe-Cores out of the 32 available, whereas the A770M has all 32 Xe-Cores. That's only a 33.33% increase in core count (3072 shaders vs 4096 shaders). The Arc 7 A730M and A770M are the two highest-end options, which is why I consider this a rather high-end part. While you're correct that it's "quite a step down", it's not when compared to NVIDIA's lineup (it's actually the same). The 3060 Mobile has 3840 CUDA cores and the 3070 Mobile has 5120 CUDA cores, also a 33.33% increase in shaders. So in that regard, I would say you could claim that they equate to NVIDIA's range, using your examples of the 3060 and 3070. You are correct that the gap/drop from the 3050 to the 3060 or 3070 is much greater, but not from the 3060 to the 3070.
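Just to lay out the arithmetic behind those percentages, here's a quick worked check in Python (the shader counts are the ones cited above, nothing new added):

```python
# Shader counts as cited above: 24 vs 32 Xe-Cores at 128 shaders each,
# and the CUDA core counts for the 3060/3070 Mobile.
a730m, a770m = 3072, 4096
rtx3060m, rtx3070m = 3840, 5120

def step_up(lower, higher):
    """Percentage increase going from the lower part to the higher one."""
    return (higher / lower - 1) * 100

print(f"A730M -> A770M:  +{step_up(a730m, a770m):.2f}%")        # +33.33%
print(f"3060M -> 3070M:  +{step_up(rtx3060m, rtx3070m):.2f}%")  # +33.33%
print(f"A730M as a share of the A770M: {a730m / a770m:.0%}")    # 75%
```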

 

Well, just to give you the TL;DR version of the new set of claims: it only beats the 3060 in two titles, Metro Exodus and Elden Ring, and loses handily in every other test besides 3DMark. Here's one of the summaries:

 

[Attached chart: A730M vs RTX 3060 Laptop benchmark summary]

 

Some of the numbers from other games showed it doing well (the first set of game benchmarks that were leaked), but most of the newer benches/tests show it not doing so well. Whether, as you suspect, it will "do relatively well on titles using more modern features than older basic games" remains to be seen.


On 6/6/2022 at 6:23 PM, leadeater said:

Yea no, that isn't happening.

3x is out there, but the math 100% supports 2x, with the only wildcard being the final clock speeds. At least for the 3090 vs 4090.



6 minutes ago, ewitte said:

3x is out there, but the math 100% supports 2x, with the only wildcard being the final clock speeds. At least for the 3090 vs 4090.

 

Yeah, that's what the most recent rumors have it at:

Quote

In a new tweet, leaker "kopite7kimi" says that "nothing changed yet" with the GeForce RTX 4090 and that the "performance will be slightly higher than twice the RTX 3090".

 

To clarify, he said "I mean RTX 4090 > 2x RTX 3090". Yeah, it can't get any clearer than that.

 

We already knew the GeForce RTX 4090 would be twice as fast, but OVER twice as fast isn't too much of a stretch. Not when we're hearing rumors that NVIDIA is testing an even faster AD102-based GPU with up to 800-900W of power.

 

https://www.tweaktown.com/news/86703/nvidia-geforce-rtx-4090-actually-over-twice-as-fast-the-3090/index.html 


56 minutes ago, ewitte said:

3x is out there, but the math 100% supports 2x, with the only wildcard being the final clock speeds. At least for the 3090 vs 4090.

Double the CUDA cores, even with higher clocks, doesn't lead to double the performance. And FYI, the rumors aren't even claiming double the CUDA cores. If 2x or greater is going to be even close to true, it will be with RT and DLSS both on.

 

The rumors only claim 70% more CUDA cores; the math simply does not stack up in any shape at all.
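To put rough numbers on that, here's a back-of-envelope sketch; the clock uplift and scaling-efficiency figures are illustrative assumptions, not leaked specs.

```python
# Back-of-envelope: 70% more CUDA cores (per the rumor) plus an assumed
# clock bump, with less-than-perfect scaling. All knobs are illustrative.
core_ratio = 1.70          # rumored core-count increase
clock_ratio = 1.20         # assumed boost-clock uplift (not a leak)
scaling_efficiency = 0.75  # assumed: wider GPUs rarely scale linearly

paper_math = core_ratio * clock_ratio                  # ~2.04x on paper
realistic = 1 + (paper_math - 1) * scaling_efficiency  # ~1.78x after losses

print(f"paper math:  {paper_math:.2f}x")
print(f"with losses: {realistic:.2f}x")
```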

 

47 minutes ago, BiG StroOnZ said:

Not when we're hearing rumors that NVIDIA is testing an even faster AD102-based GPU with up to 800-900W of power.

LOL! Ok, can we actually keep things to the realm of the actually possible? Anything like that can only be direct-die water cooled in server applications; 800W and greater isn't coming to the consumer market outside of Kingpin cards, which already support those power levels (LN2 only, obviously). Yes, I know that's a different type of thing, but it doesn't change the fact that there is no viable way to cool such a GPU with an air cooler, and let me ask: would Nvidia ever release such a product into the consumer product chain?

 

Rumors != facts. The facts are we know nothing. What we can do, however, is use historical data trends, which do not support a 2x increase as being likely. I'm more than happy to be wrong; what I'm not happy to do is spread very-unlikely-to-happen rumor talk as if it's actually as plausible as claimed.


24 minutes ago, leadeater said:

LOL! Ok, can we actually keep things to the realm of the actually possible? Anything like that can only be direct-die water cooled in server applications; 800W and greater isn't coming to the consumer market outside of Kingpin cards, which already support those power levels (LN2 only, obviously). Yes, I know that's a different type of thing, but it doesn't change the fact that there is no viable way to cool such a GPU with an air cooler, and let me ask: would Nvidia ever release such a product into the consumer product chain?

 

Anything is possible since the AD102 test board has more than two 16-pin connectors. How is obviously the big question (maybe an AIO hybrid cooler like on the 3090 and 3090 Ti K|NGP|N). As for whether NVIDIA would ever release such a product into the consumer product chain, I'd say if they're looking to bring back their Titan line, why the heck not.

 

There's also this:

 

 
Quote

The latest comes from Kopite7kimi, who reveals NVIDIA has been working on a reference cooler with three fans.

 

Rumor alleges that a triple-fan cooler for the reference AD102 board exists (so PG136/139). This does not mean it will be used by the RTX 4090 though; it is more likely for the RTX 4090 Ti, which is rumored to launch later with an even higher TDP.

 

One should note that a very similar rumor appeared right before the RTX 30 series was released. NVIDIA allegedly had a triple-fan reference cooler for engineering samples of its RTX 30 series. However, it was never used for any of the flagship GA102 models.

 

https://videocardz.com/newz/nvidia-allegedly-designed-a-triple-fan-reference-cooler-for-geforce-rtx-40-graphics-with-ad102-gpu


6 minutes ago, BiG StroOnZ said:

Anything is possible since the AD102 test board has more than two 16-pin connectors

Anything is not possible; it could have 42843 16-pin connectors and it wouldn't change anything, because that's not the problem. Unless you want to strap on a 20m³ air cooler, you can't just cool any amount of power.

The past AIO AIB cards only had small RADs and were not designed to cool large amounts of power; they were designed for the cooler to be taken OFF and LN2 to be used. That also required downloading the BIOS that allows that power limit to be used.

Edit:

This can cool 250W and is equivalent to an average/sub-par 240mm RAD

[Image: hybrid AIO GPU cooler]

So you think a reference GPU is going to have 3x this? Does this pass the logic test?


14 minutes ago, leadeater said:

Anything is not possible; it could have 42843 16-pin connectors and it wouldn't change anything, because that's not the problem. Unless you want to strap on a 20m³ air cooler, you can't just cool any amount of power.

The past AIO AIB cards only had small RADs and were not designed to cool large amounts of power; they were designed for the cooler to be taken OFF and LN2 to be used. That also required downloading the BIOS that allows that power limit to be used.

Edit:

This can cool 250W and is equivalent to an average/sub-par 240mm RAD

[Image: hybrid AIO GPU cooler]

So you think a reference GPU is going to have 3x this? Does this pass the logic test?

 

I understand your concerns, and the thought process behind them, but you seem to doubt engineers' abilities to advance technology.

 

Out of curiosity, I wanted to see what a single 360mm radiator is capable of, and with fans @ 2800 RPM you're looking at up to 750W of cooling capacity:

 

[Attached chart: 360mm radiator cooling capacity vs. fan speed]

 

Combine that with a hybrid solution; do you think it's still impossible to cool such a product (800-900W)? I'm not saying this is what they would do, and I also understand EK makes far superior products compared to your standard AIO CLC. Nevertheless, where there's a will, there's a way.
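For a rough sanity check of that chart, the air-side heat equation Q = ṁ·cp·ΔT gets you into the same ballpark; the airflow and temperature-delta values in the sketch below are assumptions for illustration, not measurements.

```python
# Rough air-side estimate for a 360mm radiator: Q = m_dot * cp * delta_T.
# Airflow per fan and the air temperature rise are assumed values.
cfm_per_fan = 70      # assumed airflow per fan at ~2800 RPM (CFM)
fans = 3
air_density = 1.2     # kg/m^3 at roughly room temperature
cp_air = 1005         # J/(kg*K), specific heat of air
delta_t = 6           # K, assumed air temperature rise across the rad

volume_flow = fans * cfm_per_fan * 0.000472  # 1 CFM ~= 0.000472 m^3/s
mass_flow = volume_flow * air_density        # kg/s
watts = mass_flow * cp_air * delta_t
print(f"~{watts:.0f} W dissipated")          # ~717 W, same ballpark as ~750 W
```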


30 minutes ago, BiG StroOnZ said:

I understand your concerns, and the thought process behind them, but you seem to doubt engineers' abilities to advance technology.

Look, these are rumors; remember that strongly and wisely. None of this is official information. What we know right now is what is practical to cool and what is not, and through which methods. 800W is not and will never be practical to cool on a PCIe card of a reasonable form-factor size; these are already established design constraints.

 

Nvidia, right now, does not sell any datacenter GPUs above 400W that are not liquid-cooling only. They already have these high-power GPUs on the market, and they are liquid cooled only.

 

30 minutes ago, BiG StroOnZ said:

Combine that with a hybrid solution; do you think it's still impossible to cool such a product (800-900W)?

Never. Not at a size you can actually put into your computer.

 

30 minutes ago, BiG StroOnZ said:

Out of curiosity, I wanted to see what a single 360mm radiator is capable of, and with fans @ 2800 RPM you're looking at up to 750W of cooling capacity:

That 360 RAD is very large; it's a 60mm-deep RAD. I have many of these EK RADs of that depth (480mm and 360mm), and I'll tell you now they are vastly bigger than any cooler on any current AIB GPU, even the 3090 Ti.

 

For reference, a standard PC fan is 25mm deep.

 

Liquid coolers are much better at removing heat from the silicon die; it's not just about how much heat a cooler can dissipate, it's also about how well and how efficiently it can remove heat from the die itself.

 

30 minutes ago, BiG StroOnZ said:

Nevertheless, where there's a will, there's a way.

Nope, that's just wishful thinking. Don't just believe rumors because you want to. 800W-900W isn't happening on a consumer product.

 

450W is reasonable and I expect that to be true.


16 minutes ago, leadeater said:

Look, these are rumors; remember that strongly and wisely. None of this is official information. What we know right now is what is practical to cool and what is not, and through which methods. 800W is not and will never be practical to cool on a PCIe card of a reasonable form-factor size; these are already established design constraints.

 

Nvidia, right now, does not sell any datacenter GPUs above 400W that are not liquid-cooling only. They already have these high-power GPUs on the market, and they are liquid cooled only.

 

Never. Not at a size you can actually put into your computer.

 

That 360 RAD is very large; it's a 60mm-deep RAD. I have many of these EK RADs of that depth (480mm and 360mm), and I'll tell you now they are vastly bigger than any cooler on any current AIB GPU, even the 3090 Ti.

 

Liquid coolers are much better at removing heat from the silicon die; it's not just about how much heat a cooler can dissipate, it's also about how well and how efficiently it can remove heat from the die itself.

 

Nope, that's just wishful thinking. Don't just believe rumors because you want to. 800W-900W isn't happening on a consumer product.

 

450W is reasonable and I expect that to be true.

 

Well then, what do you think the 800-900W 4000-series Ada Lovelace leaks/rumors are about? Just a shot-in-the-dark, bald-faced lie to stir up conversation and the perpetual hype machine? Meaning, do you believe they have no merit whatsoever?


31 minutes ago, BiG StroOnZ said:

Well then, what do you think the 800-900W 4000-series Ada Lovelace leak/rumor is about? Just a shot-in-the-dark, bald-faced lie to stir up conversation and the perpetual hype machine? Meaning, do you believe it has no merit whatsoever?

If there is a GPU coming with that kind of TGP, it's datacenter. But I still doubt that. It invites many problems, and that's also assuming that amount of heat can be removed from the GPU die well enough to keep it stable and at reasonable clocks to make it worth it at all. The Nvidia H100 SXM is 700W TDP/TGP; the H100 PCIe is 350W. Notice the vast difference?

 

The H100 is TSMC 4nm btw.

 

Performance per watt is still very important, so a 900W GPU is not going to be wanted if it's a regression in efficiency; that alone puts it in the realm of unlikely. Then there are other practical issues, like actually being able to power it. The Nvidia DGX H100 already requires 10.2kW of power, and most 3-phase PDUs that supply an entire rack are 22kW, so to use these types of systems you have to go with more advanced and expensive rack power delivery. Increasing that 10.2kW to ~13kW just makes it even harder.
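As a quick illustration of that rack-power point: the GPU count, the overhead factor, and the two-systems-per-PDU framing in the sketch below are my assumptions; only the 10.2kW, 22kW, and 700W/900W figures come from the discussion.

```python
# Rough check: bump 8 SXM GPUs from 700 W to a rumored 900 W each.
gpus = 8                      # assumed GPU count per DGX-style system
current_system_kw = 10.2      # DGX H100 figure from the post above
extra_gpu_kw = gpus * (900 - 700) / 1000   # +1.6 kW at the GPUs alone
overhead = 1.1                # assumed extra fan/VRM/PSU losses

new_system_kw = current_system_kw + extra_gpu_kw * overhead
print(f"~{new_system_kw:.1f} kW per system")   # ~12 kW, trending toward ~13 kW
print(f"two systems: ~{2 * new_system_kw:.1f} kW vs a 22 kW rack PDU")
```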

 

But why would a 900W datacenter GPU be coming out now, when the H100 is barely even on the market?

 

So yea, this 800W-900W I believe is complete bunk, or it's about some exotic Kingpin card for LN2 and exists for PR/marketing reasons. I mean, you are talking about it now, right? It doesn't really matter if it's not actually going to be a wide retail card; it's already serving its purpose.


51 minutes ago, leadeater said:

If there is a GPU coming with that kind of TGP, it's datacenter. But I still doubt that. It invites many problems, and that's also assuming that amount of heat can be removed from the GPU die well enough to keep it stable and at reasonable clocks to make it worth it at all. The Nvidia H100 SXM is 700W TDP/TGP; the H100 PCIe is 350W. Notice the vast difference?

 

The H100 is TSMC 4nm btw.

 

Performance per watt is still very important, so a 900W GPU is not going to be wanted if it's a regression in efficiency; that alone puts it in the realm of unlikely. Then there are other practical issues, like actually being able to power it. The Nvidia DGX H100 already requires 10.2kW of power, and most 3-phase PDUs that supply an entire rack are 22kW, so to use these types of systems you have to go with more advanced and expensive rack power delivery. Increasing that 10.2kW to ~13kW just makes it even harder.

 

But why would a 900W datacenter GPU be coming out now, when the H100 is barely even on the market?

 

So yea, this 800W-900W I believe is complete bunk, or it's about some exotic Kingpin card for LN2 and exists for PR/marketing reasons. I mean, you are talking about it now, right? It doesn't really matter if it's not actually going to be a wide retail card; it's already serving its purpose.


Ergo, what is the maximum TGP for Ada that you think we're going to see?

 

They are claiming 450W for the RTX 4090 and 600W for the RTX 4090 Ti (and obviously the 900W for the Ada Titan that you believe is bunk).  


 

11 hours ago, BiG StroOnZ said:

While you're correct that it's "quite a step down", it's not when compared to NVIDIA's lineup (it's actually the same). The 3060 Mobile has 3840 CUDA cores and the 3070 Mobile has 5120 CUDA cores, also a 33.33% increase in shaders. So in that regard, I would say you could claim that they equate to NVIDIA's range, using your examples of the 3060 and 3070.

Also take into consideration the quoted clock differences. Referencing the same slide you reposted, for whatever reason the stated clock of the A730M is much lower than that of the A770M. The gap between the Nvidia models is much smaller whether you look at base or boost. Of course we don't know final "game" clocks, but this is all we have for now.


