
Intel Arc A730M 12GB 3DMark TimeSpy Score Leaked, in the league of RTX 3060/3070 Laptop GPU (Update #2 ~ More Gaming Benchmarks Added)

Summary

Over at Weibo, the leaker known as “Golden Pig Upgrade” (translated) posted a 3DMark Time Spy result of 10,138 (10,107 Graphics Score). This score was achieved with Intel’s new Arc Alchemist mobile GPU, the Arc A730M, which is not even the full ACM-G10-based model; this discrete GPU features 24 of the 32 available Xe-Cores. Depending on where you look, it scores between an RTX 3060 and an RTX 3070 Laptop GPU.
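Just for fun, here's a quick back-of-the-envelope on what the cut-down die implies (a naive sketch assuming linear scaling with Xe-core count, which real GPUs never quite achieve thanks to clock, power, and memory-bandwidth limits):

```python
# Hypothetical, naive estimate: scale the leaked A730M Time Spy graphics score
# linearly from its 24 Xe cores up to the full 32-core ACM-G10 (A770M).
# Real scaling is sub-linear, so treat this as a rough ceiling, not a prediction.
a730m_graphics = 10_107          # leaked Time Spy graphics score
xe_cores_a730m, xe_cores_full = 24, 32

full_die_ceiling = a730m_graphics * xe_cores_full / xe_cores_a730m
print(f"Naive full-die ceiling: {full_die_ceiling:,.0f}")  # ~13,476
```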

 

[Image: Intel Arc A730M 3DMark performance benchmarks]

[Image: Intel Arc A730M mobility GPU 3DMark performance benchmarks]

[Image: Arc A730M]

 

Quotes

Quote

[Image: Intel Arc A730M 3DMark performance benchmark chart]

 

The Arc A730M is based on the Xe-HPG graphics architecture and contains 24 Xe Cores, 24 ray tracing units, 384 execution units, and 3,072 unified shaders, with a 1,100 MHz graphics clock and 12GB of GDDR6 operating on a 192-bit bus. The A730M comes from the upper end of Intel's mobile A-series lineup, sitting one step down from the flagship A770M. The more powerful A770M makes use of all 512 execution units (4,096 unified shaders) of the ACM-G10 ASIC.

 

The A730M comes very close to matching the highest-performing RTX 3060 mobile GPU in graphics performance. Every other RTX 3060 mobile GPU in 3DMark has a graphics score under 10,000, which means the A730M would take second place if we jammed it into the RTX 3060 mobile GPU lineup. Additionally, comparing these results to the competition reveals that the Arc A730M ends up slightly faster than the GeForce RTX 3070 Laptop GPU and Radeon RX 6700M in Time Spy, but takes a slight hit in 3DMark Fire Strike, where it is more of a match for the RTX 3060M and RX 6650M. So the performance is looking really good, and that's actually impressive considering most laptops with the A730M will retail at around $1100-$1200 US.

 

The Arc A730M has only lately begun to appear in laptop designs, albeit exclusively in China at present. Power consumption depends on the notebook manufacturer and can be anywhere between 80 and 120 watts.

 

It looks like by the time the desktop graphics cards are released, we will have performance comparable to NVIDIA's RTX 3070 and AMD's RX 6700 XT.
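As a quick sanity check of the spec figures quoted above (a minimal sketch assuming the standard Xe-HPG layout of 16 Vector Engines per Xe core and 8 FP32 ALUs per engine), the numbers line up:

```python
# Xe-HPG building blocks: each Xe core carries 16 Vector Engines ("execution
# units"), each EU is 8 FP32 lanes wide ("unified shaders"), and each Xe core
# pairs with one ray tracing unit.
xe_cores = 24                    # A730M's enabled Xe cores (of 32 on ACM-G10)
execution_units = xe_cores * 16  # -> 384, as quoted
shaders = execution_units * 8    # -> 3,072, as quoted
rt_units = xe_cores              # -> 24, as quoted

assert (execution_units, shaders, rt_units) == (384, 3072, 24)
```

The same arithmetic holds for the A770M: 32 Xe cores gives 512 EUs and 4,096 shaders, exactly as quoted.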

 

My thoughts

I'd say this is looking pretty darn good for Intel, IMHO. It's well in line with many of the previous predictions, so Intel definitely delivered in that regard. There seems to be some disparity between 3060 and 3070 performance, but that's expected with different laptop models being tested and varying driver optimization. For instance, my RTX 3060 Laptop scores 9,802 (9,303 Graphics) in Time Spy and 21,639 (23,288 Graphics) in Fire Strike, so according to the charts I'm around a 3070 Laptop in Time Spy and around the A730M in Fire Strike. I'd say this definitely brings competition to the midrange mobile market and gives hope for the Arc desktop GPU release. Of course, we will have to wait and see what sort of gaming performance Arc brings; as Tom's Hardware points out, it's easy to optimize for 3DMark rather than for a wide gamut of games.

 

Sources

https://wccftech.com/intel-arc-a730m-12-gb-mobile-gpu-is-faster-than-nvidias-rtx-3070-mobility-3dmark-performance-benchmarks/

https://www.techpowerup.com/295592/intel-arc-a730m-3dmark-timespy-score-spied-in-league-of-rtx-3070-laptop-gpu

https://www.guru3d.com/news-story/in-3dmarkthe-intel-arc-a730m-outperformed-the-rtx-3070-mobile.html

https://videocardz.com/newz/intel-arc-a730m-is-faster-than-rtx-3070-laptop-gpu-in-3dmark-timespy-test

https://www.tomshardware.com/news/intel-arc-a730m-close-to-mobile-rtx-3060-3dmark-time-spy 

 

Update to this story ~ 

 

Summary

Weibo’s user “Golden Pig Upgrade” tested the Intel Arc A730M discrete mobile GPU in a number of games, such as Assassin’s Creed: Odyssey, Metro Exodus, and F1 2020, at two resolutions. Performance lands around an RTX 3060 Laptop GPU, or faster than a desktop RTX 3050 but slower than a desktop RTX 3060, depending on the scenario.

 

[Images: six in-game benchmark screenshots from the Weibo post]

[Image: Intel Arc A730M game benchmark summary]

 

Quotes

Quote

The set of games tested is rather small (F1 2020, Metro Exodus, and Assassin's Creed Odyssey), but the three are fairly "mature" games that have been around for a while. The A730M is able to score 70 FPS at 1080p and 55 FPS at 1440p in Metro Exodus. With F1 2020, we're shown 123 FPS (average) at 1080p and 95 FPS average at 1440p. In Assassin's Creed Odyssey, the A730M yields 38 FPS at 1080p and 32 FPS at 1440p.

 

  • Based on these results, the GPU can run Metro Exodus at 70 FPS in 1080p and 55 FPS in 1440p when using the High quality settings. Based on tests from Notebookcheck, this is better than the RTX 2070 (66 FPS on average at 1080p) but not as good as the RTX 3060 (80 FPS).
  • Arc A730M gets an average of 123 FPS at 1080p and 95 FPS at 1440p in F1 2020. In comparison, the RTX 3050 offers an average of 120 FPS at the 1080p High preset.
  • The A730M achieves 38 FPS at 1080p and 32 FPS at 1440p in Assassin's Creed Odyssey. These figures suggest that the A730M is marginally faster than the desktop GeForce RTX 3050.

The data shown below in the Intel Arc Control panel software indicates that the GPU has been running at a 2,050 MHz boost and 92 W. Do note that this is a FurMark test, so these metrics almost certainly do not correspond to real-world use.

 

[Image: Intel Arc Control readout during the FurMark test]
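Putting rough percentages on the quoted 1080p averages helps place the A730M (a quick sketch using only the framerates cited above; the comparison figures are Notebookcheck averages across different laptop designs, so take the deltas loosely):

```python
# 1080p average FPS from the quoted results and Notebookcheck comparisons.
a730m_metro, rtx2070_metro, rtx3060_metro = 70, 66, 80   # Metro Exodus, High
a730m_f1, rtx3050_f1 = 123, 120                          # F1 2020, High

print(f"Metro Exodus: {a730m_metro / rtx2070_metro - 1:+.1%} vs RTX 2070, "
      f"{a730m_metro / rtx3060_metro - 1:+.1%} vs RTX 3060")
print(f"F1 2020: {a730m_f1 / rtx3050_f1 - 1:+.1%} vs desktop RTX 3050")
# -> roughly +6% on the 2070, -12.5% on the 3060, +2.5% on the desktop 3050
```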

 

Sources

https://www.guru3d.com/news-story/intel-arc-a730m-game-tests-gaming-performance-differs-from-synthetic-performance.html

https://videocardz.com/newz/intel-arc-a730m-has-been-tested-in-games-with-performance-between-rtx-3050-and-rtx-3060

https://www.tomshardware.com/news/intel-arc-a730m-gaming-benchmarks-show-rtx-3050-mobile-level-performance

https://www.techpowerup.com/295624/intel-arc-a730m-tested-in-games-gaming-performance-differs-from-synthetic

https://wccftech.com/intel-high-end-arc-a730m-gpu-is-barely-faster-than-an-nvidia-rtx-3050-in-gaming/

https://www.pcgamer.com/first-intel-arc-alchemist-benchmarks-are-a-bit-of-a-mixed-bag/

https://hothardware.com/news/intel-arc-a730m-benchmarks-mobile-geforce-rtx-gpus

 

My thoughts

I think there is definitely some driver maturation still to be done, of course, but the performance is not too shabby. I know people were expecting more because of the synthetics, but I think this is still a great start, to be honest. I'm not as gloom-and-doom as some of these news outlets (not all of them are) because it's still early, and I also think it's unfair to judge the GPU based solely on three games. Between now and when the GPUs are widely available, they will definitely have time to do some serious work.

 

Second update to this story ~ 

 

Summary

There are some more gaming (and workstation) benchmarks available today, and it seems the Arc A730M loses to an RTX 3060M in almost every instance (except Metro Exodus and Elden Ring). The performance is all over the place, with many inconsistent results. There's a brief video from "Golden Pig Upgrade" comparing the Arc A730M and GeForce RTX 3060 Laptop, and also a full review from IT-Home. The IT-Home results are based on an unofficial driver (30.0.101.1726), so they may vary. But both reviews appear to show similar synthetic performance, so the numbers should be reasonably accurate.

 

[Image: Arc A730M Total War benchmark (VideoCardz)]

[Image: Arc A730M Boundary benchmark (VideoCardz)]

[Image: Arc A730M Civilization VI benchmark (VideoCardz)]

[Image: Arc A730M Gears of War 5 benchmark (VideoCardz)]

[Image: Arc A730M Hitman 2 benchmark (VideoCardz)]

[Image: Arc A730M Metro Exodus benchmark (VideoCardz)]

[Image: A730M vs RTX 3060 comparison]

[Images: additional Arc A730M benchmark tables]

 

Quotes

Quote

Some Arc A730M results are not fully explained (settings and resolution), which may lead to wrong conclusions; however, just for the sake of basic understanding, these charts should provide all the necessary data.

 

With the exception of Metro Exodus Enhanced Edition, the Arc A730M is not as fast as the RTX 3060, and NVIDIA's GPU outperforms Intel's in almost every test. But worse performance is not the only problem; Intel clearly needs to work on the drivers, as reviewers report that some games do not even start or output errors. The Intel mobile Arc series seems to do just fine in 3DMark, though.

 

The GeForce RTX 3060 Mobile pulled a resounding victory over the Arc A730M. If we calculate the geometric mean of the average framerates, the GeForce RTX 3060 Mobile finished with a score of 109.58, while the Arc A730M put up 67.63, making the GeForce RTX 3060 Mobile 62% faster overall. The performance delta looks accurate, considering that the Arc A730M performed like a GeForce RTX 3050 Mobile in previous gaming benchmarks.

 

Across the eight titles, the Arc A730M only managed to score a victory over the GeForce RTX 3060 Mobile in Metro Exodus Enhanced Edition, where Alchemist delivered up to 6.59% higher average framerates than its Ampere rival. Some of the most significant performance margins in the RTX 3060's favor were in Boundary and Counter-Strike: Global Offensive; however, the GeForce RTX 3060 Mobile machine was utilizing NVIDIA DLSS in the former, giving it an unfair advantage.

 

The reviewer also provided some workstation GPU results in the shape of the popular SPECviewperf 2020 benchmark. The scenario didn't change, and the GeForce RTX 3060 Mobile continued to dominate the Arc A730M with performance deltas spanning from 20% up to a whopping 515%.
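For what it's worth, the geometric mean quoted above is the right way to average framerates across games, since it keeps one outlier title from dominating the result. Working from just the two published summary scores (the per-game framerates aren't in the quote), the headline delta reproduces exactly:

```python
from math import prod  # Python 3.8+

def geomean(framerates):
    """Geometric mean: the appropriate average for ratio-like data such as FPS."""
    return prod(framerates) ** (1 / len(framerates))

print(geomean([60, 120]))  # ~84.9, not 90 -- a low outlier drags it down

# Published geometric means from the review roundup:
rtx_3060m, arc_a730m = 109.58, 67.63
print(f"RTX 3060 Mobile advantage: {rtx_3060m / arc_a730m - 1:.0%}")  # -> 62%
```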

 

Sources

 

https://videocardz.com/newz/first-review-of-intel-alchemist-acm-g10-gpu-is-out-arc-a730m-is-outperformed-by-rtx-3060m-in-gaming

https://wccftech.com/intel-arc-a730m-high-end-mobility-gpu-slower-than-rtx-3060m-despite-latest-drivers/

https://www.tomshardware.com/news/geforce-rtx-3060-mobile-kicks-intel-arc-a730m-around

https://www.bilibili.com/video/BV1US4y1i7ne

https://www.ithome.com/0/623/070.htm

 

My thoughts

I'm guessing this might be why the lineup has been limited to Chinese markets before a global release; it might simply be that the software is not even close to ready. I know things were looking promising with the synthetics a few days ago, and even yesterday the first gaming benchmarks weren't looking too bad. But this is quite a different scenario altogether, as the strong synthetic numbers are doing nothing for the A730M in actual games. This is supposed to be a relatively high-end GPU, and in the end it's not really competitive with NVIDIA's mainstream offering. If this is the final performance to be expected when the product launches worldwide, the only saving grace is pricing, because the performance in this showing is pretty poor. Obviously it would be best to wait until Arc lands in the hands of respected reviewers instead of guesstimating performance from these early reviews. Nevertheless, it's still quite disappointing.


That's looking good for Intel as was said - but it's a bad time for them to launch a card that competes with midrange 30 series cards when 40 series is supposed to be much better. If the price is competitive, though... 


If these numbers translate to game performance Intel is coming out swinging. I would have been impressed if their first discrete mobile GPU was mid-range gaming laptop spec like the 3060, but matching or beating a 3070 is very good. I didn't expect them to, but it seems like they won't be competing for the top of the line with the 40 series set to be released soon.

¯\_(ツ)_/¯

 

 

Desktop:

Intel Core i7-11700K | Noctua NH-D15S chromax.black | ASUS ROG Strix Z590-E Gaming WiFi  | 32 GB G.SKILL TridentZ 3200 MHz | ASUS TUF Gaming RTX 3080 | 1TB Samsung 980 Pro M.2 PCIe 4.0 SSD | 2TB WD Blue M.2 SATA SSD | Seasonic Focus GX-850 Fractal Design Meshify C Windows 10 Pro

 

Laptop:

HP Omen 15 | AMD Ryzen 7 5800H | 16 GB 3200 MHz | Nvidia RTX 3060 | 1 TB WD Black PCIe 3.0 SSD | 512 GB Micron PCIe 3.0 SSD | Windows 11


58 minutes ago, Mel0nMan said:

That's looking good for Intel as was said - but it's a bad time for them to launch a card that competes with midrange 30 series cards when 40 series is supposed to be much better. If the price is competitive, though... 

 

I wouldn't say they are quite DOA, but it's definitely an odd time for launch. If the price is competitive as you mention, they definitely can have a winner on their hands!

 

56 minutes ago, BobVonBob said:

If these numbers translate to game performance Intel is coming out swinging. I would have been impressed if their first discrete mobile GPU was mid-range gaming laptop spec like the 3060, but matching or beating a 3070 is very good. I didn't expect them to, but it seems like they won't be competing for the top of the line with the 40 series set to be released soon.

 

I agree, if they can tweak the drivers to translate this benchmark performance into game performance, they will definitely be highly competitive parts. RTX 3060/3070-level performance is quite appealing for the mobile market, especially at the prices of the laptops that will supposedly contain them.

 

Sadly, they are taking the AMD approach, which doesn't come as a surprise (not competing for the top of the line). However, the majority of gamers are going for 3060/3070 territory anyway. If the desktop Arc GPUs can be priced with a competitive edge, Intel will definitely do well, assuming their drivers are up to snuff.

 


Lol, imagine being AMD right now. You literally spent most of your existence making GPUs only to be able to compete in the midrange market 97% of the time, and then Intel says "screw it, let's make a GPU" and comes out swinging so hard their mobile chips land them up there with a 3070. Oof. Then on top of that, the new 4000 series cards are coming out with 2-3x the performance.


I do wonder if the score is also related to the Intel CPU and that Deep Link action (if that was used for the GPU score), where both the GPU and iGPU are used... maybe? At least it will be fun to see the performance differences, and Intel GPUs are just 5 years away now! 😛


3 hours ago, SlidewaysZ said:

Lol, imagine being AMD right now. You literally spent most of your existence making GPUs only to be able to compete in the midrange market 97% of the time, and then Intel says "screw it, let's make a GPU" and comes out swinging so hard their mobile chips land them up there with a 3070. Oof. Then on top of that, the new 4000 series cards are coming out with 2-3x the performance.

 

Yeah, Intel is definitely coming out swinging, but they do have a former AMD employee assisting them. They did good here.

 

I've heard of 2x (or a bit more), but not 3x.

 

3 hours ago, leadeater said:

Yea no, that isn't happening.

 

Probably not 3x, but 2x is likely according to many rumors from relatively reliable sources. Some of those sources even say more than 2x (how much more would be the big question). I do believe 3x is a stretch, but looking at the proposed specs and TDP increases of Ada, I do not see 2x as an impossible achievement or goal for the 4000 series.

 

1 hour ago, Quackers101 said:

I do wonder if the score is also related to the Intel CPU and that Deep Link action (if that was used for the GPU score), where both the GPU and iGPU are used... maybe? At least it will be fun to see the performance differences, and Intel GPUs are just 5 years away now! 😛

 

Yeah, that would be interesting if they can implement a Hybrid GPU solution like Crossfire used to have with APUs linking to a dGPU to boost performance.

 

Hopefully, now that the laptop Arc stuff is out, it paves the way for us finally seeing the dGPU Arc stuff releasing at least by Fall. I know they've been delaying it, but they probably want a polished product and don't want a "bad driver stigma" like the one AMD has somewhat developed for themselves. They can also use the laptop SKUs as somewhat of a beta test for driver optimization.

 

It seems like the delays are never-ending, but I think we will get there at least by Fall. Maybe the delays are even strategic; despite launching entry-level to upper-midrange products (where they slot in currently), if the price is right, they can still position themselves to compete with NVIDIA's next gen.


40 minutes ago, BiG StroOnZ said:

Probably not 3x, but 2x is likely according to many rumors from relatively reliable sources. Some of those sources even say more than 2x (how much more would be the big question). I do believe 3x is a stretch, but looking at the proposed specs and TDP increases of Ada, I do not see 2x as an impossible achievement or goal for the 4000 series.

Let me ask you something: when was the last time there was a gen-on-gen 2x performance increase? This sounds exactly as reliable as the Zen 4 rumors were, except far less believable.

 

60% if we're lucky.


Not bad, here's hoping it pushes Nvidia to decrease their prices for once, instead of increasing them because they figure they have no competition with AMD still playing catch up.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


20 minutes ago, leadeater said:

Let me ask you something: when was the last time there was a gen-on-gen 2x performance increase? This sounds exactly as reliable as the Zen 4 rumors were, except far less believable.

 

60% if we're lucky.

 

I understand what you're saying. But 980 Ti to 1080 Ti was 80%, not too long ago, and not far off from a 100% increase in performance.

 

My main point is: when was the last time we've seen NVIDIA push the limits of the silicon as hard as is expected with the 4000 series?

 

I think 60% is definitely conservative this time around, all things considered. 60% minimum is my expectation.


8 hours ago, Mel0nMan said:

If the price is competitive, though... 

With how NVIDIA and AMD are now trying to keep GPU prices as high as possible?
Nah, Intel will follow the trend, the cards will do poorly, and Intel will kill them off.

*Insert Witty Signature here*

System Config: https://au.pcpartpicker.com/list/Tncs9N

 


12 hours ago, Mel0nMan said:

That's looking good for Intel as was said - but it's a bad time for them to launch a card that competes with midrange 30 series cards when 40 series is supposed to be much better. If the price is competitive, though... 

 

Given this is their first real dGPU product, just be thankful we even have a 3rd player in the game now. We also have no idea where the A770M will fit.

Otherwise we're basically saying:

  • Don't release anything unless you can compete with something that isn't out yet
  • If you're not the best, don't bother

I would say 80% of people that bought an xx80 or xx90 series card don't actually need all that power anyway; they are just chasing the higher numbers, be it FPS or resolution, or both, and maybe even the clout of having an xx80 or xx90 card.



11 minutes ago, Arika S said:

I would say 80% of people that bought an xx80 or xx90 series card don't actually need all that power anyway; they are just chasing the higher numbers, be it FPS or resolution, or both, and maybe even the clout of having an xx80 or xx90 card.

How dare you! My 6800 XT is totally not a waste with my 60Hz monitor and slow as hell 4930K! lol


56 minutes ago, Arika S said:

 

I would say 80% of people that bought an xx80 or xx90 series card don't actually need all that power anyway; they are just chasing the higher numbers, be it FPS or resolution, or both, and maybe even the clout of having an xx80 or xx90 card.

 

More like people bought whatever was available. I know people who "had to" buy a 3090 because that was the only card available.

 

But roll back to the 10 and 20 series, yeah. Usually someone buys the top-end part because it's "the best card," not necessarily the one that best fits their needs. In most cases, however, feature creep in software just results in that card being more future-proof than buying "just what you need."

 

E.g., you could buy a 1060, 2060, 3060, 4060 every generation, or you could buy a 1080, 3070, 5060 every second generation, bumping down one tier each time because your needs haven't changed. Or you could go 1080 Ti/Titan to 4090 and skip the 20 and 30 series, because the 20 series didn't offer any value at the time of release (RTX), and the 30 series was just not available.

 

To be fair/honest, someone who buys a new monitor, new GPU, and new CPU every year is just throwing money away. You're always better off buying at the tail end of a release window than at the head end. For example, it's cheaper to buy a DDR4 system than a DDR5 system right now, so you may as well buy the DDR4 system and hang onto it for a few years rather than buy DDR5 while all the DDR5 stuff is slower than the DDR4 stuff.

 


9 hours ago, leadeater said:

Let me ask you something: when was the last time there was a gen-on-gen 2x performance increase? This sounds exactly as reliable as the Zen 4 rumors were, except far less believable.

 

60% if we're lucky.

RDNA 1 -> RDNA 2.

Lovelace, from my understanding, will get close to 2x as well, but at the cost of power consumption. It will still be more performance per watt, but when RDNA 3 is also looking at close to 2x without that power draw, I don't know.


6 minutes ago, starsmine said:

RDNA 1 -> RDNA 2.

Lovelace, from my understanding, will get close to 2x as well, but at the cost of power consumption. It will still be more performance per watt, but when RDNA 3 is also looking at close to 2x without that power draw, I don't know.

Huh?

 

https://www.techspot.com/review/2218-radeon-6700-xt-vs-5700-xt/

 

The only way I can see you making the argument that RDNA 2 was twice as good as RDNA 1 is if you compare the 6900 XT vs. the 5700 XT, which is... a choice.

It's entirely possible that I misinterpreted/misread your topic and/or question. This happens more often than I care to admit. Apologies in advance.

 

珠江 (Pearl River): CPU: Intel i7-12700K (8p4e/20t); Motherboard: ASUS TUF Gaming Plus Z690 WiFi; RAM: G.Skill TridentZ RGB 32GB (2x16GB) DDR4 @3200MHz CL16; Cooling Solution: NZXT Kraken Z53 240mm AIO, w/ 2x Lian Li ST120 RGB Fans; GPU: EVGA Nvidia GeForce RTX 3080 10GB FTW3 Ultra; Storage: Samsung 980 Pro, 1TB; Samsung 970 EVO, 1TB; Crucial MX500, 2TB; PSU: Corsair RM850x; Case: Lian Li Lancool II Mesh RGB, Black; Display(s): Primary: ASUS ROG Swift PG279QM (1440p 27" 240 Hz); Secondary: Acer Predator XB1 XB241H bmipr (1080p 24" 144 Hz, 165 Hz OC); Case Fans: 1x Lian Li ST120 RGB Fan, 3x stock RGB fans; Capture Card: Elgato HD60 Pro

 

翻生 (Resurrection): CPU: 2x Intel Xeon E5-2620 v2; Motherboard: ASUS Z9PR-D12 (C602 chipset) SSI-EEB; RAM: Crucial 32GB (8x4GB) DDR3 ECC RAM; Cooling Solution: 2x Cooler Master Hyper 212 EVO; GPU: ASRock Intel ARC A380 Challenger ITX; StorageCrucial MX500, 500GB; PSU: Super Flower Leadex III 750W; Case: Phanteks Enthoo Pro; Expansion Card: TP-Link Archer T4E AC1200 PCIe Wi-Fi Adapter Display(s): Dell P2214HB (1080p 22" 60 Hz)

 

壯麗 (Glorious): Mainboard: Framework Mainboard w/ Intel Core i5-1135G7; RAM: G.Skill Ripjaws 32GB (2x16GB) DDR4 SODIMM @3200MHz CL22; eGPU: Razer Core X eGPU Enclosure w/ (between GPUs at the moment); Storage: Samsung 970 EVO Plus, 1TB; Display(s): Internal Display: Framework Display; External Display: Acer (unknown model) (1080p, 21" 75 Hz)


22 minutes ago, CT854 said:

Huh?

 

https://www.techspot.com/review/2218-radeon-6700-xt-vs-5700-xt/

 

The only way I can see you making the argument that RDNA 2 was twice as good as RDNA 1 is if you compare the 6900 XT vs. the 5700 XT, which is... a choice.

The question was gen on gen; to not compare flagship to flagship is a choice when comparing gen on gen.
If you want another example of gen on gen getting close to 2x that isn't that, then you have Pascal.

The big reason we didn't get big gains in rasterization from Turing and Ampere is that die space got taken up by RT and Tensor cores rather than being focused on CUDA.



The RTX 4000 series is going to be released this summer; Arc can't compete with it unless it's very competitively priced...

Someone of course has mentioned it before, but just to be sure.

I edit my posts more often than not


28 minutes ago, starsmine said:

The question was gen on gen; to not compare flagship to flagship is a choice when comparing gen on gen.

Okay but... why? What's your reasoning for comparing two flagship cards even though they are in completely different price classes and very different silicon? My reasoning for comparing within a card series is that they exist in a somewhat similar price class. Are you saying that if Nvidia hypothetically capped out Ada at an RTX 4060, you'd be compelled to compare that card against an RTX 3090 Ti as a representative comparison of the two architectures? I would have tons of trouble justifying that decision.

 

28 minutes ago, starsmine said:

If you want another example of gen on gen getting close to 2x that isn't that, then you have Pascal.

A one-game 4K benchmark of a Titan X Maxwell vs. a Titan X Pascal gives a 60% improvement at stock. Impressive to be sure, but I wouldn't call it close to 2x, nor would I call it representative, since it's... a single game benchmark. What about average data across multiple test cases?



7 hours ago, Salv8 (sam) said:

With how NVIDIA and AMD are now trying to keep GPU prices as high as possible?
Nah, Intel will follow the trend, the cards will do poorly, and Intel will kill them off.

I kind of doubt that; it depends on how they go about it. If they treat this as an early-starter path, then they're resigned to playing catch-up. But if they really want to claim they're "equal" to the other GPU companies, then maybe, at least for gaming. Other workloads might do okay and just need more GPUs plus driver and software fixes so things stop glitching on their platform. So if they make bad-value cards for gaming and produce too many of them, then maybe they don't want to fully commit to gaming (at least for a generation or two).


16 hours ago, BiG StroOnZ said:

 

I wouldn't say they are quite DOA, but it's definitely an odd time for launch. If the price is competitive as you mention, they definitely can have a winner on their hands!

 

 

I agree, if they can tweak the drivers to translate this benchmark performance into game performance, they will definitely be highly competitive parts. RTX 3060/3070-level performance is quite appealing for the mobile market, especially at the prices of the laptops that will supposedly contain them.

 

Sadly, they are taking the AMD approach, which doesn't come as a surprise (not competing for the top of the line). However, the majority of gamers are going for 3060/3070 territory anyway. If the desktop Arc GPUs can be priced with a competitive edge, Intel will definitely do well, assuming their drivers are up to snuff.

 

IMO, high-end GPUs in laptops are just stupid; they don't make sense for the form factor. If you compare a midrange GPU laptop with a high-end one, the size difference is so big that it's honestly not worth it. Also, most of the time you would have to use an external monitor to actually make good use of the high-end GPU.


1 hour ago, CT854 said:

Okay but... why? What's your reasoning for comparing two flagship cards even though they are in completely different price classes and very different silicon? My reasoning for comparing within a card series is that they exist in a somewhat similar price class. Are you saying that if Nvidia hypothetically capped out Ada at an RTX 4060, you'd be compelled to compare that card against an RTX 3090 Ti as a representative comparison of the two architectures? I would have tons of trouble justifying that decision.

You mean when Kepler came out?

Because that's what we did when the 680 came out, using GK104 rather than GK110.
We did it again when Maxwell came out, with the 980 using GM204 rather than GM200.


2 minutes ago, starsmine said:

You mean when Kepler came out?

Because that's what we did when the 680 came out, using GK104 rather than GK110.
We did it again when Maxwell came out, with the 980 using GM204 rather than GM200.

What of this fact? Both GPUs slotted in at a similar price class.



17 hours ago, Mel0nMan said:

That's looking good for Intel as was said - but it's a bad time for them to launch a card that competes with midrange 30 series cards when 40 series is supposed to be much better. If the price is competitive, though... 

Intel's biggest problem: some games just outright don't work when they detect that you have an Intel GPU, and some games don't play nicely with Intel's drivers.

A PC Enthusiast since 2011
AMD Ryzen 7 5700X@4.65GHz | GIGABYTE GTX 1660 GAMING OC @ Core 2085MHz Memory 5000MHz
Cinebench R23: 15669cb | Unigine Superposition 1080p Extreme: 3566
