Imagination developed ray tracing for smartphones 3 years ago… will Apple rekindle the project?

I was pretty excited when I heard that Apple had brought Imagination Technologies back into business: https://www.anandtech.com/show/15272/imagination-and-apple-sign-new-agreement

 

Not only does this mean that iPhones and iPads released within the next 2 years should reach ~2 TFLOPS of graphics performance, but it also raises the possibility of RAY-TRACING and VR games coming to iOS or a hybrid of iPadOS & macOS.

 

Three things point to the likelihood of this happening: what smartphone graphics were capable of 3 years ago, how PowerVR graphics performance today compares to back then, and what mobile graphics performance will look like over the next 2 years.

 

First, two important capabilities of smartphone graphics were demonstrated by Epic Games and Imagination Technologies around 2016/2017 (the year the iPhone 8, OnePlus 5, and Galaxy S8 were released): flagship mobile GPUs could render particle-heavy scenes at 900p-1080p (albeit heavily optimized with Vulkan), and they could handle light realtime ray tracing.

 

 

More details:

Spoiler: [embedded demo videos]

[slides: comparisonV11_slide2.png, comparisonV11_slide4.png]

 

That 10 FPS frame chug tho…

Spoiler: [image: comparisonV8.png, plus embedded videos]

https://www.imgtec.com/blog/unreal-engine-and-the-ray-tracing-revelation/

 

 

Second, smartphone GPUs are now ~80% more powerful, and the iPad Pro's GPU performance has doubled, since the UE4 and PowerVR ray-tracing demos were released (from ~500 GFLOPS to ~1 TFLOPS and from ~750 GFLOPS to ~1.5 TFLOPS respectively). Mobile graphics will double again to 2-3 TFLOPS (roughly 4x the performance of 2017 devices) in upper-midrange and flagship devices released between early 2022 and 2023.
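As a quick sanity check, here's a minimal Python sketch of that trajectory, assuming the simple "performance doubles every ~3 years" rule of thumb and the rough GFLOPS figures above (ballpark estimates, not measured specs):

```python
# Back-of-envelope projection: GPU throughput doubling every ~3 years.
# Baseline figures are the rough estimates quoted above, not measured specs.

def project_gflops(start_gflops, years, doubling_years=3):
    """Project GFLOPS after `years`, doubling every `doubling_years` years."""
    return start_gflops * 2 ** (years / doubling_years)

phone_2017 = 500   # ~flagship phone GPU around the 2016/2017 demos
ipad_2017  = 750   # ~iPad Pro GPU at the same time

# ~3 years on (today): matches the ~1 TFLOPS / ~1.5 TFLOPS figures above
print(project_gflops(phone_2017, 3))   # -> 1000.0
print(project_gflops(ipad_2017, 3))    # -> 1500.0

# ~6 years on (2022/2023): the "2-3 TFLOPS, roughly 4x 2017" claim
print(project_gflops(phone_2017, 6))   # -> 2000.0
print(project_gflops(ipad_2017, 6))    # -> 3000.0
```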

 

PHONES IN 2017

[chart: zIjb19S.png]

 

PHONES AVAILABLE TODAY

[chart: bgjCqfb.png]

 

TODAY'S iPAD PRO vs. TOP-END iGPUs & ENTRY-LEVEL dGPUs

[chart: zswMx5O.png]

 

Based on the observations above, VR and/or ray-traced games should be playable at roughly the following targets (the sketch after this list shows the scaling logic):

  • 90-120fps @ 720p @ medium settings (no ray-tracing)
  • 60fps @ 720p w/ medium ray-tracing quality
  • 60fps @ 900p @ medium/high settings (no RT)
  • 45fps @ 900p w/ medium RT
  • 45fps @ 1080p @ medium/high settings (no RT)
  • 30fps @ 1080p w/ medium RT quality
  • 30fps @ 1440p @ medium/high settings (non-VR, without RT)
  • 75-90fps @ 1440p @ medium settings (for VR)
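Those pairings follow from a crude scaling rule: at a fixed GPU budget, achievable framerate falls roughly in proportion to pixel count. A minimal Python sketch of that logic (my simplification; real games are rarely purely fill/shading-bound):

```python
# Rough fps scaling with resolution at a fixed GPU budget.
# Assumption (mine): frame cost scales linearly with pixels shaded.

RESOLUTIONS = {
    "720p":  (1280, 720),
    "900p":  (1600, 900),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
}

# Use "30fps @ 1440p @ medium/high" from the list as the reference point.
budget = 2560 * 1440 * 30  # pixels shaded per second

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: ~{budget / (w * h):.0f} fps")
# -> 720p ~120, 900p ~77, 1080p ~53, 1440p ~30 fps: roughly the non-RT
#    entries above (ray tracing then eats a further slice of the budget)
```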

 

My final prediction is that by 2023 flagship phones will have graphics horsepower equivalent to, or just shy of, the GTX 970, RX 480/570/580, GTX 1650 SUPER, and GTX 1060, which will make 120fps gaming at 720p, ray tracing at 30-60fps, and decent AAA VR gaming possible on most phones & foldables.

 

[chart: rIJACoL.png]

  • Aztec Ruins (High Tier) = 1080p ultra or 1440p medium
  • Manhattan 3.1.1 1440p, Car Chase & Aztec Ruins (Normal Tier) = 1080p high, 720p with medium/high RT, or 1440p low
  • Manhattan 3.1 = 1080p low, 900p medium/high, or 720p with low/medium RT
  • Manhattan = 900p low/medium, 900p with low RT, or 720p high

 

This is based on the fact that graphics performance roughly doubles every 2-3 years, and that ARM smartphones & x86 foldables will be able to run realtime ray tracing or VR games at decent framerates as soon as they reach parity with the GTX 1060 and GTX 1650 SUPER (both within 10-20% of the RX 480/580 & GTX 970, the minimums for a decent VR experience).
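To put rough numbers on that: using public ballpark FP32 specs for those desktop cards (not benchmark results, and FLOPS aren't strictly comparable across architectures), a quick log2 calculation shows how long an ~1.5 TFLOPS iPad-class GPU would need at different doubling rates:

```python
import math

# Years for a ~1.5 TFLOPS mobile GPU to match desktop-card throughput,
# assuming performance doubles every `doubling_years` years.
targets_tflops = {
    "GTX 970":        3.9,
    "GTX 1060":       4.4,
    "GTX 1650 SUPER": 4.4,
    "RX 480/580":     5.8,
}

start = 1.5
for doubling_years in (2, 3):
    for card, tflops in targets_tflops.items():
        years = doubling_years * math.log2(tflops / start)
        print(f"{card}: ~{years:.1f} yrs at a {doubling_years}-year doubling")
# At a 2-year doubling, GTX 1060 / 1650 SUPER parity lands ~3 years out
# (around 2023); at a 3-year doubling it slips toward 2025.
```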

 

Here's footage of people playing VR on GPUs as low as the GTX 1050 Ti:

Spoiler:

Playing VR on a Hades Canyon NUC (Vega M GH): [video]

On a GTX 1050 Ti & 970M (within 15% of a GTX 1650): [video]

On a GTX 1060 Mobile and GTX 970: [videos]

On an RX 470/570: [video]

On an RX 480/580: [video]

 

And hopefully by that point these VR titles will be playable no matter what you're running them on:

Spoiler: [embedded videos]

 

I'm also sure that wider adoption and integration of the Vulkan API, RT & tensor cores, remote wireless charging, and game streaming via WiGig, Wi-Fi 6, and 5G will make realtime ray tracing and VR a reality for lower-end devices as well. ;)
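On the streaming point, a back-of-envelope bandwidth estimate (all assumptions mine: two 1440p eye buffers at 90fps and a ~200:1 H.265-class compression ratio) suggests the raw throughput is within reach of those links:

```python
# Rough bitrate needed to stream VR video. Assumptions (mine):
# two 1440p eye buffers, 90 fps, 24 bits/pixel, ~200:1 compression.
width, height, eyes, fps, bpp = 2560, 1440, 2, 90, 24

raw_gbps = width * height * eyes * fps * bpp / 1e9        # uncompressed
stream_mbps = width * height * eyes * fps * bpp / 200 / 1e6

print(f"raw: ~{raw_gbps:.1f} Gbps, compressed: ~{stream_mbps:.0f} Mbps")
# ~80 Mbps: comfortable over WiGig, fine on Wi-Fi 6, plausible on good 5G.
# Latency, not bandwidth, is the harder problem for streamed VR.
```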

 

Thoughts?


9 minutes ago, Results45 said:

I'm also sure that wider adoption and integration of the Vulkan API, RT & tensor cores, remote wireless charging, and game streaming via WiGig, Wi-Fi 6, and 5G will make realtime ray tracing and VR a reality for lower-end devices as well. ;)

The problem with Apple is they're practically forcing people to use Metal. There's a Vulkan wrapper, but I think that's only available on macOS and not iOS.

 

Outside of that, it's nice to see someone else entering the market. Or at the very least, putting what Imagination did to use.


My thoughts are that there's a limit to the ability of these small devices to dissipate heat. A point will be reached soon where performance is hampered by heat generation.

The Daily Driver:
AMD Ryzen 7 3700x  |  EVGA GTX 1070 SC  |  48GB Crucial Ballistix Elite DDR4-3600  | Corsair 750D Case
ASRock X570 Pro4 mobo

iRacing Sim Rig:

i5-10600k @ 5.0 GHz  |  EVGA GTX 1080ti Hybrid  | 32GB G.Skill Trident Z Neo DDR4-3600  |  Corsair Air 540 Case
ASUS Z490-E ROG Strix mobo


5 hours ago, vukos said:

My thoughts are that there's a limit to the ability of these small devices to dissipate heat. A point will be reached soon where performance is hampered by heat generation.

 

Yes, that's what happens if you just try to pile on more performance (like overclocking or adding more cores to an existing architecture/node size), but as long as transistor node sizes keep shrinking we can keep pushing performance limits while maintaining or improving power & heat efficiency. Examples include GPUs getting 30-45% faster year after year, the number of cores/threads in consumer chips doubling every 3 years, and pre-overclock HEDT CPU boost frequencies reaching 5.5 GHz.
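Incidentally, that "30-45% faster year after year" figure lines up with the "performance doubles every 2-3 years" rule used earlier in the thread: a steady yearly gain compounds into a doubling time of log 2 / log(1 + gain), as this quick Python check shows:

```python
import math

# Doubling time implied by a steady year-over-year performance gain.
for yearly_gain in (0.30, 0.45):
    doubling_years = math.log(2) / math.log(1 + yearly_gain)
    print(f"{yearly_gain:.0%}/yr -> doubles every ~{doubling_years:.1f} years")
# 30%/yr -> ~2.6 years; 45%/yr -> ~1.9 years, bracketing the
# "doubles every 2-3 years" rule of thumb used earlier in the thread.
```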

 

That said, we could still hit thermal limits that force us to adopt new cooling tech like thermosiphons if we can't keep increasing either heat/performance efficiency or clockspeeds past 2-3nm transistor gate sizes around 2026-2030.

 

On the other hand, higher cell density and remote wireless charging could lead to much smaller, more compact batteries in phones and foldables, freeing up more of the internal space for cooling solutions.

 

We still have options to increase computing capabilities beyond transistor nodes and thermal efficiency, via things like alternate substrate materials, 3D stacking, bio-transistors, AI/machine learning, neuromorphic architectures, reliance on cloud-based hosting & HPC solutions, IoT, and parallel quantum computing.
