UE 5 demo featuring limitless detail: Gaze upon Unreal Engine's true form!

Delicieuxz
7 minutes ago, Commodus said:

In this case, it really was running on a PS5 (dev kit, but still).  The Game Awards had a follow-up interview where Epic stressed that they really did just load it on to the system and record what came from the HDMI output.

 

The choice of the PS5 was probably due to the storage more than anything.  Sony's system not only has an SSD, it has a highly optimized pipeline that makes it possible to load massive amounts of data very quickly.  The demo looks so good in part because Epic doesn't have to wait for gigabytes upon gigabytes of geometry and textures to load, it's just... there.

No one will know until it's actually out.

Also, PCs have RAM for that... and insanely fast NVMe SSDs too. If it was about that, they would have gone with PC, though it's about the $$ they got from Sony lol


29 minutes ago, Goliath_1911 said:

It looks awesome and all, but I doubt it will look anything like that when it's actually out on the PS5. It will look good, but not as good as that video.

Perhaps. Although I imagine Nanite can be combined with other techniques like level streaming, LODs, hell even normal maps if it is absolutely necessary. I doubt developers will be able to make a huge world using just Nanite, but with a combination of old and new technology, they'll get pretty close results on next gen hardware with a couple little tricks here and there. I'm pretty excited for UE5's release tbh.


36 minutes ago, Goliath_1911 said:

It looks awesome and all, but I doubt it will look anything like that when it's actually out on the PS5. It will look good, but not as good as that video.

I've seen plenty of trailers of gameplay footage running on an Xbox One X or PS4 Pro to know that it won't look like that lol

Well, A. It's a demo. B. It's not a good idea to judge what a future console will manage based on what's available on seven-year-old $400 hardware.

 

34 minutes ago, FezBoy said:

I think what they're saying is that when it does "pop in" it can be far enough away that it's only one pixel rendered, as there would theoretically be enough memory savings via reverse tessellation to have an "infinite" render distance (obviously excluding things that render to be smaller than 1px*1px)

Which is exactly what I said. Though I don't know if it's realistically capable of bringing super high poly assets in at that kind of distance. And then, logically, that draw distance changes with resolution. A single pixel at 1080p covers four times the area of a single pixel at 4K, so the one-pixel draw distance at 4K should be roughly twice as far. It seems more likely things will still pop in at a certain distance, but then just have smooth geometry transitions from there. It's hard to tell, because there's a definite possibility that doing things this way will in fact reduce the actual number of rendered polygons, so it could help push draw distance. Time will tell.
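For what it's worth, the resolution scaling is easy to sanity-check with a toy pinhole-camera model (the 60° vertical FOV and the function name here are my own assumptions, nothing from Epic):

```python
import math

def one_pixel_distance(object_size_m, screen_height_px, vfov_deg=60.0):
    """Distance at which an object of the given height projects to exactly
    one pixel tall, under a simple pinhole-camera model."""
    # projected height (px) = size * H / (2 * d * tan(vfov / 2)); solve for d at 1 px
    return object_size_m * screen_height_px / (2 * math.tan(math.radians(vfov_deg) / 2))

d_1080 = one_pixel_distance(1.0, 1080)  # a 1 m rock at 1080p
d_2160 = one_pixel_distance(1.0, 2160)  # the same rock at 4K
print(d_2160 / d_1080)  # 2.0: four times the pixels, but only twice the one-pixel distance
```

So 4K quadruples the pixel count, but the distance at which an object shrinks below one pixel only doubles, since angular size falls off linearly with distance.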

 

Anyways, as for my own curiosity and biases, UE4.25 has added first-order ambisonic rendering and convolution reverb, which is cool. I also finally got an explanation of the difference between 1st, 2nd, and 3rd order ambisonics: it's basically just directional resolution, with 1 being lowest and 3 being highest. They also explain the differences between ambisonics, VSS, and HRTF, which backs me up in my personal vendetta against @an actual squirrel about what binaural audio is.

If the timestamp doesn't work correctly, go to 1:08:00 for the soundfield/ambisonics intro that leads into the soundfield demo, then an explanation of how it works, how it differs from VSS, and how it can be used in conjunction with HRTF for binaural output.
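Side note on those orders: full-sphere ambisonics of order n carries (n+1)² spherical-harmonic channels, which is where the "directional resolution" comes from. A quick sketch (the function name is mine):

```python
def ambisonic_channels(order):
    """Channel count for full-sphere ambisonics of a given order: (order + 1)^2.
    More channels = finer directional resolution."""
    return (order + 1) ** 2

for n in (1, 2, 3):
    print(f"order {n}: {ambisonic_channels(n)} channels")
# order 1 -> 4 channels (W, X, Y, Z), order 2 -> 9, order 3 -> 16
```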

 

#Muricaparrotgang


6 minutes ago, Goliath_1911 said:

No one will know until it's actually out.

Also, PCs have RAM for that... and insanely fast NVMe SSDs too. If it was about that, they would have gone with PC, though it's about the $$ they got from Sony lol

Nah, it's more complicated than that.

 

Sony isn't just using NVMe SSDs like the ones you can buy now.  They offer even faster transfer rates on a basic level (5.5GB/s versus around 3GB/s), and there are also special architecture and compression methods that improve things further.  RAM is another story, although it's important to know that there will be gobs of bandwidth and less overhead.

 

While I wouldn't be surprised if Sony was involved here, there is an actual reason to use the PS5 as the target instead of a PC.  The same thing that makes PCs great for flexibility also hurts the absolute potential performance -- you're beholden to standard architectures, off-the-shelf operating systems, that sort of thing.  Don't get me wrong, a PC with a fast CPU, lots of RAM and a good NVMe SSD will still accomplish a lot, but there are some areas where the PS5 and Xbox Series X will actually come out ahead for a while.


7 hours ago, JZStudios said:

My favorite part is that sick ass mesquite logo in the background.

My second favorite part is the girl just Spider-Man crawling up a sheer rock face with no handholds at all.

This is incorrect. Nanite works in reverse: it takes a super high poly model and, at runtime, reduces the polygon count based on screen space. Tessellation takes a low poly model and adds geometry, typically driven by a displacement map of some sort. This has downsides: it can't do anything with acute angles, and high tessellation values or offsets can produce a lot of jagged geometry. It's also limited by the texture resolution of the displacement map, which is usually authored in a fairly imperfect-science sort of way. I'm not knowledgeable enough to know whether tessellation works in screen space, so that more distant objects get less tessellation, or not.

 

You could argue that the end result should be similar, but Nanite is better. Sampling down from a higher quality mesh will produce better results than trying to fake it and sample up.

 

Anyways, it's a bit like adaptive subdivision in Blender, where you give an object a ridiculously high poly count and at render time it determines how big the geometry is in pixel screen space and scales it to whatever value you assign. Except UE5 seems to do it natively, while Blender requires you to start with a low poly mesh and use its internal modifiers.

Timestamps for the impatient, because he kind of drags on, 

1:50 explains why you'd use it and the advantages

5:00 shows what it actually does compared to standard methods.

The end result is that there's now far more detail that's actually visible, and far less that isn't. Even with traditional LODs you're still potentially rendering a lot of geometry you can't possibly see because it's sub-pixel sized, while what's right in front of the camera looks low res. Nanite makes what's immediately in front of the camera high res, with a smooth drop-off that keeps the on-screen detail level consistent across the entire scene.
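That "consistent on-screen detail" idea can be sketched as a toy budget of roughly one triangle per covered pixel. Purely illustrative and entirely my own assumption — Nanite's actual cluster selection is far more sophisticated than this:

```python
import math

def target_triangles(full_tris, object_size_m, distance_m,
                     screen_height_px=2160, vfov_deg=60.0):
    """Toy screen-space LOD: budget roughly one triangle per covered pixel,
    never exceeding the source mesh's full triangle count."""
    # projected height of the object in pixels (simple pinhole camera)
    px = object_size_m * screen_height_px / (2 * distance_m * math.tan(math.radians(vfov_deg) / 2))
    covered_px = max(px, 0.0) ** 2  # crude projected-area estimate in pixels
    return min(full_tris, int(covered_px))

print(target_triangles(1_000_000, 2.0, 1.0))    # up close: the full 1M triangles
print(target_triangles(1_000_000, 2.0, 500.0))  # far away: a few dozen
```

The point of the sketch: the rendered triangle count tracks screen coverage, not the source asset's density, so detail in front of the camera stays high while distant objects collapse to almost nothing.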

No. Tessellation does that because all the stupid developers did that. There was no rule that you need to pump out a bazillion triangles on any surface that doesn't need it, but they did it anyway, because reasons. Literally. And so most games with tessellation ran like dog shit instead of the opposite: running way faster at no visual expense compared to what we had at the time as the "standard".

LOD in the old days usually had three stages: high for close distance, medium for medium distance, and low for far-away objects. As an object moved away from you or closed in on you, you could see the shift in model quality, and you had to "master" actual separate models and define which was used when. Tessellation is just advanced LOD. You feed it the highest quality model you expect the player to see close up, and let the tessellation engine do its job of progressively removing and adding triangles up to the quality you deemed "highest fidelity expected". Basically the reverse of what everyone was doing, which was taking some low-detail model and forcing the tessellation unit to add things that don't exist there, basically predicting non-existent detail.

That may sometimes work on cylindrical or spherical objects, where you can project polygon smoothing via curvature calculation and add extra polygons to smooth things out, but when you render a cuboid object it literally doesn't matter whether you use 5 million polygons or just the 12 that are the minimum for a cuboid. It'll look the same. And you face the same issue on, I don't know, rocks and other things without a predictable end shape: just pumping in polygons doesn't actually make them better. A sphere has a predictable shape; a rock doesn't.

For a sphere, the engine can clearly know that adding polygons on the curve and chopping away angled surfaces yields actual results (it's how ATI's TruForm, the granddaddy of tessellation, worked, and it's why we sometimes got funny results of inflated objects when the engine was too eager, adding polygons where they shouldn't be). It usually just doesn't on other surfaces. And if that actually worked even for cylindrical or spherical surfaces, I'd never have seen them in blocky form again since the introduction of tessellation, yet that's just not the case even today in many games. Still a bunch of ugly-ass blocky objects.

The whole point of tessellation was that developers shouldn't have to decide whether objects like railings are insignificant and not worth spending polygons and extra work on. It would always spend more polygons up close and fewer further away, where you wouldn't see the difference so obviously. The developer would just make the high quality model they have to make anyway, import it into the game, and the engine would do the rest. The problem at the time was that developers worked on super-high-poly models in, say, Maya anyway, and then had to manually (by hand) chop detail away until performance was acceptable for whatever visuals they wanted, because production models are always higher fidelity than in-game ones. That wasted a lot of time. In the end they were still wasting time chopping away detail and then forcing the tessellation unit to fill it back in, which is just absurdly wasteful.

 

Also, what you're describing under the video is literally how tessellation works. It's not unique or new to Nanite. It's LITERALLY how tessellation works. My god, no wonder all the developers were going bananas with it when no one seems to understand it...

Also, the level of detail is not the same across the scene; it's distance-based progressive detail. The number of polygons to display is fixed at a certain distance from the viewport, but as the player moves around, objects in view change their distance to the viewport. The detail is not constant through the scene, it's dynamic. Constant quality would mean an object half a meter in front of you has 15 million polygons and still has 15 million polygons when it's 200 meters away. Dynamic means the same object has 15 million polygons in front of you and 5,000 when it's 200 meters away.

There is no point in forcing 35 million polygons onto a 15-million-polygon base model. Unless it's a sphere, chances are you won't see any visual difference up close; you'll just waste 20 million polygons that could be better spent on something else, no matter how insignificant that object might seem to the developer or the player. It's why, in the past, misc items like oranges on tables or glasses looked like they were made of 16 polygons: they were considered low-priority props. Tessellation was meant to remove that decision-making from developers. They'd waste less time on a lot of things, and players would still have nice-looking objects. That orange you see on some in-game table didn't start its life as a 16-polygon object during production; it began as something with 50,000 polygons, which were chopped away to fit the performance budget and its status as a misc object. That was literally the point of tessellation: the developer would just stick the 50,000-polygon object into the game and let tessellation do the rest.

Also, fun fact: object models can be compressed heavily for game packaging, where textures cannot. So sticking massive models into a game doesn't create the same problems as massive textures.
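The compression point is easy to demonstrate in miniature: structured vertex data (lots of repeated byte patterns) squeezes far better under a general-purpose compressor than noise-like texel data does. A toy comparison with zlib — a crude stand-in for the claim, not a statement about any real engine's package format (real textures aren't pure noise, but they sit much closer to it than geometry does):

```python
import random
import struct
import zlib

# Smooth, structured vertex positions: two of every three floats are zero,
# so the byte stream is highly repetitive and compresses well.
verts = b"".join(struct.pack("<fff", x * 0.1, 0.0, 0.0) for x in range(10_000))

# Noise-like "texture" bytes: close to incompressible.
rng = random.Random(0)
texels = bytes(rng.randrange(256) for _ in range(120_000))

vert_ratio = len(zlib.compress(verts)) / len(verts)
tex_ratio = len(zlib.compress(texels)) / len(texels)
print(round(vert_ratio, 2), round(tex_ratio, 2))  # verts shrink a lot, texels barely
```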


14 hours ago, WereCatf said:

Well, that was a very impressive video and if they didn't "embellish the truth" or anything, ie. it was actually running in real-time on a PS5, that's some fricking major technological feat. I was irked by the unstable framerate, but eh, that's probably just a matter of some tuning.

They're more withholding some truth in the video itself. The video is only 1440p at 30 FPS with an unstable framerate, but they didn't state what resolution or FPS they were targeting, and likely wanted to lead people to believe it was at least 4K. It's the same situation as Microsoft's "not all games will be 4K 60, as it's up to the developers", but since Sony/Epic were vague about it, they've avoided explicit drama.

Tim Sweeney even claims the PS5 is the only hardware this is capable of being done on, claiming not even high-end PCs can do it "due to the PS5's SSD tech."


These look really impressive. I really need to save up some money during this quarantine to upgrade my rig and be ready for this and other upcoming games. 😅


53 minutes ago, RejZoR said:

No. Tessellation does that because all the stupid developers did that.

No, that's not how tessellation works. Tessellation adds geometry, it doesn't reduce it.

Read THIS directly from Nvidia, who invented it, and then tell me that's not how it works. I'm not reading the rest of that text wall, because you have a basic misunderstanding of what tessellation is and how it works. Nanite takes a super high res model, direct from the game engine, and dynamically, in real time, reduces the amount of geometry based on camera distance. Tessellation does not work that way. At all.

Tessellation works exactly the way I described. It takes a low poly model, subdivides the hell out of it, and then uses displacement textures to try to bring it back into line with a higher poly model. This is fundamentally different from how Nanite works, and Nanite is far superior in multiple ways. For example, tessellation, being controlled by a texture, only has height information. Things with overhangs therefore aren't represented at all, low areas next to high areas are prone to diffuse texture stretching, and the result can be very polygonal due to the limited resolution of the displacement texture, leading to spikes in the displacement or square texel-sized chunks. Nanite doesn't have these issues, because the base mesh has the overhangs and a theoretically unlimited resolution built in, and then reduces down to pixel sizes instead of upscaling like tessellation.



i flat out call bullshit

the sheer fact that the game didn't slow down when showing the triangles at 2:22 should be a big fucking red flag.

games always slow down when showing the triangles of models and terrain, mainly because processing that extra stuff puts extra load on the system and slows it down.

add to that the fact that a high end pc would have trouble doing this, but a $500 console can pull it off?

hmmm

BuT hEy! PrEaTy GrApHiCs!!!


(i will delete this god damn post if epic games releases the source for everyone and it can actually run on a gaming computer)

*Insert Witty Signature here*

System Config: https://au.pcpartpicker.com/list/Tncs9N

 


1 minute ago, Salv8 (sam) said:

add that to the fact that a high end pc would have trouble doing this but a $500 console can pull it off?

 

This is all down to optimisation. If you know exactly what hardware you're going to run on (down to the CPU's L1 and L2 cache sizes, etc.) you can optimise things massively compared to generic targets.

Most important with things like this is getting the massive amounts of data to the GPU on time, otherwise you get dropped frames (much worse than a lower average frame rate). On a system where you know the exact throughput of your SSD (on a games console nothing else is using your drive, Windows Update isn't running randomly, etc.) you can test and test, and optimise and optimise, until it works even at amazing quality.

Another benefit of the fixed hardware target is that you can pre-compile (and pre-prepare) data for the GPU. On a PC you need to do this just in time, since everyone has a different GPU.
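To put numbers on the streaming side of that: at a fixed frame rate, the drive's sustained rate caps how much unique data can arrive per frame. A back-of-envelope sketch using the rates quoted around this thread (best case, assuming the drive sustains its rated throughput continuously):

```python
def per_frame_budget_mb(bandwidth_gb_s, fps):
    """Best-case data streamed in a single frame if the drive sustains its
    rated throughput the whole time (GB/s in, MB per frame out)."""
    return bandwidth_gb_s * 1000 / fps

print(round(per_frame_budget_mb(5.5, 30), 1))  # PS5 raw rate at 30 fps: ~183 MB/frame
print(round(per_frame_budget_mb(3.0, 30), 1))  # typical PC NVMe: 100 MB/frame
```

Real numbers would be lower (seek patterns, decompression, contention), which is exactly why a fixed, contention-free pipeline is easier to tune against.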


2 hours ago, JZStudios said:

No, that's not how tessellation works. Tessellation adds geometry, it doesn't reduce it.

Read THIS directly from Nvidia, who invented it, and then tell me that's not how it works. I'm not reading the rest of that text wall, because you have a basic misunderstanding of what tessellation is and how it works. Nanite takes a super high res model, direct from the game engine, and dynamically, in real time, reduces the amount of geometry based on camera distance. Tessellation does not work that way. At all.

Tessellation works exactly the way I described. It takes a low poly model, subdivides the hell out of it, and then uses displacement textures to try to bring it back into line with a higher poly model. This is fundamentally different from how Nanite works, and Nanite is far superior in multiple ways. For example, tessellation, being controlled by a texture, only has height information. Things with overhangs therefore aren't represented at all, low areas next to high areas are prone to diffuse texture stretching, and the result can be very polygonal due to the limited resolution of the displacement texture, leading to spikes in the displacement or square texel-sized chunks. Nanite doesn't have these issues, because the base mesh has the overhangs and a theoretically unlimited resolution built in, and then reduces down to pixel sizes instead of upscaling like tessellation.

Nonsense, and I don't give a shit what NVIDIA said. Tessellation works both ways; there is no rule that it can only add triangles when it can clearly remove them as well. And removing is always easier and more accurate, because you already know what the actual detailed model looks like and you're cutting it down. Having a low poly model and pumping polygons into it is like taking a 500x500 texture and trying to make an 8192x8192 texture by resizing it, and expecting it to be sharp and detailed. Polygons are no different, just more forgiving.


9 hours ago, Commodus said:

Nah, it's more complicated than that.

 

Sony isn't just using NVMe SSDs like the ones you can buy now.  They offer even faster transfer rates on a basic level (5.5GB/s versus around 3GB/s), and there are also special architecture and compression methods that improve things further.  RAM is another story, although it's important to know that there will be gobs of bandwidth and less overhead.

 

While I wouldn't be surprised if Sony was involved here, there is an actual reason to use the PS5 as the target instead of a PC.  The same thing that makes PCs great for flexibility also hurts the absolute potential performance -- you're beholden to standard architectures, off-the-shelf operating systems, that sort of thing.  Don't get me wrong, a PC with a fast CPU, lots of RAM and a good NVMe SSD will still accomplish a lot, but there are some areas where the PS5 and Xbox Series X will actually come out ahead for a while.

This SSD thing is all a marketing gimmick.

PCs have had them for years. FYI, there are 5 GB/s NVMe SSDs out there. Also, consoles don't have VRAM and separate RAM; that's why they've got to use the SSD for that.

Basically, a PC with 16 GB of RAM and 6-8 GB of VRAM would not be using the SSD for that.

They're just trying to do what Apple does: reinvent something that was already there and make it a big deal LOL.

You're seeing this tech now because the new-gen consoles have finally got SSDs, and most developers wouldn't bother using a feature that exists on only one platform.

What Epic did with UE5 is great, don't get me wrong. It's finally utilizing SSDs, but I think it's using them to compensate for the lack of separate VRAM and RAM (I mean, they've got 8 GB for RAM and VRAM combined instead of 6 GB VRAM and 16 GB RAM).

Also, every console gen they start overhyping that they'll come out ahead of PCs, but I don't think so. Right now I can run all games on ultra settings at 120+ FPS with no issues, which I doubt the new consoles could do; maybe at 1080p with high settings.

I hope they're actually good, because I've got both consoles and they're both sitting collecting dust; whenever I plug them in I notice the major difference in frames, loading, and detail, and I go back to my PC. I like sitting on the couch playing games, it's more comfortable, but I can't be bothered to move the case there :P

When you get used to high frame rates, you really start to notice FPS drops and low frame rates.


18 hours ago, straight_stewie said:

Desktop graphics cards are leaps and bounds from what's in even the most modern consoles, and I'm sure that game developers will use them to great effect to augment the new capabilities of this engine system.

If you mean before the PS5 and the new Xbox are released, sure, but these new consoles are beasts; both should perform pretty damn close to a 2080 Ti.

2 hours ago, Salv8 (sam) said:

i flat out call bullshit

the sheer fact that the game didn't slow down when showing the triangles at 2:22 should be a big fucking red flag.

games always slow down when showing triangles of models and terrain, mainly because it's processing extra crap which puts extra load on the system which slows it down.

add that to the fact that a high end pc would have trouble doing this but a $500 console can pull it off?

hmmm

BuT hEy! PrEaTy GrApHiCs!!!


(i will delete this god damn post if epic games releases the source for everyone and it can actually run on a gaming computer)

That $500 console has an 8-core Zen 2 CPU and roughly 2080 Super-class graphics, so looking at price alone won't tell you everything. Plus the PS5 runs at a really high frequency, which will help a lot with pixel fill rate.

11 hours ago, Goliath_1911 said:

No one will know until it`s actually out

Also pcs got RAM for that... and insanely fast nvme ssds too, if it was about that they would have went with pc,tho it`s about the $$ they got from sony lol

 

1 hour ago, Goliath_1911 said:

This ssd thing is all a marketing gimmick

pc had them for years, FYI there are 5gb nvme ssds out there, also consoles don`t have VRAM, and separate ram that`s why they gotta use the ssd for that 

basically a pc with 16gb of ram and 6/8 vram would not be using the ssd for that 

they are just trying to do what Apple does, reinvent something that was already there and make it a big deal LOL

 

You are seeing this tech now cause of new gen consoles have finally got SSDs , because most developers wouldnt bother use a feature that exists only on 1 platform

What epic did with UE 5 is great don`t get me wrong, its finally utilizing SSDs, but i think it is using it in a way to compensate the lack of seperate vram and ram ( I mean they got 8gb for ram and vram instead of 6 vram , 16 ram)

 

Also every console gen they would start overhyping that they will come out ahead of pcs, but i don`t think so, right now i can run all games on ultra settings with 120+ fps with no issues, which i doubt new consoles could do, maybe at 1080p with high settings.

I hope they are actually good cause i got both consoles and they are both sitting collecting dust since whenever i plug them in i notice the major difference in frames, loading , details, i go back to my pc. i like sitting on the couch playing games, more comfortable,but can`t bother move the case there  :P 

when u get used to high frame rates u start to really notice fps drops and low frame rates. 

These consoles, I'm afraid to say, will be beasts. On PCs, reading and writing to your SSD takes CPU cycles, and quite a lot of them if you want 4+ GB/s. The PS5 and the Xbox have controllers that reduce the CPU overhead; you can brute-force it, but you'd need maybe 4+ extra cores just to do the I/O. Then there's the fact that the same controller also compresses data on the fly, increasing effective bandwidth to over 9 GB/s with peaks of 20 GB/s. You point out that consoles don't have VRAM; true, which also means they don't have a bottleneck between the CPU's and the GPU's memory pools.

PCs right now will have a hard time streaming as much data as the PS5 can, no matter how powerful you think your system is.

GPU-wise it's really powerful as well; the PS5 should be just below a 2080 Ti and the Xbox just over it. I wonder how this will affect PC graphics card prices.


1 hour ago, cj09beira said:

if you mean before ps5 and the new xbox is released sure, but this new consoles are beasts, both should perform pretty dam close to a 2080 ti

that 500 dollar console has a 8 core zen 2 and around a 2080 super card so just looking at price wont tell you everything. plus the ps5 has a really high frequency which will help a lot with pixel fill rate

 

this consoles i am afraid to say will be beasts, on pcs reading and writing to your ssd takes cpu cycles and quite a lot of them if you want 4+GB/s, the ps5 and the xbox have controllers that reduce the cpu overhead, you can brute force it but you will need maybe 4+ extra cores just to do the IO, then we get to the fact that that same controller also compresses memory on the fly increasing effective bandwidth to over 9GB/s and with peaks of 20GB/s, you point out that consoles don't have vram, true which also means they don't have a bottleneck between the cpu's and the gpu's memory pools.

pcs right now will have a hard time streaming as much data as ps5 can no matter how powerful you think your system is.

gpu wise its really powerful as well, the ps5 should be just below a 2080 ti and the xbox just over it, i wonder how this will affect pc graphics prices

 

About that RAM and VRAM: you do realize they get speeds of 20+ GB/s??

Edit: there are already 5 GB/s SSDs for PC too...

The consoles are basically using the SSD as slower RAM...

And bruh, are you for real??? You actually think they will perform as well as a 2080 Ti?

LOL

They said the same thing last gen, and that was pure BS.

Also, yes, they can run insanely good looking games, but can they pump out the FPS???? NOPE.

I ran Forza Horizon 4 on Xbox and PC; the PC ran it at ~130 FPS with all settings on extreme, while the console ran it at 30 FPS with lower graphics settings.

I ain't going all PC master race, just saying this is overhyped.

The same way they overhyped the Xbox One X and PS4 Pro, which turned out not to be outputting actual 4K, just some AI-stretched stuff...

Are you really convinced that the new-gen consoles will perform close to a 2080 Ti??? If you believe that, then you're in for a huge disappointment when the consoles come out...

Here's a fun image of the overhype back then and now. See a pattern??? lmao

[attached image]


Wow, that's a sick real-time graphics demo like I've never seen before. Looking back, it's better looking than many CGI film animations of the past, and those took render farms weeks to months of computation (ray traced, I know, raw cycles required, so not the same method for comparison). So while this isn't pure ray tracing, it's good enough!

Well, that settles it. It couldn't be more clear: we need a PS5 PCIe card for the GPU. 🤣

 

 


4 minutes ago, StDragon said:

Wow, that's a sick real-time graphics demo like I've never seen before. Looking back, that's better looking than many CGI movie animations in the past, and that took a render farm of weeks to months of computational power (ray traced, I know, raw cycles required, not the same comparison in method). So while this isn't pure ray traced, it's good enough!

 

Well that settles it. It couldn't be more clear. We need a PS5 PCIe card for the GPU. 🤣

 

 

Global illumination sort of is ray traced, just with a very low sample and bounce count, so it doesn't really hurt performance that hard and still gives an alright shading effect (basically, any sort of shading gives depth to a scene as opposed to none, even at very low precision).
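The sample-count trade-off works the same way in any Monte Carlo estimator. A one-dimensional stand-in (integrating sin over [0, π], whose true value is 2) shows why a handful of samples is cheap but noisy, the same compromise a low-sample real-time GI pass makes:

```python
import math
import random

def mc_estimate(n_samples, seed=1):
    """Monte Carlo estimate of the integral of sin(x) on [0, pi] (true value 2.0).
    Error shrinks roughly as 1/sqrt(n): more samples, less noise, more cost."""
    rng = random.Random(seed)
    total = sum(math.sin(rng.uniform(0.0, math.pi)) for _ in range(n_samples))
    return math.pi * total / n_samples

print(mc_estimate(4))       # a handful of samples: cheap but noisy
print(mc_estimate(65_536))  # many samples: converges near 2.0
```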


1 minute ago, Goliath_1911 said:

 

About that ram and vram, u do realize they got speeds of 20+ gbytes/s??

the consoles are basically using the ssd as slower ram...

And bruh are you for real??? you actually think they will perform as good as a 2080 TI 

LOL

they said the same as last gen , but that was pure BS 

 

also yes they can run insanely good looking games, but can they pump out the fps???? NOPE 

i ran forza horizon 4 on xbox and pc, pc ran it at ~130 fps at all extreme settings while that console ran it at a 30 fps with lower graphics settings

 

I aint going all pc master race , just saying this is overhyping shit.

 

The same way they overhyped the xbone x and ps4 pro, which turned that they are not outputting actual 4k , just some ai scaled up stuff....

 

Are you really convinced that the new gen consoles will perform close to a 2080 ti??? If u do believe that , then ur in for a huge disappointment when the consoles come out.......

 

Here`s a fun image of the overhype back then and now , see a pattern??? lmao

[attached image]

Let's do some math, then. I grabbed Gears Tactics since it was the first result that popped up on overclock3d.net (1440p Ultra). In that game the 2080 Ti gets 125 max / 100 average / 79 min (99th percentile) fps.

A 5700 XT gets 88 max / 72 average / 66.8 min, which makes the 2080 Ti roughly 42% / 39% / 18% faster respectively.

Now, from what we know of the consoles' GPUs:

The Xbox Series X has 52 CUs, 30% more than a 5700 XT, plus about 25% more memory bandwidth at around the same frequency. Taken together, that would put an RDNA 1 Series X at around 112 max / 92 average / 85 min, which is roughly 10% slower than a 2080 Ti before any improvements from RDNA 2 or from optimizing for a single known hardware configuration.

The PS5 is a bit harder to pin down, but let's try. It has 36 CUs versus the 5700 XT's 40, but how much does that matter? In a Hardware Unboxed video spanning multiple tests, the difference at the same power target is 3.5% or less (around 10% at stock, with different power targets), and the frequency difference is about 15.7% (2,200 vs 1,900 MHz). That works out to roughly 98 max / 80 average / 74.6 min for the PS5, about 25% below the 2080 Ti before RDNA 2 is accounted for.

So the Series X is right there with the 2080 Ti, an 8 fps difference on the average; sadly we don't know how much RDNA 2 improves "IPC". Granted, the PS5 is slower, with a 20 fps gap.

With this I hope to have shown that the consoles won't be slow. I'm no console guy at all, but I have to appreciate what they did on the hardware side.
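The estimate above can be reproduced in a few lines of Python. This is only a sketch of the post's own arithmetic: the benchmark inputs are the quoted overclock3d.net figures, and scaling linearly with CU count and clock speed is an optimistic assumption, not measured data.

```python
# Sketch of the scaling estimate in the post above. Inputs are the
# quoted overclock3d.net benchmark figures; scaling linearly with CU
# count and clock speed is an optimistic assumption, not measured data.

BENCH_5700XT = {"max": 88.0, "avg": 72.0, "min": 66.8}   # Gears Tactics, 1440p Ultra
BENCH_2080TI = {"max": 125.0, "avg": 100.0, "min": 79.0}

def scale(fps, factor):
    """Scale a set of fps figures by a single throughput factor."""
    return {k: round(v * factor, 1) for k, v in fps.items()}

# Series X: 52 CUs vs the 5700 XT's 40 at similar clocks,
# assuming perfectly linear scaling with CU count.
series_x = scale(BENCH_5700XT, 52 / 40)

# PS5: ~3.5% loss from 4 fewer CUs, ~15.7% gain from clocks (2.2 vs 1.9 GHz).
ps5 = scale(BENCH_5700XT, (1 - 0.035) * (2200 / 1900))

print("Series X estimate:", series_x)  # average lands in the low 90s
print("PS5 estimate:", ps5)            # average lands near 80
```

Real scaling is rarely perfectly linear, so treat these as best-case figures.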


6 minutes ago, cj09beira said:

Let's do some math, then. I grabbed Gears Tactics since it was the first result that popped up on overclock3d.net (1440p Ultra). In that game the 2080 Ti gets 125 max / 100 average / 79 min (99th percentile) fps.

A 5700 XT gets 88 max / 72 average / 66.8 min, which makes the 2080 Ti roughly 42% / 39% / 18% faster respectively.

Now, from what we know of the consoles' GPUs:

The Xbox Series X has 52 CUs, 30% more than a 5700 XT, plus about 25% more memory bandwidth at around the same frequency. Taken together, that would put an RDNA 1 Series X at around 112 max / 92 average / 85 min, which is roughly 10% slower than a 2080 Ti before any improvements from RDNA 2 or from optimizing for a single known hardware configuration.

The PS5 is a bit harder to pin down, but let's try. It has 36 CUs versus the 5700 XT's 40, but how much does that matter? In a Hardware Unboxed video spanning multiple tests, the difference at the same power target is 3.5% or less (around 10% at stock, with different power targets), and the frequency difference is about 15.7% (2,200 vs 1,900 MHz). That works out to roughly 98 max / 80 average / 74.6 min for the PS5, about 25% below the 2080 Ti before RDNA 2 is accounted for.

So the Series X is right there with the 2080 Ti, an 8 fps difference on the average; sadly we don't know how much RDNA 2 improves "IPC". Granted, the PS5 is slower, with a 20 fps gap.

With this I hope to have shown that the consoles won't be slow. I'm no console guy at all, but I have to appreciate what they did on the hardware side.

I didn't say they're going to be slow; I said they're getting overhyped.

There are other things at play here besides the GPU.

This is all theory. As I said, let's see it in practice (when it actually comes out) instead of overhyping it.


I think it's all PR. The same thing happens every console cycle: they say it's going to be a revolution, and it somewhat is, but never to the degree the marketing claims.

 

This new tech is very impressive and should help move things forward for everyone, and that's something we should celebrate, but at the same time take all of this with a handful of salt. This is ultimately an advert saying how great Epic is and that the PS5 will be more expensive than ever for a reason: it's going to be much better than previous consoles.

While I know I'll get a PS5, I won't buy one on day one; my PC will far outperform it, and no launch titles will be out that anyone really cares about or couldn't simply wait for. It will be interesting to watch how this new tech gets implemented and which flavour each platform chooses, because there are going to be a few. Nonetheless, it's great to see things moving forward, and I think we can all admit Epic made a great video that leaves us all very hopeful for what's to come.

My Current Build: https://uk.pcpartpicker.com/list/36jXwh

 

CPU: AMD - Ryzen 5 3600X | CPU Cooler: Corsair H150i PRO XT | Motherboard: Asus - STRIX X370-F GAMING | RAM: G.SKILL Trident Z RGB 2x8Gb DDR4 @3000MHz | GPU: Gigabyte - GeForce RTX 2080 Ti 11 GB AORUS XTREME Video Card | Storage: Samsung - 860 EVO 250GB M.2-2280 - Sandisk SSD 240GB - Sandisk SSD 1TB - WD Blue 4TB| PSU: Corsair RM (2019) 850 W 80+ Gold Certified Fully Modular ATX Power Supply | Case: Corsair - Corsair Obsidian 500D RGB SE ATX Mid Tower Case | System Fans: Corsair - ML120 PRO RGB 47.3 CFM 120mm x 4 & Corsair - ML140 PRO RGB 55.4 CFM 140mm x 2 | Display: Samsung KS9000 |Keyboard: Logitech - G613 | Mouse: Logitech - G703 | Operating System: Windows 10 Pro


This is very neat and all. Looks amazing.

In 10-20 years, it'll look like shit :P


21 hours ago, Delicieuxz said:

 

Without understanding the tech in more detail, I have the question: If this can run on a PS5, what does that imply for the future of AMD's and Nvidia's GPU businesses?

Well, they still make the GPUs in the consoles, and will continue to sell the hardware to do this on PC... I wouldn't expect much of a difference for them.

 

21 hours ago, Delicieuxz said:

 

 

BTW, remember when Euclideon came out with their Unlimited Detail voxel-based engine demos, and people pointed out that it was only a static scene and said there'd never be 'unlimited' detail? Euclideon were savaged for even claiming the concept was attainable.

 

It seems that the things people say can't be done today invariably end up being done, and often a lot sooner than people anticipate. Just not when it comes to mainstream flying cars or new CPU architectures from Intel.

 

And as we all know, no object heavier than air can possibly fly! :D 

 

7 hours ago, JZStudios said:

I'm not reading the rest of that text wall

I guess that explains the quality of your reply.


Quote

I couldn't get any exact specifications from Epic, but on a conference call earlier this week I asked how an RTX 2070 Super would handle the demo, and Epic Games chief technical officer Kim Libreri said that it should be able to get "pretty good" performance. But aside from a fancy GPU, you'll need some fast storage if you want to see the level of detail shown in the demo video.

 

Sony was heckled a bit for its focus on the PlayStation 5's storage speed, and if all you're imagining is loading screens disappearing more quickly, it does seem like an odd focus. But it's about moving beyond loading screens entirely, to the point where "you can bring in [the demo's] geometry and display it despite it not all fitting in memory," says Epic CEO Tim Sweeney. 

That was from PCGamer's site.

 

I do understand the storage aspect. You're having to stream data from storage in real time, and often. That's why the PS5 has fast NVMe storage. But I do think that's to offset the fact that it only has 16GB of RAM, shared between the CPU and GPU. So a PC with 16GB of RAM (or 32+GB) should more than suffice to act as a buffer. But it's rather moot, as PCIe 4.0 storage is obtainable now with 5GB/s reads.

 

As for the RTX 2070 Super comment, I wonder if that's also a comparison against the PS5 at the same resolution. I would think so, but who knows?
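To put the storage numbers above in perspective, here's some illustrative arithmetic (a sketch with assumed figures; neither the scene size nor the drive speeds are from Epic or Sony): how long a hypothetical multi-gigabyte scene chunk takes to load at typical sequential read speeds for each storage tier.

```python
# Illustrative load times for a hypothetical 20 GB scene chunk at
# typical sequential read speeds (drive figures are ballpark, not exact).
scene_gb = 20.0

drives = {
    "HDD (~0.15 GB/s)":        0.15,
    "SATA SSD (~0.5 GB/s)":    0.5,
    "PCIe 4.0 NVMe (~5 GB/s)": 5.0,
}

for name, speed in drives.items():
    print(f"{name}: {scene_gb / speed:.1f} s to load")
```

The gap between tiers is roughly an order of magnitude each step, which is why loading screens vanish on NVMe-class storage.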


1 hour ago, cj09beira said:

Let's do some math, then. I grabbed Gears Tactics since it was the first result that popped up on overclock3d.net (1440p Ultra). In that game the 2080 Ti gets 125 max / 100 average / 79 min (99th percentile) fps.

A 5700 XT gets 88 max / 72 average / 66.8 min, which makes the 2080 Ti roughly 42% / 39% / 18% faster respectively.

Now, from what we know of the consoles' GPUs:

The Xbox Series X has 52 CUs, 30% more than a 5700 XT, plus about 25% more memory bandwidth at around the same frequency. Taken together, that would put an RDNA 1 Series X at around 112 max / 92 average / 85 min, which is roughly 10% slower than a 2080 Ti before any improvements from RDNA 2 or from optimizing for a single known hardware configuration.

The PS5 is a bit harder to pin down, but let's try. It has 36 CUs versus the 5700 XT's 40, but how much does that matter? In a Hardware Unboxed video spanning multiple tests, the difference at the same power target is 3.5% or less (around 10% at stock, with different power targets), and the frequency difference is about 15.7% (2,200 vs 1,900 MHz). That works out to roughly 98 max / 80 average / 74.6 min for the PS5, about 25% below the 2080 Ti before RDNA 2 is accounted for.

So the Series X is right there with the 2080 Ti, an 8 fps difference on the average; sadly we don't know how much RDNA 2 improves "IPC". Granted, the PS5 is slower, with a 20 fps gap.

With this I hope to have shown that the consoles won't be slow. I'm no console guy at all, but I have to appreciate what they did on the hardware side.

 

I'm sure you're aware, but there's rarely linear scaling with raw on-board hardware, so that's probably overstating it a bit.

 

14 minutes ago, StDragon said:

I do understand the storage aspect. You're having to stream data from storage in real time, and often. That's why the PS5 has fast NVMe storage. But I do think that's to offset the fact that it only has 16GB of RAM, shared between the CPU and GPU. So a PC with 16GB of RAM (or 32+GB) should more than suffice to act as a buffer. But it's rather moot, as PCIe 4.0 storage is obtainable now with 5GB/s reads.

 

Absolutely, being able to store it in RAM is very valuable. That said, I think it's also important to note that as the player moves through the game, what needs to be buffered to cover what they could do next keeps changing, and that means continually feeding the buffer new data from storage. So there is a point at which no amount of RAM can compensate for slow storage.
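That trade-off can be sketched with a toy model. All figures here (buffer size, invalidation rate, drive speeds) are illustrative assumptions, not real console numbers; the point is only that once gameplay invalidates buffered data faster than storage can refill it, the buffer eventually runs dry no matter how large it is.

```python
# Toy model: a streaming buffer that must be refilled as gameplay
# invalidates cached data. All numbers are illustrative assumptions,
# not real console figures.

buffer_gb = 10.0        # portion of shared RAM used as a scene buffer
invalidated_gb_s = 2.0  # assumed rate at which movement invalidates buffered data

def seconds_until_starved(buffer_gb, invalidated_gb_s, storage_gb_s):
    """Time until the buffer runs dry, or None if storage keeps pace."""
    deficit = invalidated_gb_s - storage_gb_s
    if deficit <= 0:
        return None
    return buffer_gb / deficit

# A 0.5 GB/s SATA-class drive starves the buffer in under 7 seconds
# under these assumptions; a 5 GB/s NVMe drive never falls behind.
print(seconds_until_starved(buffer_gb, invalidated_gb_s, 0.5))
print(seconds_until_starved(buffer_gb, invalidated_gb_s, 5.0))
```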


4 hours ago, cj09beira said:

If you mean before the PS5 and the new Xbox are released, sure, but these new consoles are beasts; both should perform pretty damn close to a 2080 Ti.

 

The specs I've seen put the PS5's graphics just below a 2060. By the time it releases, the 3xxx series will be out. Plus, the console cards likely won't have the ancillary support for the other things GPGPUs can do.

ENCRYPTION IS NOT A CRIME


7 hours ago, Salv8 (sam) said:

I flat out call bullshit.

The sheer fact that the game didn't slow down when showing the triangles at 2:22 should be a big fucking red flag.

Games always slow down when showing the triangles of models and terrain, mainly because it's processing extra crap, which puts extra load on the system and slows it down.

Add that to the fact that a high-end PC would have trouble doing this, but a $500 console can pull it off?

I'm not saying no, but I'm not sure that's actually accurate. The engine was rendering about one polygon per pixel, so distant objects get much less geometric detail than with standard methods. It also cleared out the textures, unless the triangle view was just an overlay, which is possible. If it is an overlay, then in a sense it's always being rendered; it's just not visible, so switching it on wouldn't have a big impact.
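The one-polygon-per-pixel point has a neat consequence: the visible triangle count is bounded by the output resolution rather than by the density of the source meshes. A quick sanity check (common output resolutions assumed; the demo's exact resolution isn't confirmed here):

```python
# If the renderer resolves roughly one triangle per pixel, the number of
# triangles actually drawn is capped by the output resolution, no matter
# how dense the source meshes are.

def visible_triangles(width, height):
    """Upper bound on drawn triangles at one triangle per pixel."""
    return width * height

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    print(f"{name}: ~{visible_triangles(w, h) / 1e6:.1f} M triangles on screen")
```

So even at 4K, the renderer only ever has to shade on the order of eight million triangles per frame, however many billions exist in the source assets.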

 

7 hours ago, RejZoR said:

Nonsense and I don't give a shit what NVIDIA said.

Okay. Yeah, they invented it, but they don't know how it works. DACs and ADCs are also identical and anyone who says otherwise is an idiot and they're doing it wrong. Especially the people that build them. All dipshits.

 

2 hours ago, SpaceGhostC2C said:

I guess that explains the quality of your reply.

The fact that I know what I'm talking about? He literally said NVIDIA doesn't know how their own technology works.

And now it's gone into stupid console vs. PC arguments, which have absolutely nothing to do with the engine.

#Muricaparrotgang

