
Amazon releases its new game engine based on CryEngine

5 minutes ago, Bouzoo said:

I will when I get home; I'm on my phone at the moment. How much better they look I'll leave to the devs and the engine. Btw, I was talking about tessellation in general. :)

Pretty odd to reply to something you haven't seen.

 

Using tessellation on objects you'll never see is stupid. Using more tessellation than necessary (meaning no increase in visual quality) is also stupid. More is not necessarily better or worth the trade-off.


13 minutes ago, Trixanity said:

Pretty odd to reply to something you haven't seen.

 

Using tessellation on objects you'll never see is stupid. Using more tessellation than necessary (meaning no increase in visual quality) is also stupid. More is not necessarily better or worth the trade-off.

I was responding to the part where you implied that higher tessellation doesn't mean higher detail, when it should. It's a different matter if it wasn't properly implemented or was used too much.



11 minutes ago, Bouzoo said:

I was responding to the part where you implied that higher tessellation doesn't mean higher detail, when it should. It's a different matter if it wasn't properly implemented.

It was a reply to all the posts by that user in this thread, as the post clearly states. I just didn't bother quoting every single post. 

 

Yes, more tessellation = more detail.

 

But at what cost? Adding more just because a higher number sounds better is silly. Visual quality hits diminishing returns while performance keeps plummeting. That's why you don't simply dial it up to 11. That goes for more than just tessellation.
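To put the trade-off in concrete terms, here's a minimal sketch (my own illustration, not code from CryEngine or Lumberyard) of the distance-based tessellation factor renderers typically use, so geometry that's far away, or whose extra detail you'd never notice, doesn't get subdivided for nothing:

```cpp
#include <algorithm>

// Hypothetical helper: choose a tessellation factor for a patch from its
// distance to the camera. Beyond maxDistance the patch gets no extra
// subdivision, so detail nobody can see costs nothing.
float AdaptiveTessFactor(float distanceToCamera,
                         float minDistance,  // full detail closer than this
                         float maxDistance,  // no extra detail beyond this
                         float maxFactor)    // the "dial it to 11" knob, e.g. 16.0f
{
    float t = (distanceToCamera - minDistance) / (maxDistance - minDistance);
    t = std::clamp(t, 0.0f, 1.0f);
    // Linear falloff for simplicity; real engines often use screen-space
    // edge length instead, so the factor tracks what is actually visible.
    return std::max(1.0f, maxFactor * (1.0f - t));
}
```

Cranking maxFactor past the point where triangles are already smaller than a pixel is exactly the "more is not better" case being argued here.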


Interesting, I'm curious to see how it will do. I mean, CryEngine is already really good.



4 hours ago, XTankSlayerX said:

 

EDIT: I'm way too tired to figure out how to make this look like my old threads...

 

First of all, mark all the quoted text and press Ix (something like that) on the right side of the format bar to remove the white background. :P



It's free, so that's really good. Not sure how it will compete against the current engines, which are already quite good.



I'm still going to use UE4 for my game, mainly because I'm used to the UI.



How are they able to give it away for free if it's based on CryEngine? Will they have to pay Crytek for some sort of license that covers all downloads?


8 hours ago, Dabombinable said:

AMD's got to get its ass into gear and improve its GPUs further (they started improving tessellation with Tonga/Fiji), so it's actually good if they get pushed to improve part of their GPU architectures.

Will you sing the same song once Nvidia is destroyed in DX12 due to their seriously lacking compute power?



54 minutes ago, Misanthrope said:

Will you sing the same song once Nvidia is destroyed in DX12 due to their seriously lacking compute power?

What features of DX12 are compute-heavy? And do you realise that both vendors will have issues with current cards? Just look back at when DirectX 8.1, 9, and 11 came out.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


I've used the Source engine and UE4 extensively, and watching that video of the high poly counts just fucken amazes me. I'm going to have a 980 Ti hybrid in two days... Crysis, here I come :) Free game engines are an amazing thing. I applaud Amazon for this; without free game engines I wouldn't have gotten into programming and wouldn't be where I am today.

  


10 hours ago, Misanthrope said:

Will you sing the same song once Nvidia is destroyed in DX12 due to their seriously lacking compute power?

Lol, this was blown way out of proportion. Async compute (which exists on Nvidia but is offloaded to the CPU) is a single feature of DX12 FEATURE LEVEL 1, which has dozens of other features. AMD doesn't even support ALL of them. Besides, by the time games that utilise all the features of DX12 come out, Pascal will be out and the comparison will be made with the true DX12 cards, which are the 400-series AMD cards and the 1000-series Nvidia cards.
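For anyone wondering what "feature level" and "supporting ALL of them" actually boils down to in practice, here's a minimal D3D12 sketch (standard API usage, not anything from the thread) of how an application queries what a GPU really reports:

```cpp
#include <windows.h>
#include <d3d12.h>

// Minimal sketch: ask an already-created ID3D12Device what it supports.
// Assumes `device` is valid; error handling omitted for brevity.
D3D_FEATURE_LEVEL QueryMaxFeatureLevel(ID3D12Device* device)
{
    static const D3D_FEATURE_LEVEL requested[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1,
    };

    D3D12_FEATURE_DATA_FEATURE_LEVELS levels = {};
    levels.NumFeatureLevels = static_cast<UINT>(sizeof(requested) / sizeof(requested[0]));
    levels.pFeatureLevelsRequested = requested;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &levels, sizeof(levels));

    // The individual capability tiers sit behind the headline feature level,
    // e.g. options.ResourceBindingTier, options.TiledResourcesTier, ...
    D3D12_FEATURE_DATA_D3D12_OPTIONS options = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &options, sizeof(options));

    return levels.MaxSupportedFeatureLevel;
}
```

The "full support" claims on both sides ultimately come down to which of these reported levels and tiers a given card actually exposes.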

 

In relation to the discussion about CryEngine's tessellation issue: that was from a few years ago now, caused by a rushed implementation of DX11 in CryEngine. Current CryEngine titles don't have this problem, so I'm not seeing how it's still a topic..?


3 hours ago, PerfectTemplar said:

Lol, this was blown way out of proportion. Async compute (which exists on Nvidia but is offloaded to the CPU) is a single feature of DX12 FEATURE LEVEL 1, which has dozens of other features. AMD doesn't even support ALL of them.

 

That may be true, but among all the new DX12 features, async compute is one of the few that don't add rendering techniques; all it does is improve the performance of existing ones. That makes it one of the more important ones for achieving the "point" of DX12: reduced render overhead.
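To ground the terminology: "async compute" in D3D12 just means submitting compute work on its own queue so it can overlap graphics work instead of waiting behind it. A minimal sketch of the API side (standard D3D12 calls, not code from any of the games discussed) is below; whether the two queues actually execute concurrently is up to the hardware and driver, which is the whole AMD-vs-Nvidia argument here:

```cpp
#include <windows.h>
#include <d3d12.h>

// Minimal sketch: a dedicated compute queue alongside the usual graphics queue.
// Assumes `device` is a valid ID3D12Device*; error handling omitted.
void CreateQueues(ID3D12Device* device,
                  ID3D12CommandQueue** graphicsQueue,
                  ID3D12CommandQueue** computeQueue)
{
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;       // graphics + compute + copy
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(graphicsQueue));

    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute-only queue
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(computeQueue));

    // Work submitted to the compute queue may overlap work on the graphics
    // queue; an ID3D12Fence handles any ordering one queue needs from the other.
    // Hardware that can't overlap the two will simply serialize them.
}
```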

 

3 hours ago, PerfectTemplar said:

Besides, by the time games that utilise all the features of DX12 come out, Pascal will be out and the comparison will be made with the true DX12 cards, which are the 400-series AMD cards and the 1000-series Nvidia cards.

 

There is no reason to believe Pascal will support async compute to any greater degree than Maxwell. AMD started embracing the feature with the HD 6xxx-series cards, and ever since then it's been baked into their architecture.

 

Nvidia's stance has been that the feature isn't going to have any real gaming impact, and that they'll keep offloading it to the CPU.

 

NOW they know that it will impact DX12 performance, and judging by their reaction to the AotS benchmarks, it smells like they still don't have it in Pascal. Remember that new architectures leave the drawing board as much as a year before manufacturing. If Pascal didn't have on-chip async compute functionality 15 months ago, it still won't when it comes to market, even if AotS proved the feature's importance halfway to manufacturing.

 

They COULD frankenstein a dedicated chip onto the PCB for this feature, though. I certainly wouldn't put it past them; they've done similar things in the past, like with the Fermi H.264 encoder. But then they could end up with a lot of other problems early on in drivers, and potentially longer PCBs.



Amazon appears to have reached that critical point in a company's life where it has outgrown its original business and is looking to expand into other ventures, but doesn't really seem to know which venture to expand into, so it's trying everything in an effort to work out what's best.

 

This is when lots of businesses fail.



1 hour ago, That Norwegian Guy said:

 

That may be true, but among all the new DX12 features, async compute is one of the few that don't add rendering techniques; all it does is improve the performance of existing ones. That makes it one of the more important ones for achieving the "point" of DX12: reduced render overhead.

 

 

There is no reason to believe Pascal will support async compute to any greater degree than Maxwell. AMD started embracing the feature with the HD 6xxx-series cards, and ever since then it's been baked into their architecture.

 

Nvidia's stance has been that the feature isn't going to have any real gaming impact, and that they'll keep offloading it to the CPU.

 

NOW they know that it will impact DX12 performance, and judging by their reaction to the AotS benchmarks, it smells like they still don't have it in Pascal. Remember that new architectures leave the drawing board as much as a year before manufacturing. If Pascal didn't have on-chip async compute functionality 15 months ago, it still won't when it comes to market, even if AotS proved the feature's importance halfway to manufacturing.

 

They COULD frankenstein a dedicated chip onto the PCB for this feature, though. I certainly wouldn't put it past them; they've done similar things in the past, like with the Fermi H.264 encoder. But then they could end up with a lot of other problems early on in drivers, and potentially longer PCBs.

Hmh, there is every reason to believe so, as Pascal is said to have feature level 2. Nvidia's own charts indicated full feature level 1 DX12 support for Pascal, which they never claimed for Maxwell. AMD is going to introduce a full-feature-level GPU as well.

 

I don't remember them even saying anything about async compute. Nvidia generally doesn't talk publicly about anything controversial.

 

They wouldn't have to frankenstein a chip. Based on Nvidia's own graphs and presentations, Pascal will be a massive improvement over Maxwell.


24 minutes ago, PerfectTemplar said:

I don't remember them even saying anything about async compute. Nvidia generally doesn't talk publicly about anything controversial.

They released a very butthurt statement directed at the Ashes of the Singularity devs, claiming the game/benchmark was not representative of DX12.

 

24 minutes ago, PerfectTemplar said:

Hmh, there is every reason to believe so, as Pascal is said to have feature level 2. Nvidia's own charts indicated full feature level 1 DX12 support for Pascal, which they never claimed for Maxwell. AMD is going to introduce a full-feature-level GPU as well.

They claim that for Maxwell's 980 Ti... It does not have native async compute, just the logic to offload it to the CPU. Technically, that counts as supporting the feature.



28 minutes ago, That Norwegian Guy said:

They released a very butthurt statement directed at the Ashes of the Singularity devs, claiming the game/benchmark was not representative of DX12.

 

They claim that for Maxwell's 980 Ti... It does not have native async compute, just the logic to offload it to the CPU. Technically, that counts as supporting the feature.

That's because Ashes of the Singularity isn't a good indicator of DX12. It isn't just about async compute, though. Chris Roberts (the guy making Star Citizen) actually said that most games shipping with DX12 aren't implementing it from the ground up, which creates a situation where performance gains are negligible and inconsistent. Something Star Citizen won't be doing. Wait for DX12 to be better understood by developers and implemented properly. It doesn't even give that much of a benefit to AMD in AotS, not as much as it should anyway.

 

Also http://wccftech.com/amd-full-support-dx12-today-fury-missing-dx12-features/

 

There is really no reason to believe Nvidia will shoot themselves in the foot and not implement proper async compute the next time around.

 

Also, I still don't understand what you said about frankensteining a chip. You know Pascal is a totally new chip; it's not a rehash of Maxwell.


So I guess several of you already answered my question preemptively: yes, you are singing a different tune on DX12. I expect nothing but this same line of excuses and apologies for Nvidia's eventual lack of performance.



1 hour ago, PerfectTemplar said:

Also, I still don't understand what you said about frankensteining a chip. You know Pascal is a totally new chip; it's not a rehash of Maxwell.

I meant that if, this late in the game, the GPU architecture they designed still lacked async shader engine capabilities, they could make a secondary chip for that purpose to go on the PCB, separate from the GPU itself, like they did with the Fermi H.264 unit. So the assembled card would be the Frankenstein, not the GPU itself.



1 hour ago, Misanthrope said:
45 minutes ago, That Norwegian Guy said:

I meant that if, this late in the game, the GPU architecture they designed still lacked async shader engine capabilities, they could make a secondary chip for that purpose to go on the PCB, separate from the GPU itself, like they did with the Fermi H.264 unit. So the assembled card would be the Frankenstein, not the GPU itself.

So I guess several of you already answered my question preemptively: yes, you are singing a different tune on DX12. I expect nothing but this same line of excuses and apologies for Nvidia's eventual lack of performance.

Right... and you do realise that both manufacturers still only have first-gen DirectX 12 GPUs? There are always performance issues with those.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


4 minutes ago, valdyrgramr said:

That was THQ.

 

Well, I was right coming in here knowing Dab would be randomly bitching about AMD, a-fucking-gain. Claims he isn't a fanboy. The OP didn't even mention AMD.

 

So he's just proving even more that he's a fanboy. He claims he does it to Intel and Nvidia too, but nope, he saves it for AMD regardless of how many times the others have messed up too.

 

I'd suggest people start putting him on ignore like I have, because it's getting pointless to argue with his fanboy ideology.

 

As for the OP, that's nice to know, as I'm a dev major. :D As for things like tessellation, you can just turn it off... funny how Dab didn't mention that.

 

 

 

Learn to read, idiot:

On 9/2/2016 at 8:08 PM, huilun02 said:

Hopefully it doesn't have tessellation based on CryEngine...

On 9/2/2016 at 8:14 PM, Dabombinable said:

AMD's got to get its ass into gear and improve its GPUs further (they started improving tessellation with Tonga/Fiji), so it's actually good if they get pushed to improve part of their GPU architectures.


It's a known fact that AMD's GPUs really don't handle tessellation too well, and that Tonga was designed to fix that. Also, https://www.google.com.au/search?q=ad+hominem&ie=utf-8&oe=utf-8&gws_rd=cr&ei=gy27Vu2VMsjQ0AS8lbXABg

And so what if you can disable tessellation? That still doesn't change the fact that AMD's current GPUs handle it poorly.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


4 minutes ago, valdyrgramr said:

No, Dab pretends he's against all the companies that do it. However, he always goes into threads to start bitching about AMD when AMD wasn't even mentioned in the OP at all. He's a rabid fanboy who cries like AMD killed his mother, and never does the same towards Nvidia or Intel, like he claims. I've gone to most of the threads he's done it in, and he always makes up excuses and lies for the other two. Dab's just a three-year-old needing to throw a tantrum. He gets so defensive when you call him out for it. "I use this and this! I can't fanboy! I never said that! Stop with the ad hominem attacks on me! It's always AMD's fault!" He has no position because he's a rabid fanboy.

Going off the rails because you refuse to read what I posted. Stop being so immature.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


22 hours ago, Hawx said:

This point has been disproven already. In-game, the renderer culls objects that are out of sight; in wireframe mode, that culling is disabled. Crytek even pointed this out themselves.

Gonna leave this here so that the Crysis fkin AMD circlejerk stops.

And btw, we all know that AMD only got proper tessellation going on their recent GPUs.
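For anyone unclear on what "the renderer culls objects out of sight" means in the quote above: before drawing (or tessellating) anything, engines run a visibility test per object and skip everything that fails it. Below is a generic sketch of the simplest version, view-frustum culling; it is an illustration under my own assumptions, not Crytek's actual code, and real engines layer occlusion culling on top of it.

```cpp
#include <array>

// Generic illustration (not CryEngine code): a bounding-sphere vs. view-frustum
// test. Objects that fail it are never submitted to the GPU, so their tessellated
// triangles cost nothing; a wireframe debug view typically bypasses this step.
struct Plane  { float nx, ny, nz, d; };      // plane equation: n.x*x + n.y*y + n.z*z + d = 0
struct Sphere { float cx, cy, cz, radius; }; // object's bounding sphere

bool IsVisible(const Sphere& s, const std::array<Plane, 6>& frustum)
{
    // Assumes the six plane normals point into the frustum.
    for (const Plane& p : frustum) {
        float dist = p.nx * s.cx + p.ny * s.cy + p.nz * s.cz + p.d;
        if (dist < -s.radius)   // completely behind one plane -> outside the frustum
            return false;       // culled: skip draw entirely
    }
    return true;                // at least partially inside -> draw it
}
```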

