A new type of shilling: game developer locks content to specific Intel i7 CPUs [updated]

Just now, patrickjp93 said:

To not stick your foot in your mouth :D

Or somewhere else, on somebody else, lol. 

My (incomplete) memory overclocking guide: 

 

Does memory speed impact gaming performance? Click here to find out!

On 1/2/2017 at 9:32 PM, MageTank said:

Sometimes, we all need a little inspiration.

 

 

 


1 hour ago, patrickjp93 said:

You can't have minimum requirements be overclocked parts. As for the cache, quantity and speed are equally important.

So 128MB of eDRAM that is only on one generation of CPU is important? Also, it's a given that people with K parts run them overclocked, and the i7 6700K has the same amount of cache as my 4790K...

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


1 minute ago, Dabombinable said:

So 128MB of eDRAM that is only on one generation of CPU is important? Also, it's a given that people with K parts run them overclocked, and the i7 6700K has the same amount of cache as my 4790K...

Watch out. Patrick is going to say "It's also on Skylake" because some mobile Skylake SKUs have eDRAM as well. Then I would say "yeah, but the eDRAM on Skylake no longer acts as an L4 victim cache, and instead acts as a buffer for memory itself". Then we will get into an argument, he will disappear for another week or two, and I'll be lonely.

 

Wait, what were we talking about again?


1 minute ago, MageTank said:

Watch out. Patrick is going to say "It's also on Skylake" because some mobile Skylake SKUs have eDRAM as well. Then I would say "yeah, but the eDRAM on Skylake no longer acts as an L4 victim cache, and instead acts as a buffer for memory itself". Then we will get into an argument, he will disappear for another week or two, and I'll be lonely.

 

Wait, what were we talking about again?

Trying to get him to understand that locking content behind owning a 5th-, 6th-, or 7th-gen i7 is bullshit (especially if you read the devs' response, and consider that Intel invested heavily and directly in their game).


37 minutes ago, Dabombinable said:

So 128MB of eDRAM that is only on one generation of CPU is important? Also, it's a given that people with K parts run them overclocked, and the i7 6700K has the same amount of cache as my 4790K...

I said cache optimization is important, but since my opinion is worthless to you, take it from these industry experts!

Spoiler

 

 

 

 

 

You still can't make overclocking a requirement. Also, my dad runs a 4790K and my mom a 6700K (upgrades from a Q6600 and an i7 920 respectively), and neither of them overclocks.

 

And the timings, lookaside buffers, and branch predictors get better with each generation.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


38 minutes ago, MageTank said:

Watch out. Patrick is going to say "It's also on Skylake" because some mobile Skylake SKUs have eDRAM as well. Then I would say "yeah, but the eDRAM on Skylake no longer acts as an L4 victim cache, and instead acts as a buffer for memory itself". Then we will get into an argument, he will disappear for another week or two, and I'll be lonely.

 

Wait, what were we talking about again?

A cache is a cache, and the games industry certainly doesn't have the expertise to optimize for a victim cache vs. a pure hierarchy. All this has done is increase the time to reach main memory when data is nowhere on the CPU die.


7 minutes ago, patrickjp93 said:

I said cache optimization is important, but since my opinion is worthless to you, take it from these industry experts!

Spoiler

 

 

 

 

 

You still can't make overclocking a requirement. Also, my dad runs a 4790K and my mom a 6700K (upgrades from a Q6600 and an i7 920 respectively), and neither of them overclocks.

 

And the timings, lookaside buffers, and branch predictors get better with each generation.

fact<opinion

 


1 minute ago, Dabombinable said:

fact<opinion

 

In this case I've stated no opinions, so your response is both inane AND baseless.


1 minute ago, patrickjp93 said:

In this case I've stated no opinions, so your response is both inane AND baseless.

 

5 minutes ago, patrickjp93 said:

A cache is a cache, and the games industry certainly doesn't have the expertise to optimize for a victim cache vs. a pure hierarchy. All this has done is increase the time to reach main memory when data is nowhere on the CPU die.

 


1 minute ago, MageTank said:

 

 

That is a fact, measured and verified by the C++ community. The fact that I can still disassemble modern games and not find SSE or AVX except in a COUPLE of non-critical code paths is proof the industry has its head up its ass.
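For anyone wanting to check this kind of claim themselves, here is a minimal sketch (the file name, function name, and loop are mine, not from the thread): a trivially data-parallel loop that mainstream compilers auto-vectorize at -O3, plus an objdump one-liner in the comments to see whether packed instructions actually landed in the binary.

```cpp
#include <cassert>
#include <cstddef>

// A trivially data-parallel loop: with -O2/-O3, GCC and Clang emit packed
// SSE/AVX instructions (mulps/vmulps etc.) for it. To check a binary the
// way described above, compile and inspect the disassembly, e.g.:
//   g++ -O3 saxpy.cc -o saxpy && objdump -d saxpy | grep -c 'mulps'
// A count of zero would mean no packed float multiplies made it in.
void saxpy(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main() {
    float x[4] = {1, 2, 3, 4};
    float y[4] = {1, 1, 1, 1};
    saxpy(2.0f, x, y, 4);
    assert(y[3] == 9.0f);  // 2*4 + 1
    return 0;
}
```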


Just now, patrickjp93 said:

That is a fact, measured and verified by the C++ community. The fact that I can still disassemble modern games and not find SSE or AVX except in a COUPLE of non-critical code paths is proof the industry has its head up its ass.

Dude, calm the fuck down. There's no need to be aggressive or hostile.

 

The problem with limiting the game mode to recent i7s is that most people don't have them. Most gamers would not benefit from it. So why should it be in the game if the majority of the audience can't take advantage of it?

Judge a product on its own merits AND the company that made it.

How to setup MSI Afterburner OSD | How to make your AMD Radeon GPU more efficient with Radeon Chill | (Probably) Why LMG Merch shipping to the EU is expensive

Oneplus 6 (Early 2023 to present) | HP Envy 15" x360 R7 5700U (Mid 2021 to present) | Steam Deck (Late 2022 to present)

 

Mid 2023 AlTech Desktop Refresh - AMD R7 5800X (Mid 2023), XFX Radeon RX 6700XT MBA (Mid 2021), MSI X370 Gaming Pro Carbon (Early 2018), 32GB DDR4-3200 (16GB x2) (Mid 2022)

Noctua NH-D15 (Early 2021), Corsair MP510 1.92TB NVMe SSD (Mid 2020), beQuiet Pure Wings 2 140mm x2 & 120mm x1 (Mid 2023)


Just now, patrickjp93 said:

That is a fact, measured and verified by the C++ community. The fact that I can still disassemble modern games and not find SSE or AVX except in a COUPLE of non-critical code paths is proof the industry has its head up its ass.

Generalizing an entire industry is not a fact, it's an opinion. You simply do not know if they have the expertise to do it or not. Just because they haven't done it does not mean they are incapable of doing so. Perhaps (and bear with me on this one, buddy) they program to reach the broadest audience possible, aiming for the lowest common denominator? It's almost as if their profits directly coincide with sales, which directly coincide with consumers being able to properly enjoy their product. Mind blowing, I know.

 

I'll be here if you need me. 

 

For all others, this explains what Patrick and I are discussing: 

 


2 minutes ago, patrickjp93 said:

That is a fact, measured and verified by the C++ community. The fact that I can still disassemble modern games and not find SSE or AVX except in a COUPLE of non-critical code paths is proof the industry has its head up its ass.

you're like my 3rd fave person on here cos u take all the bait

 


Stuff:  i7 7700k | ASRock Z170M OC Formula | G.Skill TridentZ 3600 c16 | EKWB 1080 @ 2100 mhz  |  Acer X34 Predator | R4 | EVGA 1000 P2 | 1080mm Radiator Custom Loop | HD800 + Audio-GD NFB-11 | 850 Evo 1TB | 840 Pro 256GB | 3TB WD Blue | 2TB Barracuda

Hwbot: http://hwbot.org/user/lays/ 

FireStrike 980 ti @ 1800 Mhz http://hwbot.org/submission/3183338 http://www.3dmark.com/3dm/11574089


1 minute ago, AluminiumTech said:

Dude, calm the fuck down. There's no need to be aggressive or hostile.

 

The problem with limiting the game mode to recent i7s is that most people don't have them. Most gamers would not benefit from it. So why should it be in the game if the majority of the audience can't take advantage of it?

Why make Crysis if only top-end hardware can run it, even today? To push the boundaries and prove what's necessary to get higher quality.

 

The daftness in this community is getting to be truly annoying.


Just now, patrickjp93 said:

Why make Crysis if only top-end hardware can run it, even today? To push the boundaries and prove what's necessary to get higher quality.

I have no issue with requiring top-end hardware. The problem lies in the fact that it requires one manufacturer's high-end product to use.

 

If it were also supported on AMD's previously high-end FX CPUs in addition to the i7s, then I would have less of a problem with it.

 

But locking out content for one specific brand of CPUs is wrong, and you know it, yet you're trying to justify it.

 


8 minutes ago, MageTank said:

Generalizing an entire industry is not a fact, it's an opinion. You simply do not know if they have the expertise to do it or not. Just because they haven't done it does not mean they are incapable of doing so. Perhaps (and bear with me on this one, buddy) they program to reach the broadest audience possible, aiming for the lowest common denominator? It's almost as if their profits directly coincide with sales, which directly coincide with consumers being able to properly enjoy their product. Mind blowing, I know.

 

I'll be here if you need me. 

 

For all others, this explains what Patrick and I are discussing: 

 

It's not a generalization if it's measured and found to be true across every single AAA studio.

 

Yes I do, because the senior dev ranks are the same old guard they were 5 years ago, and nothing has really changed in those 5 years in terms of optimization techniques.

 

Yes it does, because instead of pursuing vectorization, which objectively has a much higher impact on performance, they chased multithreading their existing code, which is vastly easier to do. It's clear they don't have the capability.

 

The widest audience possible still has SSE1-4.2 at this point. Hell, Sandy Bridge, Bulldozer, Jaguar, and everything later all have AVX. It's also a VR game, which is only suited to high-end hardware anyway, hardware that has AVX2 and FMA3 at its disposal. The excuses have run out.
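As an aside, requiring specific CPU models isn't the only way to gate features: x86 capabilities can be probed at run time and a code path picked accordingly. A minimal GCC/Clang sketch (the function name is mine, purely illustrative):

```cpp
#include <cstdio>

// Pick a code path from what the CPU actually reports, instead of
// whitelisting specific model numbers (the approach the thread objects to).
// __builtin_cpu_supports is a GCC/Clang built-in for x86 targets.
const char* best_simd_path() {
#if defined(__GNUC__) && (defined(__x86_64__) || defined(__i386__))
    if (__builtin_cpu_supports("avx2") && __builtin_cpu_supports("fma"))
        return "avx2+fma3";
    if (__builtin_cpu_supports("avx"))
        return "avx";
    if (__builtin_cpu_supports("sse4.2"))
        return "sse4.2";
#endif
    return "scalar";  // safe fallback for anything older or non-x86
}

int main() {
    std::printf("dispatching to: %s\n", best_simd_path());
    return 0;
}
```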

 

The ad hominem is the last weapon of a desperate man who's already lost the argument.

 

5 minutes ago, AluminiumTech said:

I have no issue with requiring top-end hardware. The problem lies in the fact that it requires one manufacturer's high-end product to use.

 

If it were also supported on AMD's previously high-end FX CPUs in addition to the i7s, then I would have less of a problem with it.

 

But locking out content for one specific brand of CPUs is wrong, and you know it, yet you're trying to justify it.

 

Not Intel's fault AMD's hardware sucks. And it's up to the game devs to decide what hardware they can reasonably support.

 

Not if that brand is the only one truly capable of supporting the program.


1 minute ago, patrickjp93 said:

The ad hominem is the last weapon of a desperate man who's already lost the argument.

I don't think this means what you think it means.

 

1 minute ago, patrickjp93 said:

It's not a generalization if it's measured and found to be true across every single AAA studio.

 

Yes I do, because the senior dev ranks are the same old guard they were 5 years ago, and nothing has really changed in those 5 years in terms of optimization techniques.

 

Yes it does, because instead of pursuing vectorization, which objectively has a much higher impact on performance, they chased multithreading their existing code, which is vastly easier to do. It's clear they don't have the capability.

 

The widest audience possible still has SSE1-4.2 at this point. The excuses have run out.

Again, you have zero evidence to prove that these people lack the expertise. Just because they do not do it does not mean they don't know how. This is something you cannot prove either, because it's an opinion.

 

I'll end this with a question. If what you say is true, and it's vastly superior, why does an entire industry avoid doing it? Is it likely that they know, and simply choose not to do it because it's not applicable, inferior, or detrimental to their products? Or is it more likely that an entire industry has no idea what it is doing, and a random software engineer (not even a professional game developer) knows something that they do not? Gotta say, only one of these seems likely to me.


11 minutes ago, MageTank said:

I don't think this means what you think it means.

 

Again, you have zero evidence to prove that these people lack the expertise. Just because they do not do it does not mean they don't know how. This is something you cannot prove either, because it's an opinion.

 

I'll end this with a question. If what you say is true, and it's vastly superior, why does an entire industry avoid doing it? Is it likely that they know, and simply choose not to do it because it's not applicable, inferior, or detrimental to their products? Or is it more likely that an entire industry has no idea what it is doing, and a random software engineer (not even a professional game developer) knows something that they do not? Gotta say, only one of these seems likely to me.

Yes I can, and I can cite CppCon 2012-2016 to do it.

 

Because it's time-consuming to optimize for vectorization and hyper threading, and games studios are already under enormous time pressure.

 

It can't be detrimental, because I keep committing new vector code to Unreal 4, and it keeps being accepted.

 

It's very likely that an industry built primarily of third-tier programmers coming from games and multimedia curricula, rather than rigorous computer science or systems engineering, lacks optimization expertise as a whole, and it's reflected in the code that keeps being produced.

 

Random people like me and the man above constantly prove we do know things the games industry experts don't.

 

You don't understand your own probability if you think it's more likely that vectorization and hyper-threading can't be applied to games, which are almost entirely data-parallel and embarrassingly parallel applications, especially when I have standing evidence on my blog here on LTT. The games industry is not built of the best nor the second best, and a lot of it has to do with salary. Good C++ devs are $250,000-salary devs and higher (Herb Sutter, CppCon). The games industry lives off of $40,000-salary devs, primarily fresh out of high school and college, with a few senior devs making roughly $100,000 a year. Real experts don't work in that industry. They work on supercomputers, or at Microsoft, Red Hat, Canonical, SAP, Intel, IBM, Amazon, Facebook, Twitter, Google, and other companies who pay very competitive salaries for that expertise.

 


1 minute ago, patrickjp93 said:

Yes I can, and I can cite CppCon 2012-2016 to do it.

 

Because it's time-consuming to optimize for vectorization and hyper threading, and games studios are already under enormous time pressure.

 

It can't be detrimental, because I keep committing new vector code to Unreal 4, and it keeps being accepted.

 

It's very likely that an industry built primarily of third-tier programmers coming from games and multimedia curricula, rather than rigorous computer science or systems engineering, lacks optimization expertise as a whole, and it's reflected in the code that keeps being produced.

 

Random people like me and the man above constantly prove we do know things the industry experts don't.

 

You don't understand your own probability if you think it's more likely that vectorization and hyper-threading can't be applied to games, which are almost entirely data-parallel and embarrassingly parallel applications, especially when I have standing evidence on my blog here on LTT. The games industry is not built of the best nor the second best, and a lot of it has to do with salary. Good C++ devs are $250,000-salary devs and higher (Herb Sutter, CppCon). The games industry lives off of $40,000-salary devs, primarily fresh out of high school and college, with a few senior devs making roughly $100,000 a year.

 

Where in that video do they say that the entire gaming industry doesn't know what they are doing? I'd rather not watch an entire hour-long video.


8 minutes ago, MageTank said:

Where in that video do they say that the entire gaming industry doesn't know what they are doing? I'd rather not watch an entire hour-long video.

I used that video to prove random people can know things experts don't, which you discounted. And sorry, but the whole video is valuable. As for the whole games industry, I'd point you to Mike Acton's talk in the previous reply.

 

If I can get 10x speedups over industry-standard code, the industry has a problem with lack of expertise, plain and simple.
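For context on what "vector code" means in this exchange, here is a small sketch (my own, not the author's actual benchmark): the same elementwise add written as a scalar loop and as hand-written SSE intrinsics, which process four floats per instruction. Real-world gains depend on data layout and memory bandwidth, so the 10x figure above is the claim under discussion, not something this sketch demonstrates.

```cpp
#include <cassert>
#include <xmmintrin.h>  // SSE1 intrinsics (x86 only)

// Plain scalar loop: one float per iteration.
void add_scalar(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; ++i) out[i] = a[i] + b[i];
}

// Hand-vectorized SSE version: four floats per _mm_add_ps.
void add_sse(const float* a, const float* b, float* out, int n) {
    int i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);   // unaligned load of 4 floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb));
    }
    for (; i < n; ++i) out[i] = a[i] + b[i];  // scalar tail
}

int main() {
    float a[6] = {1, 2, 3, 4, 5, 6}, b[6] = {10, 20, 30, 40, 50, 60};
    float s[6], v[6];
    add_scalar(a, b, s, 6);
    add_sse(a, b, v, 6);
    for (int i = 0; i < 6; ++i) assert(s[i] == v[i]);  // same results
    return 0;
}
```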


Just now, patrickjp93 said:

I used that video to prove random people can know things experts don't, which you discounted. And sorry, but the whole video is valuable. As for the whole games industry, I'd point you to Mike Acton's talk in the previous reply.

 

If I can get 10x speedups over industry-standard code, the industry has a problem with lack of expertise, plain and simple.

I didn't discount your knowledge. I simply brought up a scenario in which, statistically speaking, the "random people" were likely to be incorrect. You've yet to prove otherwise, so such an assumption is safe to make, is it not? My entire post was to point out that the industry's "lack of expertise" was your opinion, and not a fact, simply because you said you stated no opinions. Also, using another man's opinion to bolster your claims still does not make it a fact.

 

As for you getting 10x speedups over industry-standard code, I ask again: post the evidence (along with a methodology by which a third party can review said evidence). Don't hit me with an "it's my secret closed-source code!", because that becomes hearsay, which doesn't fly around me.


8 minutes ago, MageTank said:

I didn't discount your knowledge. I simply brought up a scenario in which, statistically speaking, the "random people" were likely to be incorrect. You've yet to prove otherwise, so such an assumption is safe to make, is it not? My entire post was to point out that the industry's "lack of expertise" was your opinion, and not a fact, simply because you said you stated no opinions. Also, using another man's opinion to bolster your claims still does not make it a fact.

 

As for you getting 10x speedups over industry-standard code, I ask again: post the evidence (along with a methodology by which a third party can review said evidence). Don't hit me with an "it's my secret closed-source code!", because that becomes hearsay, which doesn't fly around me.

You misunderstood the variables at play then, because your forecast sucks.

 

I've left you an open book. Read.

 

No, it's a fact. You just don't want to accept that reality.

 

True, but when those statements are based on the raw data freely available in the talks and in the pdfs provided on GitHub in the conference repository, it's a fact.

 

Already done. See my blog here on LTT, "SIMD Optimization". I believe I actually invited you onto that blog personally once.

 

You can see the code yourself, the compiler version, the OS, the kernel version (I was on Linux because Win10 makes my MacBook Pro thermal throttle b/c it can't use the iGPU), the hardware, and see my results.

 

 


Just now, patrickjp93 said:

You misunderstood the variables at play then, because your forecast sucks.

 

I've left you an open book. Read.

 

No, it's a fact. You just don't want to accept that reality.

 

True, but when those statements are based on the raw data freely available in the talks and in the pdfs provided on GitHub in the conference repository, it's a fact.

 

Already done. See my blog here on LTT, "SIMD Optimization". I believe I actually invited you onto that blog personally once.

 

You can see the code yourself, the compiler version, the OS, the kernel version (I was on Linux because Win10 makes my MacBook Pro thermal throttle b/c it can't use the iGPU), the hardware, and see my results.

I didn't misunderstand the variables; there were only two, lol. Either they have the expertise, or they don't. You cannot prove someone doesn't know something unless they specifically say they do not, or are proven wrong. You generalized an entire industry, which means you must have evidence that every single AAA studio lacks the expertise. If so, present it, or give up and admit it was your opinion.

 

As for your blog, you did invite me to it. I recall you admitting such a scenario was impossible due to a severe lack of memory bandwidth. Why is it that, all of a sudden, it's a good idea to implement it in gaming? Did a new memory subsystem appear out of thin air, making your theory (I use "theory" because that is exactly what it is, a theory) suddenly possible? If so, I'd love to see it.

 

I certainly hope you are not hinging your entire basis on memory bandwidth. This is an industry whose consumers swear memory bandwidth doesn't matter, lol.


59 minutes ago, patrickjp93 said:

In this case I've stated no opinions, so your response is both inane AND baseless.

http://www.dictionary.com/browse/opinion

http://www.dictionary.com/browse/fact

 

Don't make me break out the results.


9 minutes ago, MageTank said:

I didn't misunderstand the variables; there were only two, lol. Either they have the expertise, or they don't. You cannot prove someone doesn't know something unless they specifically say they do not, or are proven wrong. You generalized an entire industry, which means you must have evidence that every single AAA studio lacks the expertise. If so, present it, or give up and admit it was your opinion.

 

As for your blog, you did invite me to it. I recall you admitting such a scenario was impossible due to a severe lack of memory bandwidth. Why is it that, all of a sudden, it's a good idea to implement it in gaming? Did a new memory subsystem appear out of thin air, making your theory (I use "theory" because that is exactly what it is, a theory) suddenly possible? If so, I'd love to see it.

 

I certainly hope you are not hinging your entire basis on memory bandwidth. This is an industry whose consumers swear memory bandwidth doesn't matter, lol.

You speculated on the likelihood that they do. The variables at play in reaching that conclusion are many.

 

Yes you can; exams do that all the time.

 

I do; simply watch ALL of those CppCon videos.

 

I stand corrected; Prysin invited you.

 

No, the scenario is not impossible. Once again, selective reading and quoting is a nasty habit of yours. This is the difference between microbenchmarking and performance profiling. Obviously you will be running multiple calculations per vertex, and the entire vertex list will be in cache if you're smart about it. Thus bandwidth won't be an issue unless you start the task without prefetching the data.

 

It is a good idea. It gets a 10x speedup now, but it should also be used in conjunction with more complex physics for the best effect. Amdahl's Law and Gustafson's Law are still in effect. Memory may be the Achilles' heel to getting the entire list iterated over, but that doesn't mean you can't do more than just translate the vertex list. Scaling, shearing, rotation, and translation all happen in that exact order. You'd be stupid to iterate over the list 4 times rather than do all 4 ops to each vertex as it comes along. You increase the compute cycles and work done, but you don't increase the runtime.

 

It doesn't matter yet, because the code they write is still scalar.
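The single-pass idea described above might look like this in a simplified 2D form (names and numbers are mine, and rotation is left out to keep the arithmetic easy to follow): each vertex is touched once while it's hot in cache, so arithmetic per element goes up but memory traffic does not.

```cpp
#include <cassert>
#include <vector>

struct Vec2 { float x, y; };

// One pass over the vertex list, applying scale -> shear -> translate per
// vertex, instead of three separate sweeps over the whole list. This is
// the "do all the ops as each vertex comes along" approach: more work per
// element, but the list is only streamed through memory once.
void transform(std::vector<Vec2>& verts,
               float sx, float sy,    // scale factors
               float shx,             // shear x by y
               float tx, float ty) {  // translation
    for (Vec2& v : verts) {
        v.x *= sx;  v.y *= sy;        // scale
        v.x += shx * v.y;             // shear
        v.x += tx;  v.y += ty;        // translate
    }
}

int main() {
    std::vector<Vec2> verts{{1, 1}, {2, 2}};
    transform(verts, 2, 2, 0.5f, 10, 10);
    // (1,1): scale -> (2,2), shear -> (3,2), translate -> (13,12)
    assert(verts[0].x == 13.0f && verts[0].y == 12.0f);
    return 0;
}
```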


Guest
This topic is now closed to further replies.

