'Batman: Arkham Knight' PC Players Slam Rocksteady Studios' Free Game Compensation Gift [Update 14]

Rohith_Kumar_Sp

 

Aight, your lack of reading comprehension makes me want to take a 12 gauge to my own face. You are a brick wall that refuses to read and understand what someone says if it's contrary to your own ideas. You are wrong, and I have said why. Good luck in your endeavors of sticking poles in places they don't belong.

.

 

Help the world get rid of stupidity ;)

 

I've read everything you wrote, but I disagree with you, and I've used sources to back up my points and disprove yours. If you are right, prove it.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro

It's not my fault that you don't want to read. You're the one refusing to do your end of the work. I have told you why GameWorks games are not up to par. You refuse to understand why what I say is more likely correct than not.

Now I KNOW you are trolling: GameWorks gives a SMALL performance impact? Really? OK, let's see:

NOTE that I do not say there is no performance impact. You are running extra models, effects, processing, etc., which require more power to draw. Obviously your frame rate will go down.

We can have effects like what's shown below without straining a graphics card very hard. It works in tech demos, so why does it not work just as well in games? Because of the middleman that put it there: the developers.

.


I've literally shown you several benches showing how much single GameWorks effects cost in performance. And if you start adding them up, the total performance hit is quite substantial. The few that cost little also give little graphical improvement compared to industry standards, e.g. FXAA (so why bother?).

When HairWorks takes a full Titan Black just to render one Witcher wolf at 1080p60 in a tech demo, it disproves your point that these tech demos function perfectly. Not even quad Titan X SLI would run any open-world game well if all GameWorks effects were as taxing as their tech demos.

And how can a dev mess up GameWorks when it is delivered as black-boxed DLLs they cannot change, nor even see the code of? They just call functions in the DLLs. There's not much for the dev to mess up. Do you even know what a function is in programming? You don't seem to understand GameWorks or how it works for the dev.
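
For illustration, this is roughly what "just calling functions in a DLL" looks like. A minimal sketch, assuming Windows; the library name and the two function names ("VendorEffects.dll", "InitHair", "RenderHair") are invented for the example and are not the actual GameWorks API:

#include <windows.h>

//Hypothetical black-boxed effect library: only the binary and its exported
//function names are visible to the dev; the implementation is opaque.
typedef int  (*InitHairFn)(void* device);
typedef void (*RenderHairFn)(float dt);

int main() {
    HMODULE lib = LoadLibraryA("VendorEffects.dll"); //opaque binary, no source
    if (!lib) return 1;
    InitHairFn   initHair   = (InitHairFn)GetProcAddress(lib, "InitHair");
    RenderHairFn renderHair = (RenderHairFn)GetProcAddress(lib, "RenderHair");
    if (initHair && renderHair && initHair(nullptr) == 0)
        renderHair(0.016f); //all the dev can do is call the exported interface
    FreeLibrary(lib);
    return 0;
}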

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro

And obviously you know better, even though several times before (including today) you've said you don't have the knowledge for game creation. What makes you an expert? Is GameWorks really a "black box", or is that more crap you spew?

You say it takes a single Titan Black to render a wolf, but are you sure? You do know they exist in packs in-game, and yet your frame rate doesn't tank to zero. Demos are always done at the extreme end and never actually implemented as such. But what do I know? Obviously you're the blessing in disguise who knows everything. :rolleyes:

You don't know what you're talking about, yet you act like you know every detail under the sun when you clearly don't. Even though I haven't played with GameWorks nearly to the extent actual developers have, I know what you're saying is incorrect. Libraries can be altered; they're bases to work from, not copy-and-paste.

.

Here's a thought.

 

Release some modding tools, admit you're incompetent, and let the community have a crack at fixing what you should have fixed before ever releasing the damned game.

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs

Actual high-level game developers call it a black box and criticize the hell out of it. Are they not competent either? http://linustechtips.com/main/topic/137965-developers-publicly-criticze-nvidias-gameworks-program-on-twitter-for-its-blackbox-nature/

Look up their names; they are not small fish, but some of the biggest in the industry.

This is the excerpt from the NVidia tech demo:

[embedded video]

The point is that these tech demos are useless. They take the effects to level over 9000 in a completely unrealistic way. Do they technically work? Probably. But games don't end up looking like that, because the performance hit is unrealistically high. And that was two years ago. Implementation is all that matters, and the fact is that these GameWorks effects are excessively taxing on performance with little to show for it. And again, the performance hit is higher on AMD, due to the black-boxed nature of it.

I've shown you NVidia's own benches for HairWorks in Witcher 3.

Calling a function is not copy-paste. Not sure what you are getting at. But you might fall into the same trap as @patrickjp93: just because something is technically possible does not mean their contract allows for it. Either way, I trust the actual devs more on this issue than anyone else. Call me naive, but at least I know they have inside knowledge and experience, and are competent on the matter.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro

The biggest fish in the industry didn't even really know how to do multithreading until 2-3 years ago. Game programmers are, on average, near the bottom of the barrel where quality is concerned, and breadth as well. If devs were actually good at what they do, they wouldn't need Nvidia's DLLs. Is it really so expensive to have an expert in classical mechanics and a programmer with a background in calculus and linear algebra work together and build an equivalent physics system? My old lighting algorithm ended up being used in the Unreal Temple of Time demo. I wrote that in my senior year of high school while doing the optics chapter of AP Physics B+C. Fluid mechanics is harder, but come on...

 

Your definition of competent is astounding... 

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

A couple of points:

Just because EA's games didn't have multithreading doesn't mean he was not competent to do it (remember that code written now might not hit the market until several years later). Time, resources and compatibility might all be problems. Just look at him preferring to have DX12/Win10 as the min spec for holiday 2016. That would open up a world of opportunities, but as things are now it's not realistic, for market reasons.

Games are real time. Everything has to be perfectly synced, which creates issues you often don't see in other software (see the sketch after this list).

Devs implement GameWorks because they don't have the resources to make such things themselves (money, essentially). Also, the publisher/owner of the dev can simply accept the sponsorship from NVidia, and the dev is then forced to implement the mess, even if they don't want to and are more than competent to make such effects themselves.

If non-gaming programmers are so much more competent, why don't they work in the gaming business? Supply and demand, my friend. Using that against gaming is pointless.

All in all, with high-compute software you build the hardware systems for the software. In gaming it's the other way around, and you face an extreme diversity of hardware setups; the specifications are vastly different.
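
On the real-time point, here is a minimal sketch of the classic fixed-timestep game loop that keeps simulation in sync with wall-clock time; update() and render() are hypothetical stand-ins, and the frame cap only exists so the example terminates:

#include <chrono>
#include <cstdio>

static void update(double dt) { /* advance the simulation by dt seconds */ }
static void render()          { /* draw the current state */ }

int main() {
    using Clock = std::chrono::steady_clock;
    const double dt = 1.0 / 60.0; //simulate in fixed ~16.7 ms steps
    double accumulator = 0.0;
    auto previous = Clock::now();

    for (int frame = 0; frame < 600; ++frame) { //stand-in for "while the game runs"
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;
        while (accumulator >= dt) { //catch up if rendering ran long,
            update(dt);             //so the simulation never drifts from real time
            accumulator -= dt;
        }
        render(); //render whatever state we have
    }
    std::puts("done");
    return 0;
}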

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro

The problem with that argument is that multithreading has been possible in code since well before 2000: POSIX and platform threads have been around for decades, and threads became part of the C++ standard itself with C++11.
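
For reference, a minimal sketch using standard C++11 threads (pre-2011 code would have used pthreads or Win32 threads for the same thing):

#include <thread>
#include <vector>
#include <cstdio>

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i)
        workers.emplace_back([i] {
            std::printf("worker %d running\n", i); //output order is nondeterministic
        });
    for (std::thread& t : workers)
        t.join(); //wait for all workers to finish
    return 0;
}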

 

Games programmers are paid garbage by comparison.

 

No, you build the software and hardware together. You build a cluster OS for the hardware you intend to use.

 

And code multiversioning has been a thing forever.
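
One sketch of what that multiversioning can look like, assuming GCC 6+ or a recent Clang: the target_clones attribute makes the compiler emit one body per listed instruction set, plus a resolver that picks the best clone at load time.

#include <cstddef>

//One source-level function body; the compiler generates an AVX2 clone,
//an SSE4.2 clone and a baseline clone, dispatched automatically.
__attribute__((target_clones("avx2", "sse4.2", "default")))
void scale(float* a, std::size_t n, float s) {
    for (std::size_t i = 0; i < n; ++i)
        a[i] *= s; //each clone auto-vectorizes this loop for its target
}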

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

So uhh, yeah. You took it away while I was out and about. I don't really have anything to add, nor do I want to. Developers are incompetent for the most part. The Creation Engine isn't complete garbage, yet Bethesda still manages to make it worse. I'm by no means an expert in programming in general, but I can see flaws in code processes when I see them. Things just aren't done logically and efficiently.

.

std::vector<VisualObject> movableObjects;

//initialize everything, set starting coordinates and directions
//other prep work

//FUNCTIONS
void update(float updateTimeMillis) {
    //move view point, make decisions on color changes, texture loading, etc.
    ...
    //check for collisions in parallel, using in-lined function calls (no stack frame created)
    //and loop-unrolling to make explicit use of branching AVX 256
    //assume there is a mutex or semaphore to lock an object for analysis/deletion and a smart
    //function to skip if locked and come back built into the calls to inline function
    //collisionUpdate
    #pragma omp parallel for
    for(int i = movableObjects.size()-1; i >= 0; i -= 8) {
        movableObjects[i].collisionUpdate(); //will run each type of object's unique function
        movableObjects[i-1].collisionUpdate();
        ...
        movableObjects[i-7].collisionUpdate();
    }
    //collided objects now destroyed or had states updated to change physical effects routines,
    //update all vertices of all objects in parallel,
    //ensure dummies exist if not in multiples of 8 to take advantage of AVX 256 in loop unrolling
    #pragma omp parallel for
    for(int i = movableObjects.size()-1; i >= 0; i -= 8) {
        movableObjects[i].update(updateTimeMillis);
        movableObjects[i-1].update(updateTimeMillis);
        ...
        movableObjects[i-7].update(updateTimeMillis);
    }

    VisualObject.draw(); //draw the scene and all objects in it.
}

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Oh, don't test me now...

Brb, grabbing my notes.

I don't see a problem so far. Is that the code you were talking about in the post above?

.

This is just me giving game programmers the finger. This isn't hard. You design to be parallel from the beginning. You design to have good data management from the beginning. This is missing all the OpenGL stuff from my graphics class, but you see my point. Parallel isn't hard, and OpenMP makes it a joke. There's no excuse not to use it UNLESS there's a specific scenario where you need finer thread control. Heck, even basic optimization to take advantage of newer instructions via implicit parallelism IS NOT HARD!

It's just so disappointing to see the games industry floundering over stuff that was literally a cakewalk for a college sophomore (when I took HPC). You design for it from the start.
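
As a concrete measure of "OpenMP makes it a joke", here is a complete parallel reduction where the only parallel machinery is a single pragma (compile with e.g. g++ -O2 -fopenmp):

#include <cstdio>

int main() {
    const int n = 1 << 20;
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum) //each thread keeps a private
    for (int i = 0; i < n; ++i)               //partial sum, combined at the end
        sum += static_cast<double>(i) * i;
    std::printf("sum of squares = %.0f\n", sum);
    return 0;
}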

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

That's really simple. It brings a tear to my eye, because I'm tired of seeing dozens and dozens of lines of code to do one thing.

.

You know what the real beauty of that design is? You never have to touch it again. It's modular, extensible, concise, readable, and scales wonderfully.

 

Make as many complex visual objects as you want. That code never has to change. Make their update functions as intricate as you want. The core stays the same. That boilerplate code is transferable to any game I want to make in OpenGL. You can probably do the same with DirectX. And in DX12, for the equivalent of the VisualObject.draw() function that literally just loops through the children and draws them all (the whole game environment is a series of children of VisualObject, other than lights and sound objects), you can do the following for parallel draw calls. It's so easy EA could do it!

 

class VisualObject {
    VisualObject* parent = nullptr;
    std::vector<VisualObject> children;
    int omp_max_thread_count = omp_get_max_threads(); //if quad I5, will get 4, if quad I7, will get 8

    void addChild(VisualObject &v) {
        v.setParent(this);
        children.push_back(v);
    }

    void draw() {
        //draw children in parallel using dynamic scheduling in case some objects
        //have complex functions that take far longer than others, the other threads able to
        //"steal" work from the thread taking the most time, auto scaling with core count
        #pragma omp parallel for schedule(dynamic)
        for(int i = 0; i < (int)children.size(); i += 8) {
            children[i].draw();
            children[i+1].draw();
            ...
            children[i+7].draw();
        }
    }

    struct {
        bool operator()(const VisualObject &v1, const VisualObject &v2)
        {
            return v1.complexity() < v2.complexity();
        }
    } VOComparator;

    //Sort Visual Objects by draw complexity into equal-sized buckets for the draw() function
    //to handle, most expensive draws first per thread.
    void loadBalance() {
        //If on GCC/Clang/ICC, compile with -fopenmp and -D_GLIBCXX_PARALLEL to get parallel sort
        std::sort(children.begin(), children.end(), VOComparator);
        int chunkSize = children.size()/omp_max_thread_count;
        std::vector<VisualObject> sortedSet(children.size());
        #pragma omp parallel for
        for(int i = 0; i < omp_max_thread_count; i++){
            for(int j = 0; j < chunkSize; j++){
                sortedSet[i * chunkSize + j] = children[j * omp_max_thread_count + i];
            }
        }
        children = sortedSet;
    } //end loadBalance
}; //end VisualObject class

 

And again, write once, take anywhere! Seriously, it's not that hard. Anyone with basic knowledge of C++ and some willingness to do algebra can be miles ahead of most game-industry programmers. You cannot be more optimal than that without going into native threads or doing some unrolling of the inner loop of the load-balance function. Go ahead and try; it will take you forever to beat it. At least 95% optimal CPU performance and 99% CPU usage with minimal effort. John Carmack can eat his heart out, because he thought I was crazy when I showed him this at GDC. He said it would never scale well without fine, native thread control and would be impossible to debug. The problem is, built like that, the only things that can go wrong are a driver error or one of your draw/update functions itself being broken. That's the beauty of the high-level abstraction OpenMP gives you. It's not suited for everything, but just having parallel for loops for the major portions of the CPU code saves programming time and squeezes out a great deal of performance that is platform-independent.

Any game dev who says optimization is hard when they can't even match this is so full of crap. It's not difficult until the last 5%, and each 1% better is exponentially harder to get. If you have Cilk Plus, the loop unrolling can be done in one line and will be a hair faster, but that's the bulk of the work: get threads launched as high up in the program as possible, as few times as possible, with as little cross-talk as possible, and you've got most of it.
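
For the Cilk Plus remark, a hedged sketch of what that one-liner could look like, assuming Intel Cilk Plus is available (<cilk/cilk.h> in ICC, or GCC 4.9-7 with -fcilkplus); VisualObject here is a stand-in for the class in the earlier sketch:

#include <cilk/cilk.h>
#include <cstddef>
#include <vector>

struct VisualObject { void update(float) { /* per-object work */ } };
std::vector<VisualObject> movableObjects;

//cilk_for replaces the hand-unrolled OpenMP loops: the runtime
//handles chunking and work stealing on its own.
void updateAll(float updateTimeMillis) {
    cilk_for (std::size_t i = 0; i < movableObjects.size(); ++i)
        movableObjects[i].update(updateTimeMillis);
}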

 

@LukaP, @GoodBytes

 

I'm sure you two would have a suggestion or two, but it's insane that so many game devs don't even know OpenMP exists and how easy it can make their lives. And it's laughable that so many games are so poorly optimized on the CPU side.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Sure, and when did consumer CPUs actually support such things? I guess you can argue the Pentium 4 was capable, with a second hyperthreaded thread, but Core 2 wasn't launched until 2006. Even then, multicore did not become properly mainstream until years later. Why would ANY game dev have made a multithreaded engine in 2000?

Here's what you don't seem to factor in: this is a business, where games are made for a market based on what hardware is available. Just because something is technically possible by no means implies it makes sense for a company to do it. In fact, game/graphics engines are reused and built upon as much as possible, to save time and money.

The big technical leaps come when a game dev finally has to build a new engine from the ground up. That's when these new things get implemented.

The point is that you build a server system, both hardware and software, together (and yes, you do build the hardware for the software if, for instance, you need it to support CUDA). But you should get the point. It's not the same as a consumer market like gaming, where you have a plethora of different systems you have no influence on but have to support.

Again, just look at EA, who wants DX12/Win10 to be the min spec. They could technically do that right now, but what would the point be if you exclude over 50% of your market? That would be highly incompetent. Concurrent asynchronous compute is also very much possible now, except NVidia cards will crash and burn using it. By your ideology, this should just be forced through, damn the consumer base. That's just not how the gaming market works, incompetent programmers or not.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro

Anyone who doesn't prepare for the future will be b*tchslapped by it. Even those who do aren't immune. Game devs should have been ready and should have had extensible designs.

I know what the business is, and guess what? My method saves money and time and makes it possible for the software to run on just about anything.

Straw-man argument. It's one thing to have vendor-specific code for GPUs, but x86 is x86. You can multi-version the code with a simple comment to handle legacy support. It's not that hard, it's not expensive, and there's no excuse. It costs studios way more money to keep reworking their engines when they could have had a pristine core that never needed to be modified again.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

..why don't you just make a plug-and-play game engine that looks and functions like Scratch but is actually efficient? :P

I'm one man. I can't write 2 million lines that fast. If people want to help, I can do lighting, but cloth and fluid are differential-equation approximations, and I never took diff. eq.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

So despite all your huffing and puffing about how easy it is to make an optimized game, you still aren't up to the task yourself...

How about you read up on diff eq... perhaps there are some skeletons in the closet there that fuck with the rest.

Code is, after all, like a puzzle game. One part can make or break it unless it ties in perfectly with the rest.

Even notable indie games often have 10+ people on the team and use the Unity engine as a base. Not to mention they aren't developed while 20 credit hours deep into senior year of university. You're comparing apples and oranges. What I need is a team that understands that design must come first, that code style must be adhered to for the sake of debugging and optimization, and that commenting must be thorough where applicable so the engine can be extended as quickly and cheaply as possible (when you come back to code you wrote even two weeks ago without comments, it takes time just to figure out what it does and how it works). Once the engine is done, using it is up to game designers. I'll always be working on the engine side and consulting on performance. I'll be the first to admit I lack some of the skills needed to envision a complex game flow through an environment. I can give you some of the sharpest tools made by man, but I don't necessarily have the capability to use them to the degree Bethesda, Square, Bioware, and EA do. I can write a good story line and character backgrounds, but I'll remain one of the unsung heroes if I get into the games industry.

Diff-eq approximation does boil down to Taylor series manipulations, but I haven't done the studying. I had to do discrete math and linear algebra for my major, and I did the physics sequence for our science credits; I could never fit in diff eq. Taylor series are themselves embarrassingly parallel and can be done tail-recursively for a shorthand but still incredibly fast loop structure. After 600 places or so you hit diminishing returns, so under AVX 256 that's 3 ops per place, or 1800 ops / 8 = 225 AVX 256 instructions. Even on one core at 2 GHz, that's child's play for up to four. I could go even more balls-to-the-wall and offload it to those pesky iGPUs no one seems to have any love for, with a simple:

#pragma offload target(gfx) in(vertice_vec[0:vertice_vec.size()]) out(vertice_vec[0:vertice_vec.size()])
{
    //parallel Taylor approximation reduction
    //parallel vertex manipulation
}

And hey, suddenly, at max settings and realism, RAM speed will start to matter, since you need bandwidth for this and for ray tracing. Finally a reason for 2400+ MHz RAM will exist for the layman gamer ;)
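
A minimal CPU-side sketch of the "embarrassingly parallel Taylor series" idea, using exp(x) as the example function and far fewer than 600 terms, since double precision converges long before that. The 1/n! coefficients are precomputed once, leaving every term independent:

#include <cmath>
#include <cstdio>

int main() {
    const int terms = 64; //double precision converges well before 600 places
    const double x = 1.5;

    double inv_fact[terms]; //precompute 1/n! once, up front
    inv_fact[0] = 1.0;
    for (int n = 1; n < terms; ++n)
        inv_fact[n] = inv_fact[n - 1] / n;

    double sum = 0.0;
    #pragma omp simd reduction(+:sum) //terms are independent, so the loop
    for (int n = 0; n < terms; ++n)   //can vectorize (at the cost of pow())
        sum += std::pow(x, n) * inv_fact[n];

    std::printf("series: %.15f  libm exp: %.15f\n", sum, std::exp(x));
    return 0;
}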

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Please don't use Bethesda as an example. Their stuff reeks of shitty coding and quality control.

EA is a publisher; they do not necessarily make the shit. They give you the money and tools to make shit.

Game DEVS would be ArenaNet, Crystal Dynamics, Naughty Dog, Valve, Blizzard and DICE...

EA, Nexon (although they do help with development), Sony Entertainment and Microsoft (the latter two for consoles) are publishers. Much like book publishers, they make the game come to life through asset sharing, financial backing and sheer industry influence.

A good example of what EA does:

DICE made the Frostbite engine. EA liked what they saw and BOUGHT the RIGHTS and the DEVS related to Frostbite. EA then allows the use of the Frostbite engine and has its OWN crew that works on it with game devs working on other titles, such as Dragon Age...

Basically, they do what Nvidia does with GameWorks. They send engineers and tech to speed up or aid the process. They do not, however, really make the game.

Fair enough.

Is Crystal Dynamics still around? I haven't seen them since the Gex days.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd

Square Enix owns them. They're the ones making the Tomb Raider reboots.
