VegetableStu

[STREAM OVER] Nvidia Gamescom 2018 Keynote Live Thread!


Posted · Original Poster
7 minutes ago, pas008 said:

So math calculations aren't needed for rendering?

Remember the early CUDA days, please.

3 hours ago, leadeater said:

Calculations that are NOT matrix operations.

OH SHIT COMMENT LOOP RUN

 

15 minutes ago, VegetableStu said:

OH SHIT COMMENT LOOP RUN

 

Lol, if anyone remembers early CUDA, they would see this potential.

Games have many uses for matrices, considering they are built using them in many areas.

And Tensor cores being able to calculate faster is a given, even in the smaller form.

It lifts weight off of CUDA for now, until a library is formed for other aspects, like CUDA was in the beginning, if people remember.

2x faster is still faster, minus 20%.

 

This is why I think the price hike, and the stated numbers on that Infiltrator demo.

That demo had nothing to do with RT.

 

 

42 minutes ago, pas008 said:

So math calculations aren't needed for rendering?

Remember the early CUDA days, please.

You are simply ignoring the limitations of the Tensor cores; I have posted them. That is the only kind of matrix operation they can do, that's it, only that. There's nothing to remember: CUDA is a programming platform, and it has had Tensor functions added to it for math tasks that can use the Tensor cores. You can't just make them do something they can't do.

 

Tensor cores can't do "math calculations" in general; they can do a very select type of math calculation. So select, in fact, that it's a limited set of matrix operations with conditions you must adhere to, or you can't use the Tensor cores.

 

Quote

Basically, to utilize Tensor Cores, operations must be “matrix multiply and accumulate” (not all operations involved in DL would fit that, though a lot are, such as convolutions), such operations must be invoked with the appropriate parameters (1 and 2 — this is the job of the DL framework implementation), and the data must be FP16 (4), but the third point is a big factor: “Both input and output channel dimensions must be a multiple of eight.” — this might not be often satisfied depending on the matrix dimensions under computation.

If it's not this, it's not happening.

 

Edit:

It's like using a calculator that has only a multiplication button and only accepts 3-digit numbers. Want to add 2 + 5? Sorry, can't do that. Want to use 2-digit numbers? Nope, can't do that.
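The restrictions in the quote above can be sketched as a simple eligibility check. This is a minimal illustration in Python, not a real CUDA API; the helper name and the rules-as-code are assumptions based only on the quoted constraints (multiply-accumulate shape, FP16 data, dimensions a multiple of eight):

```python
import numpy as np

def tensor_core_eligible(a, b, c):
    """Hypothetical check mirroring the quoted rules: the op must be a
    matrix multiply-accumulate D = A @ B + C, inputs must be FP16, and
    the channel dimensions must be multiples of eight."""
    if a.dtype != np.float16 or b.dtype != np.float16:
        return False                       # data must be FP16
    m, k = a.shape
    k2, n = b.shape
    if k != k2 or c.shape != (m, n):
        return False                       # must form a valid D = A @ B + C
    return all(dim % 8 == 0 for dim in (m, k, n))   # multiple-of-eight rule

# A 16x16 FP16 problem satisfies every condition...
ok = tensor_core_eligible(np.zeros((16, 16), np.float16),
                          np.zeros((16, 16), np.float16),
                          np.zeros((16, 16), np.float16))

# ...but a 10x10 problem (or FP32 data) would fall back to ordinary cores.
bad = tensor_core_eligible(np.zeros((10, 10), np.float16),
                           np.zeros((10, 10), np.float16),
                           np.zeros((10, 10), np.float16))
print(ok, bad)   # True False
```

Anything that fails a check like this simply never reaches the Tensor cores, which is the whole point of the calculator analogy.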

13 minutes ago, leadeater said:

You are simply ignoring the limitations of the Tensor cores; I have posted them. That is the only kind of matrix operation they can do, that's it, only that. There's nothing to remember: CUDA is a programming platform, and it has had Tensor functions added to it for math tasks that can use the Tensor cores. You can't just make them do something they can't do.

 

Tensor cores can't do "math calculations" in general; they can do a very select type of math calculation. So select, in fact, that it's a limited set of matrix operations with conditions you must adhere to, or you can't use the Tensor cores.

 

If it's not this, it's not happening.

And you are telling someone who has watched Voodoo-type tech get reapproached,

and renewed, even SLI for 3 generations,

patches for hyper-threading up to high core counts, and multithreading,

engines adding SLI support long after they were created, even when many said it couldn't happen,

sound going from stereo to surround, still with 2 speakers.

And now you are telling me matrices can't be written into calculations and algorithms to make things faster?

Especially with a neural network of cores.

 

Hmm

 


Edit:

It's like using a calculator that has only a multiplication button and only accepts 3-digit numbers. Want to add 2 + 5? Sorry, can't do that. Want to use 2-digit numbers? Nope, can't do that.

 

 

That's why there are algorithms and libraries of code,

for other minds to find uses for.

You think crypto and the many other uses for CPUs and GPUs were established by one group?

No, many others find a way, like with the security vulnerabilities on CPUs.

It takes time.


I could see it now: enough CUDA cores to run games at 240-plus fps at 720/1080p,

RT cores and enough Tensor cores for ray tracing effects, then more Tensor cores to upscale to whichever resolution you want.

3 hours ago, pas008 said:

That's why there are algorithms and libraries of code.

Code libraries can only utilize the hardware capabilities on offer; that is the limitation that cannot be worked around on Tensor cores. CPUs have very few limitations on the operations they can do, and GPUs can do somewhat less than CPUs, but that is in relation to CUDA Cores/Stream Processors/Shader Cores or any other name used to refer to the shader pipeline and DirectCompute/GPGPU. These are all very flexible and can handle a wide variety of computation types on many different data types and structures; you write a library to do a common task or function that you or others want to use and that can run on the target hardware. You're not making the hardware do something it cannot do; you have to utilize the capabilities it has.

 

Tensor cores have a hardware limitation in the types of computation they can do and the data types they can work with; anything outside of those bounds is impossible to execute on Tensor cores. No library can get around that; you cannot make hardware do something it cannot do.

 

No DX11 library can execute on or make a non DX11 GPU run DX11 functions. It is simply impossible to get my old ATI X800 to run a DX11 game, it does not support it at the hardware level. This is the concept that we are limited to on Tensor cores but to a much stricter degree.
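The library argument above can be sketched as a dispatch decision: a library checks whether a problem fits the restricted fast path and otherwise takes the general one. A minimal sketch in Python, assuming made-up names and a stand-in eligibility rule (FP16 data, dimensions a multiple of eight); this is not how any real library is implemented:

```python
import numpy as np

def matmul_accumulate(a, b, c):
    """Illustrative dispatch: if the problem fits the restricted fast
    path (standing in for Tensor cores), take it; otherwise fall back
    to the general path (the ordinary shader pipeline)."""
    fits_fast_path = (
        a.dtype == np.float16 and b.dtype == np.float16
        and all(d % 8 == 0 for d in (a.shape[0], a.shape[1], b.shape[1]))
    )
    if fits_fast_path:
        # Same math either way; this branch is just the one the
        # specialized hardware could accelerate.
        return (a @ b + c).astype(np.float16)
    # General path: any shape, any supported dtype, but slower.
    return a @ b + c

# A 3x3 FP64 problem: no library trick can move this onto the fast path,
# it simply runs on the general-purpose hardware instead.
a = np.ones((3, 3)); b = np.ones((3, 3)); c = np.zeros((3, 3))
print(matmul_accumulate(a, b, c)[0, 0])   # 3.0
```

Either branch produces the same result; the library only decides where the work runs, it never extends what the hardware can do.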

 

2 hours ago, pas008 said:

I could see it now: enough CUDA cores to run games at 240-plus fps at 720/1080p,

RT cores and enough Tensor cores for ray tracing effects, then more Tensor cores to upscale to whichever resolution you want.

That could be done. Quality/errors would be a problem as you scale further and further past the original resolution, but it would potentially be a far better approach than current upscaling in both speed and quality.

 

Tensor cores can also be used for complex physics in games; that's already a use case that is widespread in science. Fluid dynamics for air in racing games, flight simulators, ballistics, etc.

 

I'd rather see them get used for much better AI in strategy games because those are usually terrible/stupid as hell. Or maybe NPC/enemies in FPS games that aren't totally thick and blind.

5 hours ago, leadeater said:

Code libraries can only utilize the hardware capabilities on offer; that is the limitation that cannot be worked around on Tensor cores. CPUs have very few limitations on the operations they can do, and GPUs can do somewhat less than CPUs, but that is in relation to CUDA Cores/Stream Processors/Shader Cores or any other name used to refer to the shader pipeline and DirectCompute/GPGPU. These are all very flexible and can handle a wide variety of computation types on many different data types and structures; you write a library to do a common task or function that you or others want to use and that can run on the target hardware. You're not making the hardware do something it cannot do; you have to utilize the capabilities it has.

 

Tensor cores have a hardware limitation in the types of computation they can do and the data types they can work with; anything outside of those bounds is impossible to execute on Tensor cores. No library can get around that; you cannot make hardware do something it cannot do.

 

No DX11 library can execute on or make a non DX11 GPU run DX11 functions. It is simply impossible to get my old ATI X800 to run a DX11 game, it does not support it at the hardware level. This is the concept that we are limited to on Tensor cores but to a much stricter degree.

 

That could be done. Quality/errors would be a problem as you scale further and further past the original resolution, but it would potentially be a far better approach than current upscaling in both speed and quality.

 

Tensor cores can also be used for complex physics in games; that's already a use case that is widespread in science. Fluid dynamics for air in racing games, flight simulators, ballistics, etc.

 

I'd rather see them get used for much better AI in strategy games because those are usually terrible/stupid as hell. Or maybe NPC/enemies in FPS games that aren't totally thick and blind.

Neural networks are getting more and more use cases.

Nothing is going to happen overnight, but having a NN in everyone's PC (especially programmers', engineers', etc.) can eventually change many things, with more complex languages involved.

I know it will be limited to GPU uses only, but after a while of people coding and using them, we probably will see more use cases for them at the GPU level too.

Sorry, I can't be so closed-minded.

12 hours ago, leadeater said:

You are simply ignoring the limitations of the Tensor cores; I have posted them. That is the only kind of matrix operation they can do, that's it, only that. There's nothing to remember: CUDA is a programming platform, and it has had Tensor functions added to it for math tasks that can use the Tensor cores. You can't just make them do something they can't do.

 

Tensor cores can't do "math calculations" in general; they can do a very select type of math calculation. So select, in fact, that it's a limited set of matrix operations with conditions you must adhere to, or you can't use the Tensor cores.

 

If it's not this, it's not happening.

 

Edit:

It's like using a calculator that has only a multiplication button and only accepts 3-digit numbers. Want to add 2 + 5? Sorry, can't do that. Want to use 2-digit numbers? Nope, can't do that.

Couldn't you simply make a 2-digit number work by adding a .0 at the end?

3 hours ago, Brooksie359 said:

Couldn't you simply make a 2-digit number work by adding a .0 at the end?

Well, that kinda isn't the point, but fine, whole numbers only :P.

7 hours ago, pas008 said:

I know it will be limited to GPU uses only, but after a while of people coding and using them, we probably will see more use cases for them at the GPU level too.

Sorry, I can't be so closed-minded.

It's not being closed-minded; hardware can only do what it can do. People can find as many amazing uses for Tensor cores as they like, but they can only ever do what the hardware is capable of. You're wanting or expecting the impossible: birds can fly and humans cannot, so unless we sprout wings we aren't flying, and unless Tensor cores actually get hardware changes that allow them to do more, they are limited to the functions they can do.

 

Limited functionality does not make them useless; you look at what they can do and find ways to utilize that capability.

 

And something you are probably not aware of: in AI, Tensor cores are used for what is called training. Once you have trained the AI, the actual functions you have trained it to do are not executed on Tensor cores. So you could use AI to find or develop better rendering methods, but you would execute them on something other than Tensor cores, like RT cores for example.

11 minutes ago, leadeater said:

Well, that kinda isn't the point, but fine, whole numbers only :P.

I am just saying there may be ways to manipulate the math to do those sorts of calculations using matrices. I mean, infinite series are used to calculate equations using polynomials. Maybe someone will find a way to use only matrix multiplication to replicate the other forms of math. Not saying it is for sure possible, but I am saying there is a chance it is.
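For what it's worth, there is a classic trick in exactly this spirit: addition can be replicated with nothing but matrix multiplication by encoding each number in a 2x2 shear matrix. A minimal sketch (the helper names `embed`/`extract` are just illustrative):

```python
import numpy as np

def embed(x):
    # Encode the number x in a 2x2 shear matrix. Multiplying two of
    # these matrices adds the encoded numbers in the top-right slot:
    # [[1, x], [0, 1]] @ [[1, y], [0, 1]] = [[1, x + y], [0, 1]]
    return np.array([[1.0, x], [0.0, 1.0]])

def extract(m):
    return m[0, 1]   # read the encoded number back out

print(extract(embed(2) @ embed(5)))   # 7.0
```

So a "multiply-only calculator" really can add, provided you are allowed to choose the encoding; whether such tricks also satisfy the other Tensor core restrictions is a separate question.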

12 minutes ago, leadeater said:

It's not being closed-minded; hardware can only do what it can do. People can find as many amazing uses for Tensor cores as they like, but they can only ever do what the hardware is capable of. You're wanting or expecting the impossible: birds can fly and humans cannot, so unless we sprout wings we aren't flying, and unless Tensor cores actually get hardware changes that allow them to do more, they are limited to the functions they can do.

 

Limited functionality does not make them useless; you look at what they can do and find ways to utilize that capability.

 

And something you are probably not aware of: in AI, Tensor cores are used for what is called training. Once you have trained the AI, the actual functions you have trained it to do are not executed on Tensor cores. So you could use AI to find or develop better rendering methods, but you would execute them on something other than Tensor cores, like RT cores for example.

Oh wow.

Do you realize all the SDKs and tools already available for Tensor AI?

Do you realize that when CUDA came out, people like you were closed-minded about its possibilities? But look what happened there.

And this is a neural network now.

1 minute ago, Brooksie359 said:

I am just saying there may be ways to manipulate the math to do those sorts of calculations using matrices. I mean, infinite series are used to calculate equations using polynomials. Maybe someone will find a way to use only matrix multiplication to replicate the other forms of math. Not saying it is for sure possible, but I am saying there is a chance it is.

Exactly.

Maybe even translate coding languages.

8 minutes ago, Brooksie359 said:

I am just saying there may be ways to manipulate the math to do those sorts of calculations using matrices. I mean, infinite series are used to calculate equations using polynomials. Maybe someone will find a way to use only matrix multiplication to replicate the other forms of math. Not saying it is for sure possible, but I am saying there is a chance it is.

Correct, but Tensor cores are limited in the matrix functions they can do as well, so you have to make sure you are adhering to those limitations. Tensor cores can't just multiply any two matrices; strictly it's multiply two and accumulate another, though you could just accumulate an all-zero matrix to get around that. However, the resulting output is restricted as well.

 

Quote

“Both input and output channel dimensions must be a multiple of eight.”
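The all-zero-matrix workaround mentioned above is easy to show concretely. A minimal sketch in Python (plain NumPy standing in for the hardware's multiply-accumulate):

```python
import numpy as np

# The unit computes D = A @ B + C, never a bare A @ B. To get a plain
# product, you accumulate into an all-zero C.
a = np.arange(64, dtype=np.float16).reshape(8, 8)
b = np.eye(8, dtype=np.float16)
c = np.zeros((8, 8), dtype=np.float16)   # the "all zero matrix"

d = a @ b + c           # multiply-accumulate with a zero accumulator
print(np.array_equal(d, a @ b))   # True
```

Note the 8x8 shapes: even the workaround still has to respect the multiple-of-eight dimension rule quoted above.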

 

11 minutes ago, pas008 said:

Oh wow.

Do you realize all the SDKs and tools already available for Tensor AI?

Do you realize that when CUDA came out, people like you were closed-minded about its possibilities? But look what happened there.

And this is a neural network now.

Yes, more than you. Look, you don't understand this as much as you need to. Tensor cores cannot be made to do functions they can't do; you have to accept that. CUDA also developed over time at the hardware level, not just because people got better at using it. GPUs can do more now than they ever could before; that is why CUDA is more useful now, not purely because people got better at writing CUDA code.

 

I really wish you would stop just throwing out buzzwords with no understanding of what they mean.

7 minutes ago, leadeater said:

Yes, more than you. Look, you don't understand this as much as you need to. Tensor cores cannot be made to do functions they can't do; you have to accept that. CUDA also developed over time at the hardware level, not just because people got better at using it. GPUs can do more now than they ever could before; that is why CUDA is more useful now, not purely because people got better at writing CUDA code.

 

I really wish you would stop just throwing out buzzwords with no understanding of what they mean.

Tensor isn't going to get that same treatment, lol.

 

 

57 minutes ago, leadeater said:

Yes, more than you. Look, you don't understand this as much as you need to. Tensor cores cannot be made to do functions they can't do; you have to accept that. CUDA also developed over time at the hardware level, not just because people got better at using it. GPUs can do more now than they ever could before; that is why CUDA is more useful now, not purely because people got better at writing CUDA code.

 

I really wish you would stop just throwing out buzzwords with no understanding of what they mean.

 

41 minutes ago, pas008 said:

Tensor isn't going to get that same treatment, lol.

 

 

He does have a point. It is entirely possible that Tensor cores will change at a hardware level over time too. Anyway, the new AA is actually pretty exciting, so it will be interesting to see how it helps gaming performance while using AA.

1 minute ago, Brooksie359 said:

He does have a point. It is entirely possible that Tensor cores will change at a hardware level over time too. Anyway, the new AA is actually pretty exciting, so it will be interesting to see how it helps gaming performance while using AA.

That was one of the points, though: Tensor cores would need hardware changes to allow them to do more. I suspect RT cores are just modified Tensor cores anyway; if Tensor cores could do what RT cores can, it would make sense to just pack more of them into the Turing die for the performance increase.

 

Tensor cores, like all fixed-function units, are only able to be so fast because of how tailored they are to a specific function or subset of functions. That allows them to use a small area of the die, because far less logic is required; so the more general you make the Tensor cores, the more you risk bloating the die area or lowering their performance.

 

There's always a trade-off between flexibility and performance. I'm actually amazed there are three different core types in Turing at all.

