wkdpaul

Google Stadia to use AI to predict gamer's actions

Recommended Posts

This is going nowhere if it means generating optional frames that would need to be coded for every single game. Most generated frames rely on a time counter of some sort to produce each frame at the proper moment, so this would amount to a complete rewrite of modules like the physics engine, sound engine, etc. They are obviously just talking about an AI version of what we in the field call an anti-lag feature.

Link to post
Share on other sites
2 hours ago, WereCatf said:

In relation to this story, Google did mention some time ago one thing they do with the Stadia that should shave a few milliseconds off the latency: the controller itself is actually connected to the servers, instead of the signals from the controller first going to the console and then from the console to the servers -- it skips a couple of steps in-between compared to traditional controller-schemes. That's a perfectly reasonable and valid way of reducing some latency.

 

That kinda makes sense, but we're talking about maybe 4ms in the grand scheme of things assuming the user is on fiber and not dsl/cable/3g-umts/4g-lte.

 

In order of latency:

1. Distance from the data center. The only people who will have a good experience will be those who play only on their home realm (say SFO, NYC, MTL, or some other place with reasonably good connectivity and de-facto fiber availability to the equipment inside the home/unit.) You may still be looking at 6ms even if you're playing from a building next door to the data center.

2. Device hops inside your home. Some people have WiFi, some have wired connections, most of the time they're directly connected to the termination point inside the home, but then you have people who use their own routers, or have used Ethernet-over-coax/phone/power, wifi-extenders, and so forth that add latency twice.

3. Device hops between your home and your ISP backhaul. Some ISPs will send all their traffic to a central point in the city, or to a neighboring city, before sending it to another backhaul.

4. Lack of IPv6 support by ISPs, IPv4 carrier-grade NAT, and so forth add routing latency.

 

Quote

Tracing route to SITE_IN_SFO_AREA [2001:XXX:XXX:XXX::XXX]
over a maximum of 30 hops:

  1     1 ms    <1 ms    <1 ms  node-XXXX.ipv6.telus.net (Telus Gateway)
  2     8 ms     5 ms     7 ms  node-XXXXX.ipv6.telus.net 
  3     8 ms     9 ms     8 ms  XXXXX.bb.telus.com
  4     *        *        *     Request timed out.
  5    27 ms    27 ms    27 ms  XXXXXX (San Jose)
  6  2557 ms   993 ms   108 ms  XXXXXXX (Fremont) 
  7    26 ms    26 ms    26 ms  XXXXXXXXXX

So, for example, over IPv6 from Vancouver to a server in a data center in the SF Bay Area, the latency is 5ms just to get to Telus's equipment, 8ms at the carrier hotel, and then 27ms as soon as it's in the SF Bay Area. 27ms is roughly 1.6 frames of latency at 60fps.
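If you want to sanity-check that conversion yourself, here's a quick back-of-the-envelope sketch (plain Python, my own throwaway example, not anything official):

```python
# Back-of-the-envelope check: how many frames of delay does a given
# round-trip time cost at a given refresh rate?

def latency_in_frames(rtt_ms: float, fps: float = 60.0) -> float:
    frame_time_ms = 1000.0 / fps   # one frame at 60 fps is ~16.7 ms
    return rtt_ms / frame_time_ms

for rtt in (8, 27, 100):
    print(f"{rtt} ms at 60 fps = {latency_in_frames(rtt):.2f} frames")
# 8 ms -> ~0.48 frames, 27 ms -> ~1.62 frames, 100 ms -> 6 frames
```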

 

IPv4 to the exact same machine:

Quote

Tracing route to keenspot.com [66.220.2.19]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  (Telus Gateway)
  2     7 ms    12 ms    13 ms  (Telus)
  3     8 ms     7 ms     7 ms  (Telus)
  4     8 ms     8 ms     8 ms  XXXXX.bb.telus.com 
  5    11 ms    11 ms    11 ms  XXXXXX (Portland)
  6    26 ms    26 ms    26 ms  XXXXXXX (Palo Alto)
  7    56 ms    29 ms    57 ms  XXXXXXXX (Fremont)
  8    27 ms    28 ms    27 ms  XXXXXXXXXX

One additional Telus hop, different route. Note that the Fremont node in both traces is pretty laggy. They're different ports on the same core routers as far as I know. Also note that the Telus gateway has an actual IPv6 address, whereas the IPv4 address is the private class C 192.168.x.x address.

 

So... just how much latency is there from here to Google? (trace 8.8.8.8)

Quote

 7     8 ms     8 ms     8 ms  dns.google [8.8.8.8]

8ms; OK, so that's half a frame. But WHERE is that machine? It's very likely routed to an edge machine in Vancouver or Seattle. So let's try a more obvious address: google.com.

Quote

Tracing route to google.com [2607:f8b0:400a:808::200e]
over a maximum of 30 hops:

 

7     8 ms     8 ms     8 ms  sea15s11-in-x0e.1e100.net [2607:f8b0:400a:808::200e]

 

Tracing route to google.com [172.217.3.174]
over a maximum of 30 hops:

 

 7     8 ms     7 ms     7 ms  sea15s11-in-f14.1e100.net [172.217.3.174]

Not bad, but if this were fiber it would be 5ms.

 

5. People will want to play with their friends. So if you're on the Seattle node and your friend is on the Florida node, you can't play with each other (or if you can, you end up with hard-to-synchronize situations where you have 10ms and they have 20ms to their own realm, but there's a 100ms gap between your two realms.) Now imagine someone playing from Alaska, or on a 3G wireless connection in Australia, or a satellite connection in the middle of nowhere. You may be able to write off playing over satellite internet, but you can't cut off entire countries' rural regions just because you don't want to operate a realm in every city on the planet.

 

Like, I get where Google is coming from here, and I could see "serving the top 25% of internet users who live in the major cities" as being a short-term goal, but what's the end goal here? I don't think the business has thought this far ahead. They probably looked at how successful MMO games are and went, "hey, what if we can make it so everyone has the same experience and eliminate the cheat potential at the same time?" Or maybe it was as simple as "hey, make your games exclusive to us and they will never be pirated again." As if Let's Plays weren't a thing.

 

Using AI to try and predict how the player will act is likely to have unforeseen consequences for competitive gameplay. It might work to smooth over latency jitter, so that a player pushing forward on their analog stick is "still pushing forward" until told to stop. That's unlike traditional MMO games, where if you drop the connection your character just stands still and gets beaten up, or, in better games, has an auto-attack/defense the server still applies even if you don't react.
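To be clear about the kind of fallback I mean, here's a rough sketch (purely illustrative, not anything Google has published): hold the last known input for a short grace window when a packet is late, then give up and go idle like the old MMO behaviour.

```python
# Purely illustrative sketch (not Google's code): if a client input packet
# is late, keep applying the last known input for a short grace window
# instead of freezing the character, then fall back to idle.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InputState:
    move_x: float = 0.0   # analog stick, -1.0 .. 1.0
    move_y: float = 0.0
    firing: bool = False

class InputHoldPredictor:
    def __init__(self, grace_frames: int = 6):
        self.last_input = InputState()
        self.missed_frames = 0
        self.grace_frames = grace_frames  # how long to trust a stale input

    def on_frame(self, received: Optional[InputState]) -> InputState:
        if received is not None:            # a real packet arrived in time
            self.last_input = received
            self.missed_frames = 0
            return received
        self.missed_frames += 1
        if self.missed_frames <= self.grace_frames:
            return self.last_input           # "still pushing forward"
        return InputState()                  # give up: stand still, old-MMO style
```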

 

 

 

Link to post
Share on other sites

Going with this line of thinking, a game service that plays the game for you? That sounds like something Ubisoft would do as a microtransaction, so you could "save some time" if you got into the game much later than others.

 

Also, I read that some games could potentially have "less latency" than a beast of a rig sitting right next to you running the game locally?

 

I pressed x for doubt


Rock On!

Link to post
Share on other sites

Technical realities aside,  I find it amusing that people don't think they are predictable.

 

 

 

 


QuicK and DirtY. Read the CoC it's like a guide on how not to be moron.  Also I don't have an issue with the VS series.

Sometimes I miss contractions like n't on the end of words like wouldn't, couldn't and shouldn't.    Please don't be a dick,  make allowances when reading my posts.

Link to post
Share on other sites

So it'll be okay on average, awesome when it actually works, and absolute shit when it doesn't? Hmm, this reminds me of the whole frame-time thing a couple of years back with multiple GPUs, where you'd have an excellent average frame rate but every so often frame latency would shoot up and you'd have a worse experience than if the frame times were more consistent.

 

5 minutes ago, mr moose said:

Technical realities aside,  I find it amusing that people don't think they are predictable.

Yep, they just haven't experienced this yet: http://people.ischool.berkeley.edu/~nick/aaronson-oracle/ (which is super simple programmatically too, so a smarter solution would probably work even better)
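The oracle linked above is essentially just an n-gram frequency table; a toy version of the same idea (my own approximation, not the site's actual code) fits in a few lines:

```python
# Toy version of the "Aaronson oracle" idea: predict the next keypress
# ('f' or 'd') from what has historically followed the recent pattern.
from collections import defaultdict

class KeyOracle:
    def __init__(self, context: int = 5):
        self.context = context
        self.history = ""
        self.counts = defaultdict(lambda: {"f": 0, "d": 0})

    def predict(self) -> str:
        stats = self.counts[self.history[-self.context:]]
        return "f" if stats["f"] >= stats["d"] else "d"

    def observe(self, key: str) -> None:
        self.counts[self.history[-self.context:]][key] += 1
        self.history += key

oracle, correct, presses = KeyOracle(), 0, "fdfdffddfdfdfffdddfd"
for key in presses:
    correct += oracle.predict() == key
    oracle.observe(key)
print(f"guessed {correct}/{len(presses)} correctly")
```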

Link to post
Share on other sites

The only way for Google to reduce latency to an acceptable level for everyone is to put servers every few miles, which isn't going to happen. Maybe they should only target major cities.

Link to post
Share on other sites
5 minutes ago, mr moose said:

Technical realities aside,  I find it amusing that people don't think they are predictable.

 

 

 

 

Nobody is predictable to the accuracy required for this to be effective. Even if it does manage to predict correctly, it saves a negligible amount of latency, which in most cases won't outweigh the latency added by the hardware not being local. I laughed when I heard them say it would have less latency than a home gaming PC. There is no way they don't realize that for the majority of people that will not be the case, even when it works.

Link to post
Share on other sites
5 minutes ago, spartaman64 said:

the only way for google to reduce latency to an acceptable level for everyone is to put servers every few miles which isnt going to happen. maybe they should only target major cities

Yeah, I don't see how this would work anywhere but major cities anyway; in the US you'd need a fiber connection to even have a decent gaming experience with latency comparable to a gaming PC or console.

Link to post
Share on other sites
6 hours ago, BuckGup said:

I sort of doubt the extent to which this works. If they can predict a gamer's future actions, then how is that different from predicting a person's actions in the real world? If this is true, then we are screwed harder than most thought, as Google can now predict the future.

 

56 minutes ago, mr moose said:

Technical realities aside,  I find it amusing that people don't think they are predictable.

My goal now in life is to be predictably unpredictable.

Link to post
Share on other sites

 

5 hours ago, RejZoR said:

Lol, it's all BS that CAN'T be done. If NVIDIA could pull off something like this, they could render insane 3D scenes if they could literally predict time. But they can't do it even locally on their GPU, where they have nanosecond latencies. And somehow people think Google can do it remotely through a bunch of latency hoops measured in milliseconds. All I can say is LMAO. It can't be done. You can't interpolate non-existent data, and if you try you'll get horrendous artefacting on the output side. You can't "smooth out" millions of pixels whose existence you cannot predict in any way, shape or form.

 

There are tiling optimizations in remote-desktop software where only the sections that change are sent over. But that works when you have a static screen and something small changes on it, like a button state or text in a window. In 3D games, 99% of the pixels on screen change at least 60 times a second. The other 1% is static GUI, assuming it isn't transparent to any degree, which in 100% of cases it is. It's literally "player input, render the action, capture it, stream it back to the user to reflect their action". There is NO other way around this. Locally rendered games only send action data to the server and to other clients. Here, they are sending the entire video stream, something graphics cards require hundreds of gigabytes of throughput to materialize. So, yeah, lol.

Google would potentially have millions of people's worth of data to go on. 


Can you name a company with more gaming AI and data expertise than Google?


R9 3900x; 64GB RAM | RTX 2080 | 1.5TB Optane P4800x

1TB ADATA XPG Pro 8200 SSD | 2TB Micron 1100 SSD
HD800 + SCHIIT VALI | Topre Realforce Keyboard

Link to post
Share on other sites
2 hours ago, Brooksie359 said:

Nobody is predictable to the accuracy required for this to be effective. Even if it does manage to predict correctly, it saves a negligible amount of latency, which in most cases won't outweigh the latency added by the hardware not being local. I laughed when I heard them say it would have less latency than a home gaming PC. There is no way they don't realize that for the majority of people that will not be the case, even when it works.

Hence why I said "technical realities aside". We don't know exactly how accurate it has to be in order to reduce computation times. But more than that, my post was a general observation that people think they are truly random and unpredictable.


QuicK and DirtY. Read the CoC it's like a guide on how not to be moron.  Also I don't have an issue with the VS series.

Sometimes I miss contractions like n't on the end of words like wouldn't, couldn't and shouldn't.    Please don't be a dick,  make allowances when reading my posts.

Link to post
Share on other sites

Well if Stadia only has linear story-driven games then this program would be really effective.


Specs: Motherboard: Asus X470-PLUS TUF gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200Mhz CL16-18-18-36 2x8GB

            CPU: Ryzen 7 2700X @ 4.2Ghz          Case: Antec P8     PSU: G.Storm GS850                        Cooler: Antec K240 with two Noctura Industrial PPC 3000 PWM

            Drives: Samsung 970 EVO plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 ti Black edition @ 2Ghz

                                                                                                                             

Link to post
Share on other sites
6 hours ago, comander said:

 

Google would potentially have millions of people's worth of data to go on. 


Can you name a company with more gaming AI and data expertise than Google?

You can't predict the unpredictable. It's just physically impossible. It literally doesn't matter how many jiggabytes of user data they have or how much expertise in whatever. Netcode interpolation already looks bad when it wanders too far from what actually happens, and you seriously believe anyone can predict the exact state of over 2 million pixels (that's 1080p, which would be the expected baseline; 4K is over 8 million pixels)? Because guess what, they aren't only predicting motion data, it's the whole visual output: something graphics cards need hundreds of gigabytes of throughput just to materialize smoothly enough to be usable.

That means ANY deviation from what is actually happening leaves you with not just interpolated motion and actions but an interpolated image. Have fun looking at a smudged, blurry picture that's constantly trying to catch up to reality. Interpolation works for motion and actions alone because they can deviate a bit from the rendered graphics and you won't even notice unless it lags and desyncs entirely; with the whole image, you'll see it clearly. It just can't be done.

With regular games, input and rendering are local; motion and actions are transmitted between players and then merged locally. Stadia is local input and remote rendering, meaning every motion and action you make has to reach Google first, they render whatever you're doing and send it back, and it has to reflect exactly what you did, otherwise it'll be the most horrible experience ever. And that latency is exactly their problem: the round trip of your action to Google and back, with everything in between. It's literally why this game-streaming stuff never took off and never will, unless we get Google servers 100m from our homes with a max of 1ms latency and the bandwidth to stream any video quality. But then you're solving the problem with brute force, not with an overhyped, magical "AI" buzzword. And that's exactly the only thing that can make streamed gaming actually work. You cannot usably predict the state of 2+ million pixels to make up for input latency caused by distance.

Link to post
Share on other sites
22 minutes ago, RejZoR said:

You can't predict the unpredictable. It's just physically impossible.

This is literally one of the most predictable things there is, assuming you limit the scope adequately.
You don't have to be perfect. Just OKish some of the time. 

If you want to learn more about how things work this is a really easy course on ML. 

https://www.coursera.org/learn/machine-learning

 

22 minutes ago, RejZoR said:

Netcode interpolation already looks bad when it wanders too far from what actually happens, and you seriously believe anyone can predict the exact state of over 2 million pixels (that's 1080p, which would be the expected baseline; 4K is over 8 million pixels)? Because guess what, they aren't only predicting motion data, it's the whole visual output: something graphics cards need hundreds of gigabytes of throughput just to materialize smoothly enough to be usable.

They'd use an API.
https://en.wikipedia.org/wiki/Application_programming_interface

 

It's highly unlikely they'd try to pre-calculate the graphics; they'd more likely look at things like physics and doing some (not all) of the tasks that would otherwise run on the CPU, which would help get a bit of an edge since the GPU could be fed data a little earlier.

Think of this as being conceptually similar to speculative execution. You make a few guesses so you can keep on doing work without stopping. You drop the work you've done if your guess was wrong. You don't have to predict everything, just a handful of elements.

Speculative execution has existed for A LONG TIME. 

This is also a case where A LOT of things are either "basically the same as 1 frame ago" or "about what you'd expect based on current trends"
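To make the speculative-execution analogy concrete, here's a rough sketch of the idea (purely illustrative, every name here is made up, it's not anything Google has described): do the expensive per-frame work for the most likely next input ahead of time, and throw it away if the guess misses.

```python
# Illustrative sketch only -- every name here is made up. Speculation:
# start the expensive per-frame work for the predicted input, keep it
# if the guess was right, discard it and redo otherwise.

def simulate_physics(world_state: dict, player_input: str) -> dict:
    # Stand-in for the expensive CPU-side work (physics, AI, animation).
    return {**world_state, "last_input": player_input}

def predict_next_input(input_history: list) -> str:
    # Trivial predictor: assume the player keeps doing what they just did.
    return input_history[-1] if input_history else "idle"

def frame(world_state: dict, input_history: list, actual_input: str) -> dict:
    guess = predict_next_input(input_history)
    speculative = simulate_physics(world_state, guess)    # done before the input arrives
    if guess == actual_input:
        return speculative                                # hit: the work is already done
    return simulate_physics(world_state, actual_input)    # miss: drop it and recompute
```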


R9 3900x; 64GB RAM | RTX 2080 | 1.5TB Optane P4800x

1TB ADATA XPG Pro 8200 SSD | 2TB Micron 1100 SSD
HD800 + SCHIIT VALI | Topre Realforce Keyboard

Link to post
Share on other sites

now you can blame the AI when you die! :P 


I spent $2500 on building my PC and all i do with it is play no games atm & watch anime at 1080p(finally)...

Builds:

The Toaster Project! Northern Bee!

 

The original LAN PC build log! (Old, dead and replaced by The Toaster Project & 5.0)

Spoiler

"Here is some advice that might have gotten lost somewhere along the way in your life. 

 

#1. Treat others as you would like to be treated.

#2. It's best to keep your mouth shut; and appear to be stupid, rather than open it and remove all doubt.

#3. There is nothing "wrong" with being wrong. Learning from a mistake can be more valuable than not making one in the first place.

 

Follow these simple rules in life, and I promise you, things magically get easier. " - MageTank 31-10-2016

 

 

Link to post
Share on other sites

I have to think this tech would primarily be used in rhythm games, which are completely latency dependent and would probably be the hardest genre to put on Stadia otherwise because of latency concerns.
A game like, say, Guitar Hero could theoretically use this tech because the inputs are limited.

Or Crypt of the NecroDancer, a game where you move your character to the beat of the music. Again the inputs are limited, and even then it would probably be easy to predict the player's next action based on those limited inputs.

I doubt this tech could be used for the latest shooter.

Link to post
Share on other sites

This principle already gets used in CPUs. It's called branch prediction.

If anyone wants to read up about it: Branch prediction
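For a sense of how simple the classic scheme is, here's the textbook 2-bit saturating-counter predictor sketched out in Python (illustrative only, not how any particular CPU implements it):

```python
# Textbook 2-bit saturating-counter branch predictor:
# states 0-1 predict "not taken", states 2-3 predict "taken".
class TwoBitPredictor:
    def __init__(self):
        self.state = 2  # start at "weakly taken"

    def predict(self) -> bool:
        return self.state >= 2

    def update(self, taken: bool) -> None:
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

# A loop branch that is taken 9 times and then falls through:
p, hits, outcomes = TwoBitPredictor(), 0, [True] * 9 + [False]
for outcome in outcomes:
    hits += p.predict() == outcome
    p.update(outcome)
print(f"{hits}/{len(outcomes)} predictions correct")
```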

 

I can see it being useful in a lot of single-player games. Multiplayer, however, might be more difficult to pull off; it would probably need to be done server-side, and that's not in Google's control, so the game dev would have to implement it.

Link to post
Share on other sites
1 hour ago, comander said:

This is literally one of the most predictable things there is, assuming you limit the scope adequately.
You don't have to be perfect. Just OKish some of the time. 

If you want to learn more about how things work this is a really easy course on ML. 

https://www.coursera.org/learn/machine-learning

 

They'd use an API.
https://en.wikipedia.org/wiki/Application_programming_interface

 

It's highly unlikely they'd try to pre-calculate the graphics, they'd likely look into things like physics and doing some (not all) tasks that'd be done on a CPU which would help get a bit of an edge since the GPU would be able to be fed data a little earlier.

Think of this as being conceptually similar to speculative execution. You make a few guesses so you can keep on doing work without stopping. You drop the work you've done if your guess was wrong. You don't have to predict everything, just a handful of elements.

Speculative execution has existed for A LONG TIME. 

This is also a case where A LOT of things are either "basically the same as 1 frame ago" or "about what you'd expect based on current trends"

And how will they predict me moving the mouse to exact XY coordinates on screen at an exact moment, in combination with 10+ buttons on the keyboard? You can't predict the impossible, lol. There is literally NO AI, API or whatever that can ever predict anything like this, ever. And you'd have to predict that WITH the graphics, otherwise you're not really solving anything. The lag exists because Google has to wait for what you do and then feed you back a visual representation of your actions. But everyone thinks Google can somehow predict that. It's impossible. They can have all the user data in the world and infinite computing power and they still cannot predict user actions in real time.

 

Also, I know what an API and ML are. A machine can learn how you play in general; it cannot ever learn exactly how you'll act at any given moment in real time. That's just throwing buzzwords at things and expecting them to work miraculously. People have way too high expectations of AI, which in the majority of cases is nothing but a bunch of IF statements running in a loop, outputting data depending on what falls in line with the rules that are there. The learning part is just storing and feeding back output data so the loop can comb through what it has already encountered before. A programmer still needs to quite specifically create routines that do all this. But people hear the "AI" buzzword and think it's all-knowing, all-doing sentient intelligence. Stuff like this just doesn't exist yet and won't for many more years to come...

Link to post
Share on other sites

 

Apart from the fact that what Google is stating is total BS...

 

I think you are forgetting that the usual button-to-pixel lag on consoles is typically over 100ms, and that's on super-fast gaming monitors... But it still doesn't matter, as it's fine for most "gamers"; just look at how many console users there are, and they don't see anything wrong with it. I don't think we even need to mention that the USA's internet infrastructure is a mess.

 

Game streaming as video will be BS, at least for me, until we get some quantum internet with like 1ms encoders and almost no visual loss.

 

 

PS. 1080p 144hz anyday over 4k 60hz


7700k @ 4.7 GHZ on Gigabyte Z270-Gaming K3, Thermalright Macho Rev B, 16GB Crucial 3000MHZ CL15 , Gigabyte RTX 2070 Super ~1980Core, Windows 10

Link to post
Share on other sites

Erm... what about mechanics changing based on the action taken?

AKA

If I press right I walk into a collision box that triggers a cutscene; if I walk left I don't. You'd need fully rewindable games, and that's not something anyone is coding for.

You'd need multiple instances running in parallel for every frame. This sounds like a massive resource drain for very little effect, the ultimate diminishing return: let's build a server so I can guess stuff right sometimes.

10 hours ago, mr moose said:

Technical realities aside,  I find it amusing that people don't think they are predictable.

read above.

 

Link to post
Share on other sites
1 hour ago, RejZoR said:

And how will they predict me moving the mouse to exact XY coordinates on screen at an exact moment, in combination with 10+ buttons on the keyboard? You can't predict the impossible, lol. There is literally NO AI, API or whatever that can ever predict anything like this, ever.

 

This is a relatively small amount of the computation that needs to be done. It can be skipped. 
Why do you think this component is material versus doing things like calculating scene physics?

 

1 hour ago, RejZoR said:

And you'd have to predict that WITH the graphics, otherwise you're not really solving anything. The lag exists because Google has to wait for what you do and then feed you back a visual representation of your actions. But everyone thinks Google can somehow predict that. It's impossible. They can have all the user data in the world and infinite computing power and they still cannot predict user actions in real time.

You don't have to do this. If there are 10 steps to be done before rendering a frame, you can calculate the most time-consuming 5 in advance.

1 hour ago, RejZoR said:

People have way too high expectations of AI, which in the majority of cases is nothing but a bunch of IF statements running in a loop, outputting data depending on what falls in line with the rules that are there. The learning part is just storing and feeding back output data so the loop can comb through what it has already encountered before. A programmer still needs to quite specifically create routines that do all this. But people hear the "AI" buzzword and think it's all-knowing, all-doing sentient intelligence. Stuff like this just doesn't exist yet and won't for many more years to come...

While that was true around 50 years ago, and there's still some use in machine learning for what are effectively overglorified if-then statements (e.g. random forests, gradient-boosted trees, etc. - though it's inaccurate to think of them as running in a loop, since each tree can be evaluated in any order, and the way the models are built is RADICALLY different: people are not writing the if-then statements by hand), the cutting-edge stuff generally relies on neural networks, which work quite a bit differently. https://machinelearningmastery.com/gentle-introduction-xgboost-applied-machine-learning/

If you want to criticize neural nets, you can say they're overglorified curve fitting - NOT if-then statements.

Here's a visual example of a VERY VERY VERY VERY simple neural network running on a very small amount of data that's very simple. 
https://playground.tensorflow.org/
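To make the "curve fitting" framing concrete, here's a toy one-hidden-layer network in NumPy (my own throwaway example, nothing to do with Stadia) fit to a sine wave with plain gradient descent:

```python
# Toy example of "overglorified curve fitting": a one-hidden-layer network
# trained with plain gradient descent to fit sin(x). Nothing Stadia-specific.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32
W1 = rng.normal(0.0, 0.5, (1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    h = np.tanh(x @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - y                           # gradient of mean-squared error
    dW2 = h.T @ err / len(x); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # backprop through tanh
    dW1 = x.T @ dh / len(x);  db1 = dh.mean(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(np.mean(err ** 2)))  # shrinks as the "curve fit" improves
```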

 

 

 

1 hour ago, RejZoR said:

A programmer still needs to quite specifically create routines that do all this.

A lot of the work done these days ends up being data collection/engineering, feature engineering and hyper-parameter tuning. Some work might also be done to ensure things like numerical stability (this was legitimately a HUGE issue 10 years ago) though I can't speak to that issue very well. Same applies to working on computational efficiency (without tanking model performance)... Initialization is also fun.
 

1 hour ago, RejZoR said:

But people hear the "AI" buzzword and think it's all-knowing, all-doing sentient intelligence. Stuff like this just doesn't exist yet and won't for many more years to come...

No one who knows their stuff is going to debate that. You can build an "appliance" that does one thing and does it well (like a toaster or microwave), and that's about it here and today. The best we can hope for in the mid-term would be an appliance that can choose between appliances. That would not be creativity. The closest thing to creativity would be "trying random stuff during simulations and seeing what works".


R9 3900x; 64GB RAM | RTX 2080 | 1.5TB Optane P4800x

1TB ADATA XPG Pro 8200 SSD | 2TB Micron 1100 SSD
HD800 + SCHIIT VALI | Topre Realforce Keyboard

Link to post
Share on other sites
13 hours ago, mr moose said:

Technical realities aside,  I find it amusing that people don't think they are predictable.

For once, I agree with you. I, personally, know full fucking well I am entirely predictable in a billion ways when it comes to my actions. My hubby would definitely call me predictable, if one was to ask him!

 

Though, my movements aren't all that predictable; I am clumsy and I have some nervous-system issues making many of my movements twitchy and jittery, plus my hands and fingers tremble constantly -- it'd be interesting to see how much of my finger-movements, for example, could actually be predicted and what the curves I mentioned in an earlier post would look like for me. Might have to add that on my todo-list.


Hand, n. A singular instrument worn at the end of the human arm and commonly thrust into somebody’s pocket.

Link to post
Share on other sites
