
Google's DeepMind AI AlphaGo Hands Historic Second Consecutive Loss to World Champion Go Player

Curufinwe_wins

IF this shows up on WAN Show (not hoping or anything, just covering bases...)

Here is a rough guide to pronunciation: "kouru-fin(hard I)-way".

Or you can just say Anthony...

 

 

Sources:

http://www.theverge.com/2016/3/9/11184362/google-alphago-go-deepmind-result

https://gogameguru.com/alphago-defeats-lee-sedol-game-1/ <- Good short commentary on match 1

https://gogameguru.com/alphago-races-ahead-2-0-lee-sedol/ <- Good short commentary on match 2

http://www.britgo.org/intro/intro2.html <- Introduction to the rules of Go

http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html <- DeepMind scientific journal article describing how AlphaGo works.

 

Introduction:

The game of Go is one of the oldest in the world. It originated at least 2,500 years ago in China (legend puts it as far back as 4,000 years). It has a very simple ruleset yet almost limitless possibilities. Played on a 19x19 grid, early turns offer around 360 legal moves. A computer wishing to examine every possibility in just the first 20 moves (assuming asymmetric play) would need to evaluate roughly 2.82*10^54 positions. This rapid divergence means the brute-force methods behind many AI programs, including those for chess, cannot work here; there is simply not enough time in the world. As a result, it had widely been believed within the AI community that a professional-level Go program was at least 10 years away.
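As a sanity check on that scale, here is a quick upper bound on the number of distinct 20-move openings. It ignores captures and symmetry, so it is only an illustration (the 2.82*10^54 figure above presumably counts somewhat differently), but either way the number is hopeless for brute force:

```python
import math

# Rough upper bound on distinct 20-move openings on the 361-point board:
# the first move has up to 361 choices, the second up to 360, and so on
# (ignoring captures and board symmetry, so this only illustrates scale).
openings = math.prod(361 - i for i in range(20))
print(f"{openings:.2e}")  # just shy of 10^51
```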

 

Some Go Snippets:

  • Generally speaking, it is held that a player has around 200 possible moves each turn.
  • Games generally last 150-250 total moves, over the course of 4-6 hours.
  • Games very rarely end in a tie, and most often end in resignation rather than being played to completion.
  • Keeping track of the score/lead mid-game is arguably the most difficult aspect of the game.
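Plugging the first two figures above into the usual branching^length game-tree estimate shows why brute force is hopeless (a rough illustration only; real games prune enormously):

```python
import math

# ~200 legal moves per turn and games of roughly 200 moves give a naive
# game-tree size of branching**length, far too large to even represent,
# so we work with its base-10 logarithm instead.
branching, length = 200, 200
digits = length * math.log10(branching)  # log10 of 200**200
print(round(digits))  # a game tree on the order of 10**460
```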

 

Enter AlphaGo, from DeepMind (Google):

 

Quote

Deepmind notes that while chess has an average of around 20 possible moves for a given position, Go gives the player about ten times as many options, resulting in a massively higher branching factor that is far harder for any AI to deal with.

 

Appropriately enough given DeepMind’s parent company, the solution requires a more efficient search algorithm. But this isn’t much use for Go without an improved ability to evaluate the game itself, which is the biggest challenge for computers — it’s much harder for them to work out who’s winning than it is with a game like chess, for example. AlphaGo is powered by two deep neural networks that guide its machine learning and search techniques, arriving at the best move by narrowing and shortening the tree diagram of possibilities. And while the program initially learned to play Go by being fed data from historical real-world matches, it’s since been trained further by playing thousands of matches against itself, continually reinforcing its ability. "I think it’s very impressive," says Murray Campbell, an IBM research scientist who was one of the principal creators of Deep Blue. "They’ve clearly advanced the state of the art."
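The "narrowing and shortening" of the tree described in the quote can be sketched in a toy way. Everything below is invented for illustration (a trivial stand-in game with stub "networks"), not DeepMind's actual method: a stand-in policy narrows breadth by keeping only the most promising moves, and a stand-in value shortens depth by scoring a position instead of playing the game out.

```python
# Toy game: players alternately add 1-9 to a running total; higher
# totals favor the maximizing player. The two "networks" are stubs.

def policy(moves, k=3):
    # Stand-in policy network: prune breadth to the k largest moves.
    return sorted(moves, reverse=True)[:k]

def value(total):
    # Stand-in value network: squash the total into a [0, 1] "win chance".
    return min(max(total / 100, 0.0), 1.0)

def search(total, depth, maximizing=True):
    # Depth-limited minimax: value() cuts the depth, policy() the breadth.
    if depth == 0:
        return value(total)
    scores = [search(total + m, depth - 1, not maximizing)
              for m in policy(range(1, 10))]
    return max(scores) if maximizing else min(scores)

# Pick the first move by searching 3 plies ahead (opponent replies next).
best = max(range(1, 10), key=lambda m: search(m, 3, maximizing=False))
```

In the real system both stubs are deep networks trained on human games and self-play, and the search is Monte Carlo tree search rather than plain minimax; the pruning-plus-truncation structure is the point of the sketch.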

Last year, AlphaGo did the unthinkable. It defeated the European champion, Fan Hui (then ranked #663), 5-0, the first time an AI had even approached professional-level prowess. The DeepMind program then set out for a more challenging match: taking on one of the top players in the world, 18-time world champion Lee Sedol. The winner of the 5-match series walks away with 1 million USD (if DeepMind wins, it goes to charity).

 

Such was the general consensus that Lee Sedol made this statement before the first match:

Quote

"I believe human intuition is too advanced for AI to have caught up yet," he said, but noted that since learning more about AlphaGo’s capabilities he’s gotten slightly more nervous and is no longer convinced he’ll win the series 5-0. "I don’t make mistakes often," he said, "but if I make any single mistake as a human being I might lose."

 

 

Indeed, the matches thus far have been very interesting. In match 1, both Lee and AlphaGo made moves that were generally considered mistakes, and AlphaGo forced a relatively fast match in its favor (see Link 2). Even making it this far against a human opponent was considered a dramatic achievement.

 

Quote

"I don’t regret accepting this challenge," said Lee. "I am in shock, I admit that, but what's done is done. I enjoyed this game and look forward to the next. I think I failed on the opening layout so if I do a better job on the opening aspect I think I will be able to increase my probability of winning."

 

Going into match 2, Lee was much less confident; he claimed a 50-50 shot at winning. He never had a chance. AlphaGo played a phenomenal game of Go, with several moves so unorthodox that commentators were itching to call them plain mistakes. According to those same commentators, Lee played effectively a perfect game, and yet was behind the entire time (see Link 3).

 

Quote

"Yesterday I was surprised but today it's more than that — I am speechless," said Lee in the post-game press conference. "I admit that it was a very clear loss on my part. From the very beginning of the game I did not feel like there was a point that I was leading." DeepMind founder Demis Hassabis was "speechless" too. "I think it's testament to Lee Se-dol's incredible skills," he said. "We're very pleased that AlphaGo played some quite surprising and beautiful moves, according to the commentators, which was amazing to see."

 

When a journalist asked Lee what he thought AlphaGo’s weaknesses were, he quipped “I guess I lost the game because I wasn’t able to find any weaknesses.”

In response to the same question, Hassabis explained that while DeepMind can estimate AlphaGo’s strength internally, “we need somebody of the incredible skill of Lee Sedol to creatively explore and see what weaknesses AlphaGo maybe has, so we can see them.”

“That’s why we’re having this match, to find out,” said Hassabis.

 

 

Match 3 is set to begin Saturday night at 9:30 pm (CDT).

 

 

 

 

EDIT: It must be noted that AlphaGo is designed to maximize win probability, not margin of victory, so it will favor a position where it wins by 0.5 points with 95% probability over one where it wins by 15 points with 90% probability. This behavior may be a significant factor in the rapid changes in aggression seen in the program.
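Using the (hypothetical) numbers from that example, the selection rule is just an argmax over win probability, with margin ignored entirely:

```python
# Two candidate lines with hypothetical evaluations: the move chooser
# ranks purely by win probability and never looks at the margin field.
candidates = [
    {"line": "safe",       "win_prob": 0.95, "margin": 0.5},
    {"line": "aggressive", "win_prob": 0.90, "margin": 15.0},
]

choice = max(candidates, key=lambda c: c["win_prob"])
# The 0.5-point win at 95% is preferred over 15 points at 90%.
```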

 

 



Match 1:

 

Match 2:

 

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DYI FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


:D 

Google knows.™

NEW PC build: Blank Heaven   minimalist white and black PC     Old S340 build log "White Heaven"        The "LIGHTCANON" flashlight build log        Project AntiRoll (prototype)        Custom speaker project


Ryzen 3950X | AMD Vega Frontier Edition | ASUS X570 Pro WS | Corsair Vengeance LPX 64GB | NZXT H500 | Seasonic Prime Fanless TX-700 | Custom loop | Coolermaster SK630 White | Logitech MX Master 2S | Samsung 980 Pro 1TB + 970 Pro 512GB | Samsung 58" 4k TV | Scarlett 2i4 | 2x AT2020

 


And then google robots enslaved the human race

Desktop - Corsair 300r i7 4770k H100i MSI 780ti 16GB Vengeance Pro 2400mhz Crucial MX100 512gb Samsung Evo 250gb 2 TB WD Green, AOC Q2770PQU 1440p 27" monitor Laptop Clevo W110er - 11.6" 768p, i5 3230m, 650m GT 2gb, OCZ vertex 4 256gb,  4gb ram, Server: Fractal Define Mini, MSI Z78-G43, Intel G3220, 8GB Corsair Vengeance, 4x 3tb WD Reds in Raid 10, Phone Oppo Reno 10x 256gb , Camera Sony A7iii



why is the video so choppy or is that just comcast


A great achievement in bot design for games, but it's a bit behind the times (a few thousand years). I'd rather see deep neural networks applied to more relevant titles; more intelligent non-human opponents would make for much deeper gameplay in single-player 3D RPGs.

 

Also, it is technically just a demo of the new neural-network-driven Google search engine, specialized for Go, with a really good "I'm Feeling Lucky" button, since it operates on a huge dataset (like the search engine) and is also hand-tuned. For it to be useful in modern gaming applications, they would have to slim it down a ton and generalize it to much more complex games.

 

One other interesting thing: I'm pretty sure a non-Go-playing expert computer programmer could beat AlphaGo if they could look into its resulting model and study its dataset and programming, just as AlphaGo got to study all previous human player moves and received tons of human-mediated automated training. All you need is to find a bug in the software and exploit it in play, and given how buggy Google's software is, I'm sure there are tons of vulnerabilities that a normal Go player would never try but that would magically cause the software to fail. Then of course the programmers would patch it, and it would turn into a human game of finding programming bugs instead of playing Go. Then comes the real question: how do you program a self-patching program? If you could do that, you would actually have a shot at a general-purpose AI, if it worked well.

 


13 hours ago, Roawoao said:

snip

 

Are you kidding me? You have no idea how insanely impressive this is...

 

NO ONE IN THE ENTIRE AI COMMUNITY THOUGHT THIS WAS POSSIBLE FOR ANOTHER 10 YEARS!

 


14 hours ago, Roawoao said:

snip

I don't understand why you would bring in games of much larger scale (such as those with more AIs)... An AI like this takes a huge amount of processing power, from what I imagine. You wouldn't be able to scale it into an actual video game with more players and more game mechanics.

 

This doesn't compare to a search engine. Unlike a search engine, this AI actually has to think about future moves and adapt to its opponent's moves instead of just landing on one possible result.

 

Also, I highly doubt the programmers themselves would be able to compete against their own AI creation, unless they purposely inserted a bug that made it easier to win. Your statement also implies that the developers of this AI could defeat the world-champion Go player, which I think is unlikely.


1 hour ago, dragosudeki said:

snip

You don't understand how the current search engine works: it also thinks about your previous moves, even where you are, what others are searching, the news, history, emails, ...

 

The one result you search for is not the only thing the search engine handles; it has to consider your future searches and which ads to serve you, and predict your location, interests, mood, ...

 

To top it off, Google now uses deep neural networks in its search algorithm. I don't think you understood what I meant by a programmer beating the AI: any expert with access to the dataset and program code could beat the AI by exploiting a bug in its programming. No software is perfect, and the AI cannot patch its own code (yet).

http://www.wired.com/2016/02/ai-is-changing-the-technology-behind-google-searches/

 

I turned on Google Location History, and it went back so far that, even before I had a cellphone, it predicted where I was visiting based solely on my search inputs and asked if it was right. Although far less accurate than GPS/Wi-Fi positioning, predictive location based on search history worked pretty well.

 

Compared to Go, the job of the search-engine AI is far more complex: literally infinite inputs, with no clear-cut rules or game pieces. That is why the Go system is just a tech demo of how well the same AI engine works on a simple game.

 

1 hour ago, Curufinwe_wins said:

Are you kidding me? You have no idea how insanely impressive this is...

 

NO ONE IN THE ENTIRE AI COMMUNITY THOUGHT THIS WAS POSSIBLE FOR ANOTHER 10 YEARS!

 

Then the entire AI community was clearly wrong; why would a game of such simplicity be impossible? Not only that, development on simplified Go games was progressing smoothly, and I don't see how it was unexpected that this would eventually work on a full game vs. a human. "Another 10 to 15 years" is code for "never," and I doubt that was the case given the existing successes.


Keep watching this space; we all know what it's like playing vs. the computer. Give it some time.

I would expect this comp to kick off with AlphaGo stringing together a number of solid wins and looking completely untouchable.

The brute-force computation it can bring to bear must seem pretty overwhelming, even for a grand champ. It's a different style of play when you are up against a machine.

But I would also expect that over a longer-term set of games, the human grand champ would start to get a feel for AlphaGo and slowly get on top of it. With enough practice, beating AlphaGo might even become trivial, once the human "got in the zone," so to speak.

Let's see what happens.

 

PS: Please, someone tell me that AlphaGo is actually written in Go (the language); now that would be twice as cool :)


That's super impressive.

 

Just a wild guess here: given that AlphaGo learned to play Go by learning from earlier matches, wouldn't it be possible to beat the computer by using very uncommon moves?

 

Also, are there other thinking games left where AI doesn't beat a human player?

Why is SpongeBob the main character when Patrick is the star?


1 hour ago, Roawoao said:

snip

You are quite mistaken about what is and isn't difficult for an AI to deal with.

The more complicated a game's rules, the easier it is for an AI to win/be coded (exploiting rules and/or confining game-play greatly reduces branching).

The clearer it is who is winning/losing at any given point in time, the easier it is for an AI to win/be coded (self-evident).

The more definite the win condition, the easier it is for an AI to win (a game that is almost never played to completion is obviously harder to evaluate deterministically in the first place; this is more of an up-front coding challenge).

The fewer possible game combinations and turn permutations, the easier it is for an AI to win (self-evident).

The less temporally constrained the game (in general, with some obvious counter-examples like reflex shooting), the easier it is for an AI to win.

 

Go is in many ways the ultimate challenge in this respect. The game has only two real rules, and effectively infinite combinations. It has definite time limits that force time-management criteria that are not self-evident (you don't just allot 1/200th of the total allowable time per turn). In many games it is almost impossible to know in real time who is winning or losing (and games almost always end in resignation, without official counting taking place). In fact, if you watch the games, you will notice that the live commentary for the two games often overestimated Lee Sedol's position relative to AlphaGo, whereas the after-the-fact commentary had AlphaGo almost always in the lead.

 

DeepMind has already shown itself to be, in many respects, the world's most sophisticated AI. Its base code (of intertwined deep neural networks) is going to be repurposed for different applications (it is already contracted for a UK health-care system), while an iteration will be released for continued Go study by a different group.

 

 

Indeed, programming an impossible-to-beat AI for an RPG (or any more conventional computer game) is almost trivial, because the AI can be allowed perfect information while the human is only shown a specific surface of information. Indeed, the most human-like AIs work to induce human errors (and delays) into their play, because bots allowed perfect actions at every interaction, with perfect information, would be unbeatable and uninteresting. It is obviously a much less trivial task to code an AI that is not given perfect information internally and only sees the same information as human players (but even this is done rarely; more effort goes into making it appear as if the AI were acting on humanly apparent information).

 

It's laughable to compare this to the other efforts made in Go. The next-best Go programs compete only at a 4-5d level and got trounced by even low-level professionals (hence it was ground-breaking that AlphaGo won 5-0 against a 2p player without handicaps). AlphaGo on a single computer has won over 99.8% of its games against other Go AIs (even those on distributed networks). The distributed version of AlphaGo has in turn won 72% of its games against the single-computer variant (a huge testament to its efficient use of computing power).
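For a rough sense of what those win rates imply, the standard logistic Elo model converts a head-to-head win probability into a rating gap (a back-of-the-envelope conversion of my own, not a DeepMind figure):

```python
import math

def elo_gap(win_prob):
    # Logistic Elo model: p = 1 / (1 + 10^(-d/400))  =>  d = 400*log10(p/(1-p))
    return 400 * math.log10(win_prob / (1 - win_prob))

print(round(elo_gap(0.72)))   # distributed vs. single-machine AlphaGo: 164
print(round(elo_gap(0.998)))  # single-machine AlphaGo vs. other Go AIs: 1079
```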

 

Indeed, this is why Go was believed to be such a far-future endeavor. No other program had anywhere near the computational efficiency needed to compete at a high level, so it was generally believed to be impossible with current supercomputers. DeepMind's novel approach (others have used neural networks and machine learning, but these things are obviously not created equal) has broken through that barrier in a fashion no one previously anticipated.

 

50 minutes ago, Steveoc64 said:

snip

AlphaGo really can't compute that deep relative to the length of the game. Indeed, a 9p Go player evaluates, albeit heuristically, much deeper than AlphaGo can, thanks to well-established orthodoxy (which, together with intuition, dramatically reduces branching factors). It is generally held that 9p players evaluate 40-60 turns in advance. Even with its huge computational power, AlphaGo can only search to a relatively low depth of 20 or so moves (according to DeepMind). The real difficulty is understanding that the AI does not always seek to further its standing in the match (humans naturally fight for larger and larger margins of victory for a variety of reasons); instead, the AI will make moves that seem to help its opponent's relative position but give it a higher probability of winning (even if by a 0.5-stone margin).

 

Consider the NFL: when teams get ahead, they tend to take their foot off the gas. Now consider an AI that is dynamically on the throttle because it sees future situations where it may or may not be ahead (and who is actually winning at the moment is not apparent).

 

This was a huge issue for Lee Sedol in the second game (and likely would be for any 9p player). Whenever AlphaGo breaks established orthodoxy, it forces Lee to completely re-evaluate his heuristic methodology.

 

It should be noted, however, that AlphaGo DOES NOT factor in shock value or any other psychological/physiological difference between itself and its opponent. It assumes it is playing against itself at all times.


17 minutes ago, patrick3027 said:

snip

It has also studied thousands of human games. But it doesn't follow past games when deciding its own actions; it used them as a narrowing mechanism to build its own tree-branching priors (the policy network).

 

Here is the paper about it, if you would like to read it:

 

http://www.nature.com/nature/journal/v529/n7587/full/nature16961.html


27 minutes ago, Curufinwe_wins said:

snip

You don't seem to understand the difference between rule complexity and being well defined. A complex game, in my mind, can include open-world, never-ending, massively multiplayer games. To date, no AI can effectively play these games in a realistic fashion; I want increased immersion and practical applications of AIs in gaming, not tech demos. How can you train an AI to win a game where there is no winning condition, or even a score or rank?

 

A cheating AI can always beat a human. I want an AI that can make a game's NPCs react realistically, enable dynamic story generation, and provide non-cheating opponents that do not rely on perfect knowledge, to make modern games better and more enjoyable. A cheating human can always beat an AI, even at Go. It's a null solution to say modern game AIs are simple to make by just cheating.

 

It is laughable that you think AIs must win. I want AIs to make games better; aimbots and map-hacking AIs are not even remotely along my line of thought.

 

Go is a simple, ancient game. I want DNNs applied to open-world games to make them more immersive, not to flood them with cheating bots.

 

Finally, DNNs are not novel; they have been around for many years, so predicting their use in Go is unsurprising and not some groundbreaking revelation.


1 minute ago, Roawoao said:

snip

Complexity vs. constraint? I understand it quite well, and as my post was getting at, to a first approximation, the more complex a game's rules are, the more constrained it is.

 

Go is a game of perfect information (and perfectly non-random, i.e. no luck). You cannot cheat at Go in the fashion a perfect-information AI "cheats" at a shooting game, for example.

This is a key feature of games of perfect information. Go is perfectly constrained and yet as simple as any game comes. Open-world games are often the exact opposite: convoluted rules make specific instances trivial to code, but the overarching game is effectively unconstrained. Both are hard to deal with, for opposite reasons.
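A minimal sketch of why perfect information plus tight constraint makes a game searchable at all: exhaustive minimax over misère Nim, a toy stand-in of my own choosing (AlphaGo itself uses neural-network-guided Monte Carlo tree search, not plain minimax; the point is only that a fully visible, fully constrained state space can be solved by search):

```python
from functools import lru_cache

# Misère Nim, single heap: players alternately take 1-3 stones;
# whoever takes the last stone loses. Perfect information plus a
# tiny, fully constrained state space lets us search exhaustively.
@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win with `stones` left."""
    if stones == 0:
        return True  # the opponent just took the last stone and lost
    return any(not wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# With 1 stone left you must take it and lose; with 6 left you can
# take 1 and hand your opponent the losing 5-stone position.
print(wins(1), wins(6))  # False True
```

The same exhaustive approach is exactly what the branching factor of Go rules out, which is why AlphaGo needed learned evaluation instead.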

 

 

Your concept of an AI for gaming is to deliberately hamstring it for your own pleasure.

 

How, then, do you appropriately hamstring it for players of different skill levels, when what a top player would consider a useful or equivalent AI would only carry a weak player? On the contrary, many current AIs (the Dragon Age games are a great example) are a very good simulation of bad-to-mediocre gamers (who blatantly fail to understand mechanics) and only get in the way of good ones. That said, DA:O is perhaps the best game I have ever seen at allowing customization of AI intelligence by assigning convoluted AI rules, which, as I mentioned, makes sense: the added rules constrain the AI, making it more effective...

 

 

 

So the important distinction for an open-world game is that the majority of the time the AI doesn't have to be "gaming". It can follow the majority of the time (improving following behavior is always a good idea; if it led, that would merely be hand-holding and ruin the game itself), only acting on its own in fight events or scripted behaviors.

 

Unless you want the AI to do the game for you?

LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Note 10+ - Surface Book 2 15"

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 


22 minutes ago, Curufinwe_wins said:

snip

Complexity does not equal more constrained. Say, for example, I have a flame-simulation game: the rules are incredibly complex, and the goal is to make pretty flames judged by a panel of humans. This has no clear winning condition, and the rules don't help.

 

Perfect knowledge of the game state as it appears to you is different from what you describe for an FPS or RPG using map-hacks and aimbots, which access information not available to players. If a human expert programmer had perfect knowledge of the state, code, and dataset inside AlphaGo, they could exploit its bugs and always win.

 

Open-world games being hard to deal with is exactly the problem I'm after, not cheating in an FPS.

 

Again, you are stuck on the idea that the AI must win and that there is some scoring function. I have a question: have we made AIs that can make games more fun and immersive?

 

It is a far more difficult AI problem, and AlphaGo is useless for this general-purpose problem of, say, making an NPC react naturally to your gameplay, where it isn't even an active participant, just a background NPC acting dynamically.

 

Basically, the core problem with existing DNN-based AIs is that they need very good feedback to work. I want AIs that need no feedback and just work, with no well-defined goal or end point, in extremely complex games.

 

It would break immersion if NPCs asked you whether their response was fun enough on a scale of 0 to 9, and I doubt that number would be very useful even if they did ask.
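The "feedback" point here is the crux of reinforcement learning: systems like AlphaGo's self-play stage hinge on a scalar reward signal. A minimal epsilon-greedy bandit (a toy sketch of my own, not AlphaGo's actual training loop) shows how behavior improves only because each action returns a number to learn from; delete the reward line and nothing can improve:

```python
import random

random.seed(0)

# Two actions with hidden expected payoffs; the learner only ever
# sees the sampled reward -- exactly the feedback being discussed.
true_means = [0.3, 0.7]
estimates = [0.0, 0.0]
counts = [0, 0]

for step in range(2000):
    # epsilon-greedy: mostly exploit the best estimate, sometimes explore
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < true_means[action] else 0.0
    counts[action] += 1
    # incremental average update of the action-value estimate
    estimates[action] += (reward - estimates[action]) / counts[action]

print(f"estimates: {estimates}, pulls: {counts}")
```

With no reward (or an "is this fun?" signal nobody can supply), the update step has nothing to chase, which is a fair summary of why open-ended "be immersive" objectives are so much harder than Go's win/loss signal.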


23 minutes ago, Roawoao said:

snip

 

Did you not even read what I said? "To a first approximation." I even agreed that an open-world RPG, for example, is very complex in rules but generally unconstrained.

 

I also mentioned three posts ago the obvious issue of not knowing what the winning conditions are (or, even if you know them, being unable to evaluate them). Even a beauty pageant has a winning condition, but that condition is not well defined (as you yourself stated).

 

No, a human expert with AlphaGo's source code does not automatically win (its probabilistic behavior is impossible to predict accurately enough for full path knowledge). DeepMind has tried. Besides, there simply isn't enough time: you get 2 hours to play the game, end of story.

 

Sure, it is POSSIBLE that a fatal flaw exists, but (a) that is not likely from a machine-learning perspective (this is not a deterministic program; do you honestly believe the thousands of games it played against itself were all repetitions of each other?), and (b) in the sheer amount of time you would spend trying to break the program, you would have lost over and over again.

 

BTW, the term "perfect knowledge" in game theory (which you clearly have no experience in, no offense meant) does not mean you know everything about your opponent and how he thinks. It means all of the Shannon information entropy of the GAME is known, or can be known, by all players: the full current and past states of the game, with no hidden information.

 

The obvious example of an imperfect-information game is virtually every card game ever, which almost always relies on your knowledge of the game pieces being incomplete (and generally different from your opponent's knowledge).
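That hidden-information gap can even be quantified. As an illustration of my own (hold'em hole cards are just a convenient example, not from the thread): the Shannon entropy of an opponent's unseen hand, from your point of view, is log2 of the number of hands consistent with what you can see:

```python
import math

# You see your own 2 hole cards, so 50 of the 52 cards are unknown.
# The opponent's hand is one of C(50, 2) equally likely combinations.
hidden_hands = math.comb(50, 2)
entropy_bits = math.log2(hidden_hands)

print(f"{hidden_hands} possible hands, ~{entropy_bits:.1f} bits of hidden info")

# In Go, by contrast, every stone is visible to both players:
# exactly one state is consistent with what you see, so the
# hidden-information entropy is log2(1) = 0 -- 'perfect information'.
go_hidden_bits = math.log2(1)
print(f"Go hidden information: {go_hidden_bits} bits")
```

Zero hidden bits is the precise sense in which Go cannot be "map-hacked": there is no extra game state for a cheater to reveal.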

 

 

Fun is a non-determinate utility; it is beyond the useful realm to try to make an AI that is fun. You can program an AI to do things that are generally considered fun, but that cannot and will not apply to everyone, and a machine will not do them because they are fun.

 

The problem you want solved is (just as you say) beyond the realm of current possibility, and dramatically off topic from the point that this is an amazing achievement for mankind (although thanks for the deflection).

 

 

EDIT: For obvious reasons, the problem you put forth really is the final problem of AI (conventionally speaking). I would personally consider sentience to begin only at self-awareness and the desire for self-preservation, but that is a discussion for another time.

 

Your claim that this is "behind the times" is like being unimpressed by CP-1 because the sun has been fusing hydrogen for billions of years...



Next: AI dominating StarCraft in every way, that being the AI's simulation test for actual world domination ;)

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


1 hour ago, Curufinwe_wins said:

snip

Even your first approximation is wrong, as complexity does not even remotely equal being constrained. While you could have a complex game with extremely constrained results, you could also have a similarly complex game with few constraints (a complex linear game vs. a complex open-ended game). There is no relation between a game's complexity and it being easier to "solve".

 

You didn't exactly read what I was writing: if a computer expert had perfect knowledge of AlphaGo's entire state, code, training data, and so on, they would always be able to beat it, because they would just need to find a bug, simulate it offline, and then win in the real game. It would never know that you know all its bugs, nor would it be able to fix them.

 

When you say it is merely possible that a fatal flaw exists, you clearly show your lack of basic programming knowledge. It is not just possible that there is a fatal bug; it is 100% certain there is at least one unpatched fatal bug, maybe even some that would compromise the operating system as well.

 

A fatal bug always exists in software; it is impossible to write a perfectly bug-free system, especially one that relies on DNNs. Bug finding doesn't care how good your AI is or even what the program is. Heck, perfect knowledge of the operating system is enough, or even of the processor hardware. It would take less than 2 hours to win: you just have to crash AlphaGo through a gameplay-related bug, and you win by default.

 

What you don't consider is that you're selectively applying conditions. In an FPS or RPG, using a map-hack is cheating: accessing more information than is possible by exploiting non-game functions. So if the AI can do this, then I can cheat via the same methods and the AI will always lose. You're mixing things up and claiming two different things at once. Making an AI that uses cheat engines is like cheating in a card game by secretly looking at everyone's cards. You can't have it both ways: on one hand you say AIs in complex modern games can just cheat, but then you use old games to argue you can't cheat in the same way, except you can.

 

Is it really not useful to make an AI that is fun? That is an odd position; why not make AIs that are fun? The whole point of general-purpose intelligence is this intangible aspect, not beating a simple game like Go. AI development for the purpose of being fun is woefully neglected.

 

It is certainly not beyond possibility. Why do you think it is impossible to achieve? DNNs model the human brain to a degree, and with more development you could probably get a system that is a decent emulation or better, say by using a biological neural processor (brain cells on a chip, which has already been done).

 

These tech demos are behind the times, as there are far more practical and fun applications, especially in gaming. I don't see how it is off topic to point out that this doesn't solve the fact that game AIs suck in general, and that it would be great to apply it to modern titles to make them better and more fun (not filled with perfect cheating bots, which is not even remotely fun). Being difficult doesn't mean impossible.

 

Also, it isn't as if you can't be impressed by two things at once; why does everything have to be so clear-cut? Why couldn't CP-1 and stellar fusion both be impressive at the time? While AlphaGo is impressive, it isn't really groundbreaking in the state of AI research; it is just another application of DNNs that had been predicted to solve such games for quite some time. And AlphaGo is most certainly not unbeatable or bug-free, as you seem to imply.


3 minutes ago, Doobeedoo said:

Next: AI dominating StarCraft in every way, that being the AI's simulation test for actual world domination ;)

http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/custom-ai-programs-take-on-top-ranked-humans-in-starcraft

 

They lost. But as I've been saying (and the other guy keeps brushing off), why not apply DNNs to more complex game applications and less boring games? Another key concept is not to make the best AI, but realistic AIs that make games more immersive in general.


1 minute ago, Roawoao said:

snip

This is an extremely circular argument, and you have no idea what you are talking about.

 

Enjoy!



Just now, Curufinwe_wins said:

This is an extremely circular argument, and you have no idea what you are talking about.

 

Enjoy!

Which ideas? You don't have anything to go on, and it shows in your post.

 

 

