Artificial Intelligence Plays Video Games

Artificial intelligence is moving into the future one pixel at a time through video games. Imagine a computer that can conceptualize like a child making patterns with colored blocks and, within hours, put together complex shapes. Then imagine a computer with no coded instructions “learning” to play a game. It starts by playing at random, because it has no preexisting concept of how the game works. Once the artificial intelligence understands how to earn “rewards” from the game, it can develop the capacity to win.
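For readers who want a concrete picture of that trial-and-error loop, the short Python sketch below is only an illustration, not DeepMind’s code: a toy agent plays a made-up, one-move “catch the ball” game, starts out choosing at random, and gradually comes to favor the moves that earned it rewards.

# Illustrative only: a toy agent learns from rewards with no prior
# knowledge of the game (tabular learning on a made-up one-move game).
import random
from collections import defaultdict

ACTIONS = [-1, 0, 1]        # move the paddle left, stay, or move right
q = defaultdict(float)      # learned value of each (state, action) pair, starts at zero

def play_round(epsilon=0.1, alpha=0.1):
    paddle, ball = 2, random.randint(1, 3)   # positions on a five-cell row
    state = (paddle, ball)
    if random.random() < epsilon:            # sometimes explore at random...
        action = random.choice(ACTIONS)
    else:                                    # ...otherwise take the best-known action
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    paddle = max(0, min(4, paddle + action))
    reward = 1.0 if paddle == ball else 0.0  # a point for catching the ball
    # nudge the stored estimate toward the reward that was actually received
    q[(state, action)] += alpha * (reward - q[(state, action)])
    return reward

wins = sum(play_round() for _ in range(5000))
print(f"caught the ball in {int(wins)} of 5000 rounds")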

Demis Hassabis, Volodymyr Mnih, Koray Kavukcuoglu, and David Silver debuted their work at the “First Day of Tomorrow” technology conference in April 2014. At the conference, the artificial intelligence (A.I.) was presented with the classic Atari game Breakout for the first time. After 30 minutes of play time, the A.I. was attempting to get the paddle to the ball, showing it understood the basics of the game.

After thirty more minutes and 200 rounds, the artificial intelligence missed the ball only once in a while. By the 300th game, the program was no longer missing the ball at all. Then the A.I. drilled a hole through the wall and made an impressive bank shot into it, so that the ball bounced from one side wall to the other, winning the game. The A.I. was introduced to the game, won the game, and discovered a way to win that its creators had not considered.

The artificial intelligence also taught itself to play several other Atari games of different styles and skill levels, 30 to be exact. The company behind it is called DeepMind, and Google bought it for $650 million in January 2014, not long after Hassabis first revealed his A.I.’s capabilities.

DeepMind’s artificial intelligence is modeled loosely on the human brain. It combines a neural network with a rewards-driven learning algorithm. The brain processes new material through networks of nodes arranged in layers of different kinds of connections, and uses those layers to make sense of what it encounters. The artificial intelligence operates in a similar way.
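The sketch below gives a rough feel for the “layers of connected nodes” idea. It is a generic feed-forward network written with NumPy; the 84-by-84 input and the four joystick actions are sizes chosen for illustration, not DeepMind’s published architecture.

# Illustrative only: screen pixels flow through two layers of weighted
# connections and come out as a score for each possible joystick action.
import numpy as np

rng = np.random.default_rng(0)
pixels = rng.random(84 * 84)                    # a flattened game screen (assumed size)

w1 = rng.standard_normal((84 * 84, 64)) * 0.01  # connections from pixels to 64 hidden nodes
w2 = rng.standard_normal((64, 4)) * 0.01        # connections from hidden nodes to 4 actions

hidden = np.maximum(0, pixels @ w1)             # each hidden node sums its weighted inputs
action_scores = hidden @ w2                     # one score per possible action
print("highest-scoring action:", int(action_scores.argmax()))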

The artificial intelligence’s programming allows it to remember past performances and what it did to earn higher scores. It “learns” from those memories and changes how it plays in the future. This ability, combined with the neural network, allows the artificial intelligence to become a gamer, complete with a drive to win.
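The sketch below shows one common way such a memory can work; the function names, batch size, and discount factor are illustrative assumptions rather than details of DeepMind’s implementation. The agent stores what it saw, what it did, and what reward followed, then revisits random samples of those memories to adjust its estimates of which moves lead to higher scores.

# Illustrative only: a rolling memory of past play, replayed in random
# batches to update the agent's estimates of each move's value.
import random
from collections import deque

memory = deque(maxlen=10000)   # a rolling record of past moments of play
q_values = {}                  # estimated value of each (state, action) pair

def remember(state, action, reward, next_state):
    memory.append((state, action, reward, next_state))

def replay(batch_size=32, alpha=0.1, gamma=0.99):
    batch = random.sample(list(memory), min(batch_size, len(memory)))
    for state, action, reward, next_state in batch:
        # the target blends the reward just seen with the best outcome the
        # agent currently expects from the situation that followed
        best_next = max(q_values.get((next_state, a), 0.0) for a in (0, 1, 2))
        target = reward + gamma * best_next
        current = q_values.get((state, action), 0.0)
        q_values[(state, action)] = current + alpha * (target - current)

remember("frame_1", 2, 1.0, "frame_2")   # e.g. taking action 2 earned a point
replay()
print(q_values)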

In an Atari game, each pixel carries a byte of information, and the number of possible moves grows into the hundreds or thousands as a game goes on. The all-purpose code DeepMind created can handle a much wider range of games, and is closer to the real world, than previous artificial intelligence programming.
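Some back-of-the-envelope arithmetic shows how quickly that information piles up. The screen size and joystick action count below are common figures for Atari 2600 emulation, used here as assumptions rather than numbers from DeepMind.

# Illustrative arithmetic: how much raw data the program confronts.
width, height = 160, 210          # assumed Atari screen dimensions in pixels
bytes_per_frame = width * height  # one byte of information per pixel
print(f"{bytes_per_frame:,} bytes of pixel information per frame")

actions = 18                      # assumed number of distinct joystick inputs
moves = 100
print(f"about {actions ** moves:.2e} possible ways to play just {moves} moves")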

DeepMind’s next step is to design an artificial intelligence program that plays video games from the 1990s. Even though that goal may seem simple and modest, there are much bigger plans for future A.I. programs. Hassabis has partnered with satellite operators as well as financial institutions to see if the A.I. could “play” their data sets, making weather predictions or trading oil futures.

There were some games in which the artificial intelligence was not able to reach human-level play, such as Pac-Man, Private Eye, and Montezuma’s Revenge; these games require longer-term planning or more advanced path-finding. Hassabis could broaden the artificial intelligence’s programming so that it takes risks and makes bolder decisions.

By Jeanette Smith

Sources:
The New Yorker
The Economist
PC Magazine
Photo courtesy of Christophe Richard Flickr Page – License
Photo courtesy of Moparx Flickr Page – License
