
Made in India Magazine | February 27, 2021





Jason Lee
  • On April 13, 2015

Google is at the cutting edge of technology at the moment. With its interest in AI and machine learning, its newest AI system, DeepMind, has beaten the top scores of professional gamers in 23 classic Atari 2600 games. Read on to find out more.

If you belong to the generation that grew up on Atari 2600 consoles, you will know the classic game Breakout, which involves bouncing a ball off a paddle at the bottom edge of the screen to clear out bricks clumped in various patterns at the top. If you were a ‘pro’ at the game, you would have figured out pretty quickly that the best strategy is to tunnel through a narrow column of bricks and then watch the ball bounce on its own between the layer of bricks and the top edge of the screen.
DeepMind, an AI system developed by Google, learned to do just this without being taught.
Many AI programs have been used to play video games before, but most of them were fed game data directly from the source code. DeepMind, by contrast, was told none of the rules and was expected to learn how to play just by ‘watching’, much as a human would.
To perform these learning tasks it uses a deep neural network which, like a human brain, contains networks of processing units called neurons. The connections between neurons carry weights that strengthen or weaken depending on the reward or punishment received at the end of each attempt. Over a period of learning, certain network paths thus become stronger than others. This is how learning and pattern matching happen in the human brain as well.
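The strengthen-or-weaken idea can be sketched in a few lines of code. This is a toy illustration, not DeepMind's actual method: the real system learns millions of neural network weights from raw pixels, while here a single hypothetical weight per action is nudged up or down by reward.

```python
import random

# Toy sketch of reward-driven learning (hypothetical, not DeepMind's code):
# one "connection weight" per action, strengthened by reward and weakened
# by punishment, so the rewarded action gets chosen more often over time.
weights = {"left": 0.0, "right": 0.0}   # one weight per possible action
learning_rate = 0.1

def choose_action():
    # Pick the action with the stronger weight; break ties randomly.
    if weights["left"] == weights["right"]:
        return random.choice(["left", "right"])
    return max(weights, key=weights.get)

def environment_reward(action):
    # Toy environment: "right" is secretly the good move (+1), "left" is not (-1).
    return 1.0 if action == "right" else -1.0

random.seed(0)
for _ in range(100):
    action = choose_action()
    reward = environment_reward(action)
    # Reward strengthens the chosen path; punishment weakens it.
    weights[action] += learning_rate * reward

print(weights)  # the "right" weight ends up clearly stronger than "left"
```

After a hundred tries the rewarded path dominates, which is the same basic dynamic the article describes, just stripped down to two connections.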
DeepMind has been tested on 49 of Atari’s games, showing that the network is robust enough to handle more than one game: it does not ‘unlearn’ the patterns acquired in one game when it moves to another, for instance. On 23 of these games, DeepMind managed to trounce the top scores of professional gamers.
The point of this research is believed to be Google’s interest in processing and extracting patterns from the vast amounts of picture and video data available on its Google and YouTube servers. Learning an Atari game on one’s own requires a processing rate of about 2 million pixels per second. This research should also feed into autonomous cars and other long-term applications that Google is interested in.
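The 2-million figure checks out with simple arithmetic, assuming the commonly cited Atari 2600 emulator frame size of 210 × 160 pixels refreshed at 60 frames per second:

```python
# Sanity-check of the "2 million pixels per second" figure.
# Assumption: Atari 2600 emulator frames are 210 x 160 pixels at 60 fps.
width, height, fps = 160, 210, 60
pixels_per_second = width * height * fps
print(pixels_per_second)  # 2016000 -- roughly 2 million
```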
The age of the robots may just be upon us.

