Game Playing AI: From Early Programs to DeepMind's AlphaGo

Aravind Kumar Kalusivalingam

Abstract

This paper traces the evolution of game-playing AI from its early programs to the groundbreaking advancements seen in DeepMind's AlphaGo. Early programs relied on simple heuristic-based strategies, using predefined rules to make decisions in games such as chess and checkers. These programs lacked adaptability and struggled to compete against human experts. With the emergence of machine learning and neural network technologies, however, the landscape changed dramatically. DeepMind's AlphaGo revolutionized the field by combining deep reinforcement learning with Monte Carlo tree search, enabling it to surpass human performance in the complex game of Go. By analyzing vast datasets and learning from self-play, AlphaGo achieved unprecedented levels of proficiency, demonstrating the power of AI in mastering intricate strategic games. This progression highlights the remarkable journey from basic rule-based systems to sophisticated neural network-driven approaches, ushering in a new era of AI-powered game playing.
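
To make the Monte Carlo tree search idea mentioned above concrete, the following is a minimal sketch of plain UCT-based MCTS applied to a toy game (single-pile Nim). It is an illustration only: AlphaGo pairs tree search with learned policy and value networks, which this sketch omits, and the game, class names, and parameters (e.g. NimState, exploration constant c=1.4, iteration count) are assumptions chosen for brevity, not part of the original paper.

```python
import math
import random

# Toy game: single-pile Nim. Players alternate removing 1-3 stones;
# the player who takes the last stone wins.
class NimState:
    def __init__(self, stones, player=1):
        self.stones = stones
        self.player = player                  # player to move: +1 or -1

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.stones]

    def play(self, n):
        return NimState(self.stones - n, -self.player)

    def is_terminal(self):
        return self.stones == 0

    def winner(self):
        # The player who just moved took the last stone and wins.
        return -self.player

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children = []
        self.untried = state.legal_moves()
        self.visits = 0
        self.wins = 0.0   # wins from the perspective of the player who moved into this node

    def uct_child(self, c=1.4):
        # UCT: exploitation (win rate) plus an exploration bonus for rarely visited moves.
        return max(self.children,
                   key=lambda ch: ch.wins / ch.visits +
                                  c * math.sqrt(math.log(self.visits) / ch.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend with UCT until reaching an expandable or terminal node.
        while not node.untried and node.children:
            node = node.uct_child()
        # 2. Expansion: add one child for a previously untried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node.children.append(Node(node.state.play(move), parent=node, move=move))
            node = node.children[-1]
        # 3. Simulation: random rollout to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        winner = state.winner()
        # 4. Backpropagation: credit each node whose mover won the rollout.
        while node is not None:
            node.visits += 1
            if node.parent is not None and node.parent.state.player == winner:
                node.wins += 1
            node = node.parent
    # Return the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move

if __name__ == "__main__":
    # With 10 stones, the optimal move is to take 2, leaving a multiple of 4.
    print("MCTS takes", mcts(NimState(10)), "stone(s) from a pile of 10")
```

In AlphaGo, the random rollout and visit statistics above are augmented by a policy network that proposes promising moves and a value network trained through self-play that evaluates positions, which is what allowed the search to scale to Go's enormous branching factor.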
