There has been massive growth and advancement across all aspects of Artificial Intelligence over recent years, and much of that progress has happened in the public eye, so you've probably been aware of it. What you might not have realised is just how long the journey has been. Take a look at this article from the BBC on the history of Machine Learning; the journey has been going on for more than 200 years. I've also summarised the article's key dates and points below...

Sourced from: https://www.bbc.com/timelines/zypd97h#zx2r3k7 

  

"Today, machine learning powers tools such as self-driving cars, voice-activated assistants and social media feeds. 

However the ideas behind machine learning have a long history, and rely on maths from hundreds of years ago and enormous developments in computing in the last 70 years."

 

Key Dates: 

Pre-1940:

Major breakthroughs that underpin the mathematical aspects of machine learning include:

  • The work of Thomas Bayes in the 18th Century. His work led Pierre-Simon Laplace to define Bayes' Theorem in 1812
  • Adrien-Marie Legendre developed the Least Squares method for data fitting in 1805
  • Andrey Markov described analysis techniques later called Markov Chains in 1913

All of these techniques were fundamental to the development of machine learning, and they remain central to it today.
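
If you have not met these three ideas before, the short sketch below may help make them concrete. It is purely illustrative: the medical-test numbers, the noisy line, and the two-state weather chain are invented for demonstration and are not taken from the BBC article.

```python
# Illustrative sketch only - the numbers and examples below are invented for
# demonstration and are not taken from the BBC article.
import numpy as np

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical medical test: how likely is disease given a positive result?
p_disease = 0.01                     # prior probability of having the disease
p_pos_given_disease = 0.95           # test sensitivity
p_pos_given_healthy = 0.05           # false-positive rate
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {p_disease_given_pos:.3f}")

# Least Squares: fit a straight line y = m*x + c to noisy observations
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + np.random.normal(scale=1.0, size=x.shape)
m, c = np.polyfit(x, y, deg=1)       # ordinary least-squares fit
print(f"Fitted line: y = {m:.2f}x + {c:.2f}")

# Markov Chain: the next state depends only on the current state
transition = np.array([[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
                       [0.5, 0.5]])  # rainy -> sunny, rainy -> rainy
state = np.array([1.0, 0.0])         # start in the "sunny" state
for _ in range(5):
    state = state @ transition       # take one step of the chain
print(f"Weather distribution after 5 days: {state}")
```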

1948: 

The modern computing revolution began, as stored-program computers were developed that could hold their instructions in the same memory they used for data.

The first computer of this type was the Manchester Small-Scale Experimental Machine, nicknamed "Baby".

 

1950: 

Alan Turing, widely regarded as the father of modern computing, published "Computing Machinery and Intelligence". In this work he asked the question "Can machines think?" and proposed the Imitation Game, a test intended to determine whether a computer was intelligent.

 

1951: 

Marvin Minsky and Dean Edmonds built the first artificial neural network. The Stochastic Neural Analog Reinforcement Computer (SNARC) was able to learn how to navigate a maze.

 

1974:

Anticipated breakthroughs in AI failed to materialise, and people became disillusioned with the field. This led to the first AI winter and a reduction in funding after the Lighthill Report to Parliament noted the failure to deliver on "grandiose objectives".

  

1996:

AI came back into the public eye when Deep Blue, an IBM computer, beat World Chess Champion Garry Kasparov in the first game of a match. Ultimately, Kasparov won the 1996 match.

However, when an upgraded Deep Blue took part in a rematch in 1997, the computer won the match 3.5 games to Kasparov's 2.5. 

 

2006:

Backpropagation was brought back into the light by Geoff Hinton, who took advantage of fast modern processors. Backpropagation is a core technique used to train deep neural networks, and these deep learning networks are key to modern machine learning.
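
To give a flavour of what backpropagation actually does, here is a minimal sketch of the idea in NumPy: a tiny two-layer network learning the classic XOR problem. The network size, learning rate and training data are arbitrary choices made purely for illustration, not details from the article or from Hinton's work.

```python
# Minimal, illustrative backpropagation sketch: a tiny two-layer network
# learning XOR. All sizes and settings are arbitrary choices for demonstration.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)                 # hidden activations
    out = sigmoid(h @ W2 + b2)               # network predictions

    # Backward pass: propagate the error gradient from output to input
    d_out = (out - y) * out * (1 - out)      # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)       # gradient at the hidden layer

    # Gradient-descent updates
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should end up close to [[0], [1], [1], [0]]
```

Modern deep learning libraries compute these gradients automatically, but the underlying idea is the same chain-rule bookkeeping shown here, applied to networks with millions of weights.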

 

2014: 

DeepMind Technologies, a British company founded in 2010, was acquired by Google.

 

2016:

AlphaGo was developed by DeepMind researchers and trained to play Go against both computers and humans. The computer beat Lee Sedol, the world's second-best Go player, in 2016.

In 2017, AlphaGo beat the world No.1 Go player, Ke Jie.

 

Looking to the future... 

Some computer scientists believe that a singularity will eventually be reached. If this does happen, the resulting evolution in AI would quickly outpace and outmatch the human brain.

The likelihood of this singularity is studied by organisations such as the Cambridge Centre for the Study of Existential Risk: although the singularity is deemed very unlikely, the threat it could pose would be great.

 

 

If you would like to look at this topic more deeply or read the entirety of the article, you can do so here: https://www.bbc.com/timelines/zypd97h#zx2r3k7

 

 

 
