History of Artificial Intelligence (AI)

In ancient times, craftsmen imagined artificial beings endowed with intelligence. The modern genesis of artificial intelligence can be traced to the philosophical idea that intelligence emerges from the mechanical manipulation of symbols. This line of thought culminated in the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. The computer inspired many imaginative scientists to think about building an electronic brain.

Alan Turing took the lead in researching the field; he called it 'machine intelligence'. The term artificial intelligence was first used in 1956 at a workshop held at Dartmouth College in the US, and that workshop inspired AI research for decades. Researchers predicted that a machine matching human intelligence would emerge before long, and more money was poured into research.

The project turned out not to be so easy. Funding problems arose in the 1970s; those lean years came to be called the AI winter. There was a silver lining in the early 1980s, when the Japanese government's Fifth Generation computer project inspired other governments to fund artificial intelligence as well. By the late 1980s, however, funding had dried up again.

AI bloomed in the 2020s, after machine learning had shown its potential in many fields. New methods, powerful hardware and the availability of big data were all conducive to the development of artificial intelligence. The study of mathematical logic in the early twentieth century had provided the original theoretical breakthrough; these later advances made practical AI a reality.

Early research was inspired by neurology, which had shown that the human brain is an electrical network of neurons firing in all-or-nothing pulses. In 1943, McCulloch and Pitts analysed networks of idealized artificial neurons and showed how they could perform simple logical functions; they were, in effect, the first to describe a neural network. Inspired by their work, Marvin Minsky built the first neural net machine in 1951.
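To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts style neuron in Python. The weights and thresholds are illustrative choices, not values from the 1943 paper.

```python
# A McCulloch-Pitts style neuron: binary inputs, fixed weights, and a
# threshold. It fires (outputs 1) only if the weighted sum of its inputs
# reaches the threshold. Weights and thresholds here are illustrative.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Logical AND: with unit weights, both inputs must be on to reach 2.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], threshold=2))

# Lowering the threshold to 1 turns the same neuron into logical OR.
```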

Turing himself used the term 'machine intelligence'; the work was relabelled 'artificial intelligence' only after his death in 1954. In 1955, Allen Newell and Herbert Simon created the Logic Theorist. Simon had worked on the mind-body problem and claimed the program solved it, showing how a system composed of matter can acquire the properties of mind.

The term artificial intelligence (AI) was thus formally introduced by John McCarthy in 1956 during the Dartmouth workshop.

Inspired by the McCulloch and Pitts (1943) paper, neural networks were translated into hardware. Perceptron machines were built between 1957 and 1962, and MINOS was built by Alfred Brain in 1960. Though multi-layered neural networks emerged, most had only one layer of adjustable weights.

Backpropagation, which emerged in the 1980s, eventually made it practical to train networks with multiple layers of adjustable weights.

AI research also produced programs that communicate in natural language. Joseph Weizenbaum's ELIZA could carry on conversations so convincing that users sometimes felt they were interacting with a human being.

Corporations boarded the AI bandwagon in the 1980s. Many expert systems, programs that answer questions and solve problems within a narrow domain of knowledge, were developed.

In the early 1980s, Geoffrey Hinton and David Rumelhart popularized backpropagation as a method for training neural networks. In 1986, Rumelhart and McClelland published Parallel Distributed Processing, which gave new momentum to neural network research.
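As a rough illustration of the idea, here is a minimal backpropagation sketch in Python (NumPy). The XOR task, network size, learning rate and sigmoid activations are illustrative assumptions, not details taken from Rumelhart and Hinton's work.

```python
# Minimal backpropagation sketch: a one-hidden-layer network learning XOR.
# All hyperparameters below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4))  # input -> hidden weights
W2 = rng.normal(size=(4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1)        # hidden activations
    out = sigmoid(h @ W2)      # network output

    # Backward pass: propagate the error gradient layer by layer
    # using the chain rule -- the essence of backpropagation.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    W2 -= 0.5 * h.T @ d_out    # gradient-descent weight updates
    W1 -= 0.5 * X.T @ d_h

print(out.round(3))  # should approach [0, 1, 1, 0]
```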

Between 1993 and 2011, AI became firmly established. In 1997, IBM's Deep Blue defeated world chess champion Garry Kasparov. Computer speed and capacity increased through the 1990s. The concept of the intelligent agent, a system that perceives its environment and acts to maximize its chances of success, came into vogue. Judea Pearl brought probability and decision theory into AI, and mathematical tools such as Markov models, stochastic modelling and classical optimization became handy for the field.

AI was found useful in robotics, logistics, speech recognition, banking software, medical diagnosis and search engines.

However, these successes were generally attributed to advances in computer science rather than to AI itself.

At the beginning of the new millennium, big data emerged, computing grew faster, and machine learning techniques became more advanced.

By 2016, AI came to be recognized as a distinct market. There were rapid advances in deep learning, especially convolutional neural networks (CNNs) and recurrent neural networks (RNNs).

In 2017, Google researchers (Vaswani et al.) proposed the transformer architecture, which is built around the attention mechanism. This led to the large language models (LLMs).
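For the curious, here is a minimal sketch of the scaled dot-product attention at the heart of the transformer. The toy shapes and the self-attention setup are illustrative; a real transformer adds learned projections, multiple heads, positional encodings and masking.

```python
# Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
# Each query attends to all keys and returns a weighted mix of values.
import numpy as np

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

# Toy self-attention: 3 tokens with 4-dimensional embeddings (Q = K = V).
x = np.random.default_rng(0).normal(size=(3, 4))
print(attention(x, x, x).shape)  # (3, 4): one mixed vector per token
```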

Foundation models, LLMs trained on vast quantities of unlabeled data, emerged around 2018.

OpenAI released GPT-3 in 2020. In 2023, Microsoft researchers who tested GPT-4 argued that it showed 'sparks' of artificial general intelligence.
