Cade Metz has written a book, Genius Makers, which traces the history of Artificial Intelligence (AI) and how it reached its present status.
What is currently known as AI originated from the idea of neural networks in the 1940s, when researchers studied how neurons in the brain function and whether an electronic version of them could be created.
Frank Rosenblatt, a psychology professor at Cornell, demonstrated in the late 1950s how a computer could learn to distinguish simple patterns. His creation, called the Perceptron, was heavily promoted in the media, but it had little practical application.
Marvin Minsky, Rosenblatt’s contemporary, had also explored neural networks but was convinced they were not the way forward. He co-wrote a book arguing that self-learning systems simulating neural networks had severe limitations.
Minsky, John McCarthy and Nathaniel Rochester proposed the term Artificial Intelligence at the 1956 Dartmouth workshop. They championed Symbolic AI, which teaches a computer to do specific things by giving it very specific instructions. This idea eventually prevailed, and variations of Symbolic AI evolved over several decades.
Geoff Hinton, however, did not give up on neural networks, and his research, aided by his many students, led to the field of Deep Learning. Yann LeCun, a French-born computer scientist who moved to the United States, contributed many breakthroughs.
Google hired Hinton and his students, and Facebook hired LeCun. Microsoft also spent heavily on AI research, but it fell behind because it had initially backed Symbolic AI and had to catch up on Deep Learning.
GPUs gave AI research a huge boost. DeepMind, a UK-based lab, was also working on AI, and Google bought it.
Google Brain and DeepMind developed a rivalry. Google Brain concentrated on using Deep Learning for technologies that could be introduced quickly, while DeepMind worked on more abstract problems. DeepMind's AlphaGo program beat the world's best Go players.
OpenAI, an artificial intelligence lab co-founded by Elon Musk, gave away its research for free.