What if AI systems could communicate with each other, verbally or in writing, in order to learn new tasks? It would be much like humans talking to and understanding one another: a mother describes a fish curry recipe to her daughter, and the daughter follows the instructions to produce the dish. Research from the University of Geneva has now made this kind of AI-to-AI communication possible.
The researchers connected a pre-trained language model to a simpler artificial neural network; in other words, a large pre-trained model was coupled to a small network. Together, the pair mimics the human brain areas responsible for perceiving, interpreting and producing language.
This innovation opens up opportunities in robotics, notably the development of humanoid robots that can communicate with and understand one another, and that can also communicate with and understand human beings.
The process mirrors human cognition. One AI learns a series of basic tasks and performs them, then describes to a "sister" AI what it is doing, and the sister AI reproduces the performance. The results are promising: the second network carries out a new task, without any prior training, solely on the basis of the instructions it receives. This dual capacity, learning a task from instructions and then explaining it to another agent so it can do the same, distinguishes human beings from other species, which learn new tasks mainly through positive or negative reinforcement signals.
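The exchange can be pictured, very loosely, with the toy Python sketch below. It assumes two placeholder agents with hypothetical methods and a deliberately trivial task set; it is an illustration of the protocol, not the study's code.

```python
# Toy illustration of the protocol: agent A has practised a task and can put
# it into words; agent B has never practised it and acts purely from the
# written instruction it receives. All names and tasks are placeholders.
class TeacherAgent:
    def __init__(self):
        # "Training": A has learned which action each task requires.
        self.learned = {"go-left": "press the left key",
                        "go-right": "press the right key"}

    def describe(self, task):
        # Put the learned task into words for another agent.
        return f"When the cue appears, {self.learned[task]}."

class SisterAgent:
    def perform_from(self, instruction):
        # No prior training on the task: act from the instruction alone.
        return "LEFT" if "left" in instruction else "RIGHT"

teacher, sister = TeacherAgent(), SisterAgent()
message = teacher.describe("go-left")
print(message, "->", sister.perform_from(message))  # ... -> LEFT
```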
Natural language processing (NLP) seeks to replicate this human faculty in machines, and it is built on artificial neural networks.
The researchers used S-BERT, an existing model of some 300 million neurons that is pre-trained to understand language, and connected it to a simpler network of a few thousand neurons.
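In practice, a sentence-embedding model of this kind turns a written instruction into a fixed-size numerical vector that a smaller network can work with. The sketch below uses the sentence-transformers library with a publicly available checkpoint as an illustrative stand-in; it is not the exact model or code used in the study.

```python
# Minimal sketch: encode a written task instruction as a sentence embedding.
# The checkpoint name is an illustrative stand-in, not the study's model.
from sentence_transformers import SentenceTransformer

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # pre-trained sentence encoder

instruction = "Point to the stimulus that appeared on the left."
embedding = sbert.encode(instruction)  # NumPy vector, e.g. 384 dimensions

print(embedding.shape)
```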
First, the researchers trained this small network to simulate Wernicke's area, the part of the brain that perceives and interprets language. Second, the network was trained to reproduce Broca's area which, under the influence of Wernicke's area, is responsible for producing and articulating words. The whole experiment was conducted on ordinary laptop computers.
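One way to picture the setup is a small network sitting on top of frozen sentence embeddings, with a comprehension pathway (Wernicke-like: instruction in, action out) and a production pathway (Broca-like: internal state out, as an instruction embedding for a sister network). The PyTorch sketch below is an assumption-laden illustration of that idea, not the authors' exact architecture; the layer sizes and number of tasks are invented for the example.

```python
# Illustrative sketch, not the study's architecture: a small network reads a
# frozen sentence embedding, chooses an action (comprehension pathway) and
# re-describes the task as an embedding (production pathway).
import torch
import torch.nn as nn

EMB_DIM = 384      # dimension of the frozen sentence embedding (assumed)
HIDDEN = 256       # a few hundred units, in the spirit of the small network
N_ACTIONS = 8      # number of basic sensorimotor tasks (assumed)

class SensorimotorNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Wernicke-like pathway: interpret the instruction embedding.
        self.comprehend = nn.Sequential(nn.Linear(EMB_DIM, HIDDEN), nn.Tanh())
        # Policy head: choose an action for the current task.
        self.act = nn.Linear(HIDDEN, N_ACTIONS)
        # Broca-like pathway: describe the task back as an instruction embedding.
        self.produce = nn.Linear(HIDDEN, EMB_DIM)

    def forward(self, instruction_emb):
        h = self.comprehend(instruction_emb)
        action_logits = self.act(h)       # what to do
        described_emb = self.produce(h)   # how to tell a sister network
        return action_logits, described_emb

net = SensorimotorNet()
emb = torch.randn(1, EMB_DIM)             # stands in for a real S-BERT embedding
logits, described = net(emb)
print(logits.shape, described.shape)      # (1, 8) and (1, 384)
```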
The model opens new horizons for understanding the interaction between language and behaviour. It is particularly relevant to robotics, where it matters that machines can talk to one another.