Emergent Abilities in LLMs

The journey of artificial intelligence began in the 1950s. At first we dealt with Artificial Narrow Intelligence (ANI), restricted to a specific skill. Later we reached the stage of generative AI, where large language models generate output by drawing on what they learned from the datasets they were trained on. We are now on our way to artificial general intelligence (AGI), where models will perform as well as human beings and at times even surpass them. When AI matches human intelligence, we reach the singularity, and when AI goes beyond that point, it is called superintelligence.

Against this background, let us look at how LLMs have started showing some surprising, unpredictable behaviours, referred to as ‘emergent abilities’. Some of these pertain to basic math skills, some to computer coding, and others to decoding movie titles from emojis. It is interesting to learn what these emergent abilities are and why and how they arise.

By emergent abilities, I mean abilities for which the model has not been explicitly programmed. They emerge from the way the model processes and generates language, and they arise from the model’s ability to learn patterns from its training data.
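A well-known illustration is few-shot in-context learning: the model is never given an arithmetic module, yet a handful of worked examples in the prompt is often enough for larger models to continue the pattern. The sketch below is generic and assumes nothing about a particular model or vendor; it only shows what such a prompt looks like.

```python
# A few-shot prompt for two-digit addition. No arithmetic code is involved;
# the model is only asked to continue a textual pattern, and the ability to
# do so reliably tends to appear only beyond a certain model scale.
few_shot_prompt = "\n".join([
    "Q: 23 + 58  A: 81",
    "Q: 47 + 36  A: 83",
    "Q: 64 + 19  A: ",   # the model is expected to complete this line
])
print(few_shot_prompt)
```

Smaller models typically fail to continue the pattern, while sufficiently large ones succeed, which is why this kind of in-context learning is often cited as an emergent ability.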

Examples of emergent abilities include answering questions with the help of search engines while keeping the responses grounded in the search results, summarizing long text into concise pieces, and translating between languages that are very different from each other. LLMs also create poems, code, scripts, musical pieces and more.
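To make these examples concrete, here is a minimal sketch of how such abilities are typically probed: through nothing more than plain prompts. The query_llm helper below is a hypothetical placeholder, not any specific vendor’s API; in practice you would replace its body with a call to whichever chat-completion service you use.

```python
# Probing a few emergent abilities with plain prompts.
# query_llm is a hypothetical stand-in for a real chat-completion call;
# here it only echoes the prompt so the script runs end to end.

def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; prints the prompt and returns a stub reply."""
    print(f"[prompt sent to model]\n{prompt}\n")
    return "<model reply would appear here>"

# Summarization: no dedicated "summarize" module exists in the model,
# yet a plain instruction is usually enough.
summary = query_llm("Summarize the following paragraph in one sentence:\n...")

# Translation between distant languages, again with no task-specific code.
translation = query_llm("Translate into Japanese: 'The library opens at nine.'")

# Emoji decoding, one of the playful abilities mentioned above.
movie = query_llm("Which movie does this emoji sequence describe? 🦁👑")
```

The point is that none of these tasks have dedicated code paths; the same generic text-in, text-out interface handles them all.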

It is debatable to what extent LLMs truly show emergent abilities. Some argue that these are only pattern-matching models, and that their abilities cannot genuinely be called emergent.

It should be noted that emergent abilities in LLMs are not on par with intelligence. Intelligence involves acquiring and applying knowledge and skills. Some behaviours of LLMs border on intelligence, but they still lack the level of understanding and reasoning that humans have.

We are not able to predict the emergent abilities of LLMs; a model could develop an ability unforeseen by its designers. This makes LLMs fascinating and, at the same time, risky.
