DeepMind’s Framework for AGI

The race to achieve AGI, or artificial general intelligence, is on. Researchers hold differing views on how much progress has been made: some say we are still very far from AGI, while others see 'sparks of AGI' in present-day LLMs.

Shane Legg, along with other research scientists at Google DeepMind, has thrown new light on the concept of AGI.

It is necessary to be clear about what we mean by AGI and its attributes: performance, generality and autonomy.

The scientists surveyed nine different definitions of AGI, ranging from the Turing Test and the Coffee Test to levels of consciousness, task capabilities and economic measures. None of these definitions is perfect. Present-day LLMs do pass the Turing Test, but generating elegant text is not enough for AGI. Whether machines possess awareness and consciousness is a moot point. And machines cannot do everything, e.g. make good tea.

Researchers have suggested six criteria for measuring AGI.

Capabilities: AGI should be defined by its capabilities, not by sentience or consciousness.

Performance and Generality: Both performance and generality must be confirmed, so that these systems can perform a wide range of tasks and execute them well.

Cognitive and Metacognitive Levels: These traits must be present. However, there should be no unnecessary focus on embodiment and physical tasks.

AGI-level Tasks: The system should have the potential to perform such tasks. It need not actually be deployed, since deployment raises legal and social issues.

AGI – Not an End-point But a Path: AGI is not an end-point but a path, and there are different levels of AGI along that path.

Five Levels of Performance and Generality

DeepMind researchers have created a matrix that measures 'performance' and 'generality' across five levels.

At Level 0, there is no AI. At Level 1, there is emerging AI. At Level 2, the system is competent. At Level 3, it is an expert. At Level 4, it is a virtuoso. At Level 5, it is superhuman, outperforming 100 per cent of humans.

At each level, the system could be narrow or general.

ChatGPT, Bard and Llama-2 are competent (Level 2) at some narrow tasks, such as text generation and simple coding. At other tasks, e.g. reasoning, planning and mathematics, they are only emerging (Level 1). Taken as general systems, today's models therefore represent emerging AGI until they become proficient across a broader set of tasks.
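The idea that a model's general rating is capped by its weakest broad capability can be sketched in a few lines of code. This is a minimal illustration of the article's reasoning, not code from the DeepMind paper; the level names come from the matrix above, while the function and the per-task ratings are hypothetical.

```python
# Performance levels from the DeepMind matrix described above.
PERFORMANCE_LEVELS = {
    0: "No AI",
    1: "Emerging",
    2: "Competent",
    3: "Expert",
    4: "Virtuoso",
    5: "Superhuman",
}

def general_level(task_ratings: dict[str, int]) -> int:
    """Illustrative rule: a model counts as generally at level N only
    once every task in the suite is rated at least N, so the overall
    general level is the minimum of its per-task ratings."""
    return min(task_ratings.values())

# Hypothetical ratings echoing the article's example for current LLMs:
llm = {
    "text generation": 2,   # competent
    "simple coding": 2,     # competent
    "reasoning": 1,         # emerging
    "planning": 1,          # emerging
    "mathematics": 1,       # emerging
}
print(PERFORMANCE_LEVELS[general_level(llm)])  # prints "Emerging"
```

So a model that is competent at a few narrow tasks but only emerging elsewhere still rates, as a general system, at the emerging level.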

Models are rated according to their demonstrated performance; on deployment, a system may not show the same level in practice.

AGI should be tested against a broad suite of cognitive and metacognitive tasks. It is not possible to enumerate all the tasks of general intelligence; there are always new ones.

Autonomy and Risk

For AI systems, the scientists use a separate matrix of autonomy and risk. At Level 0, there is no autonomy: for example, a driver has to drive the car entirely on his/her own. At Level 1, AI is used as a tool. At Level 2, AI acts as a consultant. At Level 3, AI collaborates. At Level 4, AI is an expert. At Level 5, AI is an agent: fully autonomous, with no need for human intervention.
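The six autonomy levels can likewise be written down as a small lookup. Again, this is an illustrative sketch rather than anything from the paper: the level names follow the list above, while the one-line role descriptions and the helper function are my own paraphrase.

```python
# Autonomy levels from the matrix above; the role glosses are paraphrases.
AUTONOMY_LEVELS = {
    0: ("No AI", "the human does everything, e.g. drives the car unaided"),
    1: ("AI as a tool", "the human stays in control; AI automates sub-tasks"),
    2: ("AI as a consultant", "AI gives advice when the human invokes it"),
    3: ("AI as a collaborator", "human and AI work together"),
    4: ("AI as an expert", "AI leads; the human provides guidance"),
    5: ("AI as an agent", "fully autonomous; no human intervention needed"),
}

def oversight_required(level: int) -> bool:
    """Every level below full agency still assumes a human in the loop."""
    return level < 5

for level, (name, role) in AUTONOMY_LEVELS.items():
    status = "human oversight" if oversight_required(level) else "autonomous"
    print(f"Level {level}: {name} ({status})")
```

Tying risk to the autonomy level, as the researchers do, follows naturally from such a table: the higher the level, the less human oversight remains in the loop.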

Depending on the level of autonomy, risks are assigned to the system. DeepMind has thus created a framework for AGI.
