It was reported that when Sam Altman was fired as CEO of OpenAI, the company was on the brink of a breakthrough: a new algorithm, Q*, said to solve high-school-level math problems with near-perfect accuracy, whereas GPT-4 managed only about 70 per cent. Q*'s perfect scores suggested genuine logical reasoning, a departure from merely identifying and replicating patterns learnt during training.
If true, we are one step closer to what is being described as AGI, or artificial general intelligence: a system that not only absorbs, deciphers, and replicates patterns learnt in the training phase but also reasons over them. This ability could improve in subsequent iterations, and AGI could then be equated with high intelligence.
AI, as we know it today, is narrow: its algorithms are designed for a limited range of tasks, though LLMs are more versatile. Generative AI is good at writing and language translation. It works by statistically predicting the next likely word, drawing on the contextual associations of words learnt during training. Even while solving math or writing code, these models work through statistical association. To solve genuinely novel math problems, they would need greater reasoning capabilities.
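The statistical principle of "predicting the next likely word" can be sketched with a toy bigram model. This is only an illustration of the idea; actual LLMs use neural networks over learnt token embeddings, not raw word counts, and the corpus and function names below are invented for the example.

```python
from collections import Counter, defaultdict

def build_bigram_model(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

# Toy corpus: "cat" follows "the" twice, "mat" only once,
# so the model predicts "cat" after "the".
corpus = "the cat sat on the mat and the cat slept"
model = build_bigram_model(corpus)
print(predict_next(model, "the"))
```

Nothing here "understands" grammar or meaning; the prediction falls out purely from counting associations, which is why such systems struggle with problems that demand reasoning rather than recall.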
Real AGI would perform a wide range of tasks and tackle problems far better than humans can. By definition, it would take on new tasks without instructions. Such a model could be self-aware or conscious, and might possess traits such as curiosity, self-will, or a desire for self-preservation, traits we associate with living beings.
Could such a model be ethical or altruistic? These concepts vary across cultures. In any case, AI that is not aligned with human interests could be dangerous.
A day before he was fired, Altman spoke of pushing 'the veil of ignorance back and the frontier of discovery forward'. Was he hinting at Q*? Many such rumours float around.