LLMs and AGI

There are certain misgivings about LLMs. Some feel they have been overhyped; some consider them an impediment to achieving AGI. LLMs do have limitations: users do not fully trust them, and they are not always accurate or reliable. The criticism is by now familiar, since current LLMs do struggle with reasoning and logic.

Despite the skepticism, big tech has not stopped building ever-better LLMs, with each company trying to outdo the others.

Yann LeCun believes LLMs will not lead to AGI. Some researchers go further, arguing that OpenAI's focus on LLMs has slowed progress toward AGI by five to ten years, and that novel approaches such as the Abstraction and Reasoning Corpus (ARC) are needed. LeCun advises attaining animal-level intelligence first.

LLMs at times struggle with apparently simple tasks. Yoshua Bengio, one of the godfathers of AI, says that current AI is missing some ingredients of human intelligence.

LLMs fail at common games such as tic-tac-toe, and GPT-4 struggled with Sudoku puzzles. DeepMind contends that LLMs lack genuine understanding; consequently, they cannot self-correct or adjust their responses. LLM-based chatbots are also poor at math problems.
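What makes tic-tac-toe hard for a chatbot is that it demands exact, rule-based bookkeeping rather than fluent text. A minimal sketch of that bookkeeping, written here as ordinary code (all names are illustrative, not from any particular LLM evaluation):

```python
# The exact constraints a tic-tac-toe player must track -- the kind of
# deterministic rule-following that LLM chatbots reportedly get wrong.
# A board is a 9-character string of 'X', 'O', or '.' (empty).

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
    (0, 4, 8), (2, 4, 6),             # diagonals
]

def winner(board):
    """Return 'X' or 'O' if a winning line is complete, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    """Indices of empty squares -- a constraint chatbots often violate."""
    return [i for i, cell in enumerate(board) if cell == "."]

if __name__ == "__main__":
    board = "XXX.O.O.."
    print(winner(board))       # X has completed the top row
    print(legal_moves(board))  # only empty squares are playable
```

A dozen lines of code enforce these rules perfectly every time; a model that merely predicts plausible text can still place a mark on an occupied square or miss a completed line.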

Still, the time is not ripe to write them off. Though we have not yet reached human-level intelligence, that does not rule out reaching it in the future.

GPT-4 shows some grasp of complex emotions, reportedly outscoring human psychologists on tests of emotional awareness. GPT-5, according to OpenAI CTO Mira Murati, will have Ph.D.-level intelligence.

According to Ilya Sutskever, text is a projection of the world, so in learning to predict text, LLMs build a cognitive architecture from scratch, tracing the evolution of learning in a way that resonates with real-time learning.

Research on LLMs also continues. In the future, they may come to understand cause-and-effect relationships, and they can be combined with neurosymbolic AI.
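One common reading of "neurosymbolic AI" is a propose-and-verify loop: a neural model drafts candidate answers, and a symbolic component checks them exactly. A minimal sketch of that pattern, with the neural side replaced by a hypothetical stub (`llm_propose` is not a real API):

```python
# Propose-and-verify sketch of the neurosymbolic idea: a neural model
# proposes answers, and a symbolic checker accepts only correct ones.
# `llm_propose` is a hypothetical stand-in for a real language model.

def llm_propose(question):
    # Real LLMs often return plausible near-misses on arithmetic,
    # so we simulate a ranked list of candidate answers.
    return [57, 56, 58]

def symbolic_check(question, candidate):
    # Exact symbolic verification: recompute the arithmetic directly.
    a, op, b = question.split()
    a, b = int(a), int(b)
    truth = {"+": a + b, "-": a - b, "*": a * b}[op]
    return candidate == truth

def answer(question):
    # Accept the first candidate that survives symbolic checking.
    for candidate in llm_propose(question):
        if symbolic_check(question, candidate):
            return candidate
    return None

print(answer("7 * 8"))  # the checker rejects 57 and accepts 56
```

The design point is the division of labor: the neural side supplies flexible guesses, while the symbolic side contributes the exactness that, as noted above, chatbots lack on math problems.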

LLMs are worth another shot on the road to AGI.
