In a recent interview in Geneva, Sam Altman, CEO of OpenAI, spoke about AI safety. He was, however, reticent about how the GPT models work. According to him, OpenAI has not solved interpretability or explainability, that is, the ability to trace how AI and ML systems arrive at their decisions.
This raises an obvious question: if the inner workings of LLMs are not understood, is it right to release new and more powerful models? Altman dodged the question at first, then answered that even without a complete understanding, AI systems can generally be considered safe and robust. He elaborated using the example of the human brain: we do not fully understand what happens at the neuron-to-neuron level, yet we follow certain rules of thought and can ask others to explain the reasoning behind their conclusions.
Altman referred to a black-box quality, a sense of mystery behind the functionality. Generative AI systems, like human brains, create new content based on existing datasets and learn over time. GPT may lack emotional intelligence or human consciousness, but in both cases it is difficult to understand how the algorithms, or the brain, arrive at the conclusions they draw.
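A toy illustration of that black-box quality is sketched below; the dataset, layer sizes, and training loop are all hypothetical choices made for this example, not anything OpenAI has described. A tiny neural network learns the XOR function from four examples and typically answers correctly, yet the "knowledge" it has acquired is nothing more than a grid of numeric weights that no one can read as a rule.

```python
# Minimal sketch (assumed toy setup): a small network learns XOR,
# but its learned weights are opaque numbers, not readable rules.
import numpy as np

rng = np.random.default_rng(0)

# The "existing dataset" the model learns from: the four XOR cases.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 8 units, sigmoid activations.
W1 = rng.normal(size=(2, 8))
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain gradient descent on squared error.
lr = 2.0
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output
    err = out - y
    # Backpropagation of the error through both layers.
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# The predictions are typically close to the XOR targets [0, 1, 1, 0]...
print("predictions:", np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
# ...but the model's "explanation" is only a matrix of opaque numbers.
print("learned hidden weights:\n", np.round(W1, 2))
```

Scaling this from a handful of weights to billions is, in essence, the interpretability problem Altman alluded to.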
In May 2024, OpenAI released GPT-4o and announced that it is working on its next model, which is anticipated to take us closer to AGI.
This pattern of iterative development raises safety concerns, especially since OpenAI's safety team was disbanded. Altman has indicated the formation of a new safety and security committee.