AI: The Road Ahead

Surprisingly, Altman compares OpenAI to the Manhattan Project: both are treated as projects that require protection against catastrophic risks. Many scientists, however, are skeptical that AI will acquire world-ending capabilities anytime soon, or for that matter ever.

These skeptics argue that attention should instead be focused on nearer-term problems such as AI bias and toxicity. Sutskever, however, believes that AI, whether built by OpenAI or by someone else, could threaten humanity. OpenAI has set aside 20% of its compute for the superalignment team's research.

The team is currently developing a framework for the governance and control of such systems.

It is difficult to define superintelligence, let alone to judge whether a particular AI system has reached that level. The team's present approach is to use a less sophisticated model, such as GPT-2, to guide a more sophisticated one (GPT-4, in OpenAI's experiments) in the desired direction.

Research will also focus on a model's egregious behaviour. The weak model here stands in for human supervisors, who will eventually have to direct models far more capable than themselves; but can a primary-school student direct a college student? Even so, the weak-to-strong approach may lead to breakthroughs, particularly on hallucinations; a sketch of the setup follows.
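To make the setup concrete, here is a minimal sketch of weak-to-strong supervision, assuming a toy classification task stands in for language modelling. The "weak supervisor" (a small logistic regression) plays the role OpenAI assigns to GPT-2, and the "strong student" (a larger neural network) plays GPT-4; the dataset and model choices are illustrative assumptions, not OpenAI's code.

```python
# Toy sketch of weak-to-strong supervision. All names and the
# synthetic dataset are illustrative, not OpenAI's actual setup.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Ground-truth task that the weak supervisor only partially captures.
X, y = make_classification(n_samples=4000, n_features=40,
                           n_informative=10, random_state=0)
X_weak, X_rest, y_weak, y_rest = train_test_split(
    X, y, test_size=0.75, random_state=0)
X_student, X_test, y_student, y_test = train_test_split(
    X_rest, y_rest, test_size=0.4, random_state=0)

# 1. Train the weak supervisor on a small slice of real labels.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)

# 2. The weak supervisor labels fresh data; its labels are noisy.
weak_labels = weak.predict(X_student)

# 3. Train the strong student only on those imperfect weak labels.
strong = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500,
                       random_state=0).fit(X_student, weak_labels)

# Evaluate both against the ground truth the student never saw.
print(f"weak supervisor accuracy: {weak.score(X_test, y_test):.3f}")
print(f"strong student accuracy:  {strong.score(X_test, y_test):.3f}")
```

The open question the paragraph raises is visible in the last two lines: whether the student merely copies its supervisor's mistakes or generalizes beyond them.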

Internally, a model appears to register whether what it is saying is fact or fiction. Yet models are trained on human feedback, a thumbs up or a thumbs down, and are sometimes rewarded even for false statements. Research should enable us to summon a model's internal knowledge and use it to discriminate fact from fiction; this would reduce hallucinations.
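One way researchers have tried to operationalize this idea, offered here as a hedged sketch rather than OpenAI's method, is to train a linear probe on a model's hidden activations to separate true statements from false ones. The statements, the probe, and the choice of GPT-2 below are all illustrative assumptions.

```python
# Sketch of a "truthfulness probe" on hidden states. Illustrative
# only; the statements and probe are assumptions, not OpenAI's method.
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import LogisticRegression

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

statements = [
    ("The capital of France is Paris.", 1),
    ("The capital of France is Rome.", 0),
    ("Water freezes at 0 degrees Celsius.", 1),
    ("Water freezes at 50 degrees Celsius.", 0),
    ("The Earth orbits the Sun.", 1),
    ("The Sun orbits the Earth.", 0),
]

def last_token_state(text):
    # The hidden state of the final token is a cheap summary of the
    # model's representation of the whole statement.
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.last_hidden_state[0, -1].numpy()

X = [last_token_state(s) for s, _ in statements]
y = [label for _, label in statements]

# A linear probe: if activations encode fact vs. fiction, even a
# simple classifier can separate them.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy on its training statements:", probe.score(X, y))
```

Scoring the probe on its own training statements, as here, only demonstrates the mechanics; a real experiment would evaluate on held-out statements and topics.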

As AI reshapes our culture and society, it is necessary to align it with human values. Most important of all is the readiness to share such research publicly.
