The progress of AI technology is expected to benefit all of humanity. At the same time, the technology has the potential to harm the world. Some of those working in the field could act as whistleblowers, warning about the harm a particular AI model might cause. But who will listen to their whistle? There is, however, a promising shift towards better oversight. The AI Safety Institute (AISI), a safety body backed by the British government, has secured agreements from eight of the world's leading tech companies to safety-test their AI models before and after they are deployed to the public. Another watchdog is the US Federal Trade Commission.
The engineers and other employees working in the AI field could have expressed their concerns about the models they built, but they were gagged. They were granted stock options that vested only if they stayed with the company for a few years, and on leaving they had to agree to keep silent or forfeit the chance of becoming millionaires. The gag order kept them quiet.
OpenAI has recently stated that it would release most of its past employees from non-disparagement requirements. Such agreements need to be decoupled from compensation altogether. Anthropic has no such gag policy.
One step forward could be an online portal where AI engineers can submit their concerns; a rough sketch of what such a portal might look like follows.
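As a purely illustrative sketch, and assuming no particular regulator or platform, a minimal anonymous intake service might look like the following. The `/concerns` route, the field names, and the in-memory store are all hypothetical choices made for this example, not any existing body's API.

```python
# Hypothetical sketch of an anonymous concern-intake portal.
# Flask, the /concerns route, and all field names are illustrative
# assumptions, not any real regulator's system.
import secrets

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store for the sketch; a real portal would need encrypted,
# access-controlled storage and an independent review workflow.
CONCERNS: dict[str, dict] = {}

@app.route("/concerns", methods=["POST"])
def submit_concern():
    data = request.get_json(silent=True) or {}
    model = data.get("model", "").strip()
    description = data.get("description", "").strip()
    if not model or not description:
        return jsonify(error="'model' and 'description' are required"), 400

    # A random reference ID lets the submitter follow up later without
    # the portal recording anything that identifies them.
    ref = secrets.token_hex(8)
    CONCERNS[ref] = {"model": model, "description": description}
    return jsonify(reference=ref), 201

if __name__ == "__main__":
    app.run(port=8080)
```

A submitter would post a JSON body naming the model and describing the concern, and receive back only a reference number; the point of the design is that nothing about who filed the report is stored.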
The mechanisms for whistleblowing in AI have yet to evolve.