A car is driven by an AI-assisted system. In a particular situation, it predicts that an upcoming crash is unavoidable, and that sacrificing the occupants of the car would save many more lives. The decision must be made in seconds. Should the occupants be killed or saved?
Or consider an AI-monitored weapon system. In a war, it decides that killing a thousand people will save a million lives. The AI has to make that choice.
It is not easy to work out the preference ordering behind such decisions; even those who program these systems cannot do so. What goes on inside the black box of decision-making is not known.
AlphaZero, a chess-playing AI trained purely through machine learning, was simply fed the rules of the game and instructed to maximise wins. It was not trained on human games; it did the rest by playing against itself. It breached the limits of conventional play, making unusual sacrifices, including the Queen, to win.
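The idea of learning from nothing but the rules and a reward for winning can be illustrated in miniature. The sketch below is not AlphaZero (which combines deep networks with tree search); it is a toy self-play learner on a tiny game of Nim, invented here for illustration: take 1 or 2 stones from a pile, and whoever takes the last stone wins. The agent is given only the legal moves and a win signal, and improves purely by playing against itself.

```python
import random

def self_play_train(pile=7, episodes=20000, alpha=0.5, eps=0.1):
    """Tabular self-play learning on toy Nim: only the rules and a
    win/loss reward are supplied; the agent plays itself."""
    Q = {}  # Q[(stones_remaining, move)] -> estimated value for the mover

    def best(p):
        moves = [m for m in (1, 2) if m <= p]
        return max(moves, key=lambda m: Q.get((p, m), 0.0))

    for _ in range(episodes):
        p, history = pile, []
        while p > 0:
            moves = [m for m in (1, 2) if m <= p]
            # Epsilon-greedy: mostly play the current best move, sometimes explore.
            m = random.choice(moves) if random.random() < eps else best(p)
            history.append((p, m))
            p -= m
        # The player who took the last stone wins (+1); credit alternates
        # sign backwards through the game, since players alternate moves.
        reward = 1.0
        for state in reversed(history):
            Q[state] = Q.get(state, 0.0) + alpha * (reward - Q.get(state, 0.0))
            reward = -reward
    return Q, best

random.seed(0)
Q, best = self_play_train()
# Known optimal strategy: leave the opponent a multiple of 3 stones.
print(best(7), best(5))
```

After training, the learned policy matches the known optimal strategy (take 1 from a pile of 7, take 2 from a pile of 5), discovered without any human examples.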
Halicin is a new antibiotic: an MIT-designed AI programme identified, out of many candidate compounds, this drug molecule that worked against the target bacteria. The programme worked much like the chess-playing AI.
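The shape of such a screening pipeline can also be sketched. The following is not the actual Halicin system (which used a deep neural network on real molecular structures); it is a toy stand-in: random bit vectors play the role of molecular fingerprints, a hidden rule plays the role of true antibacterial activity, and a simple logistic-regression model trained on labelled examples ranks a large library of unseen candidates.

```python
import math
import random

random.seed(1)
N_BITS = 16  # stand-in for a molecular fingerprint length

def fingerprint():
    return [random.randint(0, 1) for _ in range(N_BITS)]

# Hidden ground truth for this toy example only: a compound is "active"
# against the target bacterium when its first two feature bits are set.
def is_active(fp):
    return fp[0] == 1 and fp[1] == 1

# Labelled training set of known compounds.
train = [fingerprint() for _ in range(400)]
labels = [1 if is_active(fp) else 0 for fp in train]

# Plain logistic regression fitted by gradient descent.
w = [0.0] * N_BITS
b = 0.0
for _ in range(300):
    for fp, y in zip(train, labels):
        z = sum(wi * xi for wi, xi in zip(w, fp)) + b
        p = 1 / (1 + math.exp(-z))
        g = p - y
        w = [wi - 0.1 * g * xi for wi, xi in zip(w, fp)]
        b -= 0.1 * g

def score(fp):
    """Predicted probability that a candidate is active."""
    z = sum(wi * xi for wi, xi in zip(w, fp)) + b
    return 1 / (1 + math.exp(-z))

# Screen a large library of unseen candidates and pick the top hit.
library = [fingerprint() for _ in range(1000)]
top = max(library, key=score)
print(is_active(top))
```

The point of the sketch is the workflow, not the model: learn a pattern of activity from known compounds, then let the programme search a space far larger than any human screen could cover.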
It is uncertain how these programmes achieved their goals.