AI Is Not Perfect

AI has evolved into generative AI, which can create new content such as text, images, audio and video. Further, many of the latest AI systems also rely on deep reinforcement learning. What does this mean? Unlike previous generations of algorithms, we do not have to teach the system the strategies needed to accomplish a task. We feed it only the basic rules and a historical data set, and the algorithm learns an effective strategy on its own. Google's AlphaGo beat the world champion at the board game Go, yet it was never programmed with specific strategies to achieve this. It learnt by studying older matches and by playing thousands of games against itself.
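To make the idea concrete, here is a minimal sketch of learning by self-play on a much simpler game than Go. Everything in it is hypothetical and for illustration only: the game is Nim (take one to three sticks; whoever takes the last stick wins), and the learning rule is a simple self-play value update, not Google's actual algorithm.

```python
import random
from collections import defaultdict

# Q[(sticks_left, move)] -> learned value of taking `move` sticks in that position
Q = defaultdict(float)
ALPHA, EPSILON, GAMES = 0.5, 0.1, 50_000

def best_move(sticks, explore=False):
    """Pick a move: mostly the best-known one, occasionally a random one."""
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if explore and random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(sticks, m)])

for _ in range(GAMES):
    sticks, history = 21, []              # every game starts with 21 sticks
    while sticks > 0:
        move = best_move(sticks, explore=True)
        history.append((sticks, move))
        sticks -= move
    # The player who took the last stick wins. Walk back through the
    # alternating moves, rewarding the winner's moves and penalising the
    # loser's, so both sides of the self-play learn from every game.
    reward = 1.0
    for state, move in reversed(history):
        Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
        reward = -reward

# The learned policy ends up leaving the opponent a multiple of four sticks,
# the known winning strategy, even though that rule was never supplied.
print({s: best_move(s) for s in (5, 6, 7)})
```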

The current generation of AI algorithms works like a black box. These systems produce the desired outcomes, but offer no explanation of how they arrived at them. Even the developers who build these models do not know the exact decision-making process. Facial recognition software may identify a face correctly, yet it is difficult to say exactly how it does so.

An AI system may predict that a person is likely to suffer a stroke based on medical scans, but we do not know the exact process behind that conclusion. An autonomous vehicle may choose to collide with a person in order to avoid a collision with a truck, and the logic behind that choice is obscure.

An AI system may not be free from bias. Facial recognition systems have shown poor accuracy for younger Black women. Certain words in a resume may bias an algorithm used for candidate selection. Bias enters algorithms through their training data, which can reflect historical or social inequities.
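One way such bias is surfaced after the fact is by comparing a model's accuracy across demographic groups. The sketch below uses invented predictions and placeholder group names purely for illustration; a real audit would use a properly labelled evaluation set.

```python
from collections import defaultdict

# (group, predicted_label, true_label) triples from some hypothetical model
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 0, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in predictions:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# A large gap between groups (here 75% vs 25%) is a signal that the training
# data or the model is treating one group worse than another.
```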

We tend to assume that AI decisions are objective and fair. But unless we know how those decisions are arrived at, we cannot accept them without an informed discussion.

AI systems are also known to hallucinate. AI hallucinations are defined as the generation of nonsensical output, or output that is unfaithful to the source input. These hallucinations can be intrinsic or extrinsic. Intrinsic hallucinations contradict the input; for example, a summary that changes a date stated in the source. Extrinsic hallucinations are not supported by the input at all; for example, a summary that invents a detail the source never mentions. In either case, the hallucinated output is presented with confidence and fluency.
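The distinction can be made concrete with a toy check, shown below. This is not a real hallucination detector: the source and summary strings are invented, and the checks are crude string comparisons, whereas real systems rely on far more capable natural-language-inference models.

```python
import re

source = "The bridge opened in 1932 and carries road traffic across the harbour."
summary = "The bridge opened in 1923 and was designed by a French architect."

# Intrinsic hallucination: the summary states a fact that contradicts the source.
source_year = re.search(r"\b(\d{4})\b", source).group(1)
summary_year = re.search(r"\b(\d{4})\b", summary).group(1)
if summary_year != source_year:
    print(f"Intrinsic: summary says {summary_year}, source says {source_year}.")

# Extrinsic hallucination: the summary states a claim the source never supports.
claim = "designed by a French architect"
if claim not in source:
    print(f"Extrinsic: '{claim}' is not supported by the source.")
```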

There are reasons for such flaws. Encoders may learn spurious correlations from the training data. The data itself may carry biases. Decoders may attend to the wrong parts of the input, or sample with too much randomness. And some inaccuracies cannot be attributed to the knowledge or intention of the AI or its developers.
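The effect of decoding randomness is easy to demonstrate. In the sketch below, the candidate words and their scores are invented for illustration; the point is only that a higher sampling temperature flattens the distribution and makes implausible continuations more likely.

```python
import math
import random

def sample(scores, temperature):
    """Softmax-with-temperature sampling over candidate next words."""
    words = list(scores)
    weights = [math.exp(scores[w] / temperature) for w in words]
    return random.choices(words, weights=weights)[0]

# Hypothetical scores for the word after "The capital of France is"
scores = {"Paris": 5.0, "Lyon": 2.0, "Mars": 0.5}

for temperature in (0.2, 1.0, 2.0):
    draws = [sample(scores, temperature) for _ in range(10_000)]
    share_wrong = 1 - draws.count("Paris") / len(draws)
    print(f"temperature={temperature}: wrong answer {share_wrong:.1%} of the time")
# At low temperature the model almost always says "Paris"; at high temperature
# implausible continuations such as "Mars" appear far more often.
```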

AI technologies can be classified along a spectrum from low-risk to those that are prohibited outright, and AI legislation should deal with each category accordingly.
