Socially Beneficial AI

A couple of decades of AI-enhanced growth will leave the world a changed place. AI is among the most transformative innovations, and hence among the most disruptive. Still, much of the exuberance over AI is premature. It is easy to get swept away by the excitement it has generated and the intellectual achievement that accompanies it.

AI’s full implications are yet to be understood. The goal is to build AI models that express themselves as well as humans do, and this works well for marketing AI. But there are two distinct issues: AI as a tool that facilitates decision-making, and AI as a decision-maker in its own right. The two are worlds apart.

It remains to be seen how good a decision-maker AI will prove to be, but one problem is that AI models hallucinate. Even the people who design these models do not fully understand why they go wrong.

AI cannot settle such questions by factual judgement alone; they are ultimately matters of values. AI handles this complication by attaching values to actions or to consequences. The model infers these values either from consensus, drawn from the data it is trained on, or from instructions issued by its users and designers. Neither source carries any ethical authority.

AI’s arrival is also ill-timed, as the distinction between facts and values has itself become blurred these days, and objectivity is difficult to define.

These ideas corrode the foundations of what AI claims to know. Designers add a further push toward cultural realignment.

AI models, the companies believe, must be socially beneficial, whatever their reasoning power or their instructions might suggest. A model is thus forced to choose between what is true and what is socially beneficial. Because AI is supposedly so smart, its ‘truth’ is expected to be ‘gospel truth’. Yet models such as Gemini do not live up to this maxim.
