Amazon used a recruitment algorithm in 2015. It was reported to be biased in favour of men and against women. The model had been trained on resumes submitted over the previous decade, most of which came from men, so the bias entered automatically through the training data.
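The mechanism behind such bias can be illustrated with a minimal, entirely hypothetical sketch: a toy word-frequency "screener" trained on invented historical resumes in which most hires were men. No real data or real system is reproduced here; the resumes, words, and scoring rule are all assumptions made up for illustration.

```python
from collections import Counter

# Hypothetical historical resumes: (words, was_hired).
# The skew: resumes containing the word "women's" were never hired.
history = [
    (["chess", "club", "captain"], True),
    (["football", "team"], True),
    (["robotics", "lead"], True),
    (["women's", "chess", "club"], False),
    (["women's", "robotics", "team"], False),
]

# "Train": count how often each word co-occurs with a hire.
hired_counts, total_counts = Counter(), Counter()
for words, hired in history:
    for w in words:
        total_counts[w] += 1
        if hired:
            hired_counts[w] += 1

def hire_score(words):
    """Average historical hire rate of the resume's words."""
    rates = [hired_counts[w] / total_counts[w]
             for w in words if w in total_counts]
    return sum(rates) / len(rates) if rates else 0.5

# "women's" never co-occurred with a hire in the skewed history,
# so any resume containing it is scored down automatically.
print(hire_score(["chess", "club"]))             # 0.5
print(hire_score(["women's", "chess", "club"]))  # ~0.33, penalised
```

Nobody programmed a rule against women here; the penalty emerges purely from the historical imbalance, which is the pattern the Amazon case reportedly exhibited.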
Biases are not limited to gender. COMPAS, a risk-assessment tool used by lower courts in the US, estimated the probability that an offender would commit another crime. Researchers found that it was biased against African Americans.
In October 2020, a healthcare algorithm was shown to favour Caucasian patients over African American patients.
These examples indicate the need for some form of ethical training of AI software. India's NITI Aayog has released a draft document on creating Responsible AI mechanisms. It encourages the adoption of AI/ML, but its discussion of the ethical aspects is muted.
An oversight body could be created, though self-regulation remains the best option, possibly supplemented by sector-specific regulation. Some issues, such as black-boxing, fall outside the domain of regulation altogether. Research should remain unrestricted so that human prejudices do not seep into AI products, and ethicists and social researchers should be roped into the process.