Biased Bots

In recruitment and selection, employers in countries such as the US increasingly use some form of AI to screen and rank candidates. Several Black candidates have reported bias against them by these algorithms, and bias has also been observed against candidates with disabilities and those over 40. One algorithm penalised CVs in which the word ‘women’s’ appeared.

Many of these AI tools have been shown to intrude unduly on workers’ privacy and to discriminate against women, people with disabilities and people of colour.

Federal agencies in the US are examining potential discrimination arising from the datasets used to train these AI systems, and from opaque ‘black box’ models that make it difficult to exercise anti-bias diligence.
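What might such anti-bias diligence look like in practice? A common starting point in US employment-selection analysis is the EEOC’s ‘four-fifths’ guideline, which flags a tool when the selection rate for one group falls below 80 per cent of the rate for the most-favoured group. The sketch below, using entirely synthetic data and a hypothetical `adverse_impact_ratio` helper, illustrates the arithmetic; it is not a substitute for a full audit of a real screening system.

```python
import pandas as pd

def adverse_impact_ratio(df, group_col, selected_col, reference_group):
    """Selection rate of each group divided by the reference group's rate.
    Under the EEOC four-fifths guideline, a ratio below 0.8 suggests
    potential adverse impact and warrants closer review."""
    rates = df.groupby(group_col)[selected_col].mean()
    return rates / rates[reference_group]

# Hypothetical outcomes from an AI resume screener (synthetic data):
# 100 male and 100 female applicants; 60 men and 40 women advanced.
data = pd.DataFrame({
    "gender":   ["M"] * 100 + ["F"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})

print(adverse_impact_ratio(data, "gender", "selected", reference_group="M"))
# F: 0.40 / 0.60 ≈ 0.67, which is below 0.8, so this screener would be
# flagged for further scrutiny under the four-fifths guideline.
```

A check like this is only a first filter: it detects disparate outcomes, not their cause, and it says nothing about the opaque model internals mentioned above.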

Is this ‘responsible AI’? Can we allow unrestricted automation in the recruitment and selection market? The real issue is how to regulate the use of AI in hiring and guard against algorithmic bias.
