ImageNet Roulette: Paglen and Crawford

AI is not free of bias. The photo-matching application above draws on a decade-old database used to train machine learning systems: uploaded photos are analysed by an AI trained on ImageNet, the most widely used image-recognition database.

The classifications turn on attributes such as skin colour, gender and race. Some results are outrageous: "wrongdoer" or "offender" for a dark-skinned man, "jihadist" for an Asian woman. These are bizarre categories, and surfacing them is deliberate, to show what happens when technical systems are trained on problematic data. Ordinarily, such classifications are never shown to the people being classified.
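To make the mechanics concrete, here is a minimal sketch of what such a photo-matching step might look like, using a classifier pretrained on ImageNet as exposed by torchvision (the 1,000-category ILSVRC subset, not the full person taxonomy ImageNet Roulette drew on; the file name is a placeholder). The point is simply that the predictions can only come from the labels in the training taxonomy, so whatever bias is baked into those categories surfaces directly in the output.

import torch
from torchvision import models
from PIL import Image

# Classifier pretrained on ImageNet (the 1,000-class ILSVRC subset).
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()

# Standard ImageNet preprocessing bundled with the pretrained weights.
preprocess = weights.transforms()

# "uploaded_photo.jpg" stands in for the user's uploaded image.
img = Image.open("uploaded_photo.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# The only possible outputs are the labels the dataset was annotated with.
top = probs.topk(5)
for p, idx in zip(top.values, top.indices):
    print(f"{weights.meta['categories'][idx]}: {p.item():.3f}")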

The ImageNet database contains some 14 million images, classified into roughly 22,000 categories.

Other image-recognition systems, from Microsoft and Google, have similar problems: they have failed to correctly identify Serena Williams and Obama.

The bias produces unpalatable results.

In principle, the bias could be removed by perfecting the database. In practice, that is a pious hope: here, a system 'trains itself' on whatever data it is given. How, then, can the bias be addressed?
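As one hedged illustration of what 'perfecting the database' would even involve, a first step is auditing how labels co-occur with annotated attributes, so that skewed or troubling categories can at least be surfaced before a system trains itself on them. The annotations and threshold below are invented for the sketch; a real audit would read the dataset's own metadata, which ImageNet largely does not provide.

from collections import Counter, defaultdict

# Hypothetical (label, attribute) annotations, made up for illustration.
annotations = [
    ("ceo", "male"), ("ceo", "male"), ("ceo", "female"),
    ("nurse", "female"), ("nurse", "female"),
]

by_label = defaultdict(Counter)
for label, attribute in annotations:
    by_label[label][attribute] += 1

# Flag categories whose examples skew heavily toward one group.
for label, counts in by_label.items():
    total = sum(counts.values())
    group, n = counts.most_common(1)[0]
    if n / total > 0.6:  # arbitrary threshold for this sketch
        print(f"'{label}' is skewed: {n} of {total} images annotated {group}")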
