AI/ML has found applications across organisations of every kind. However, AI is a double-edged sword: the models themselves can be attacked through data poisoning.
Such poisoning targets the ML subset of AI: the data used to train the machines is corrupted so that it misleads the learning algorithms.
Computers are trained to categorise information from voluminous data, say, labelled images of animals from which the system must learn to recognise a rabbit. The system may never have seen a particular picture of a rabbit before, but once it is shown enough labelled images of different animals, it acquires the ability to recognise a rabbit in new images.
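To make this concrete, here is a minimal sketch of supervised training using scikit-learn. The synthetic features are a hypothetical stand-in for real animal images; the point is only that a model trained on enough labelled examples can classify samples it has never seen.

```python
# A minimal sketch of supervised classification: the model never sees the
# test samples during training, yet learns the class from enough labelled
# examples. Synthetic features stand in for image pixels.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for labelled animal images (class 1 = "rabbit").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen samples:", model.score(X_test, y_test))
```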
Accurate prediction relies on a large number of samples: the more voluminous and diverse the data, the better the chances of a correct prediction.
Professional hackers manipulate this data by labelling it incorrectly, which tricks the AI/ML system. Such tampering with the data used to train machines can slip past even AI-powered defences.
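The simplest form of this attack is label flipping. The sketch below, reusing the synthetic setup from earlier, shows an attacker inverting a fraction of the training labels; the model trained on the tampered data performs measurably worse than one trained on clean data.

```python
# A sketch of label-flipping poisoning: an attacker inverts a fraction of
# the training labels, degrading the model trained on the tampered data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[flip] = 1 - poisoned[flip]  # invert 30% of the training labels

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print("trained on clean data:   ", clean_model.score(X_test, y_test))
print("trained on poisoned data:", dirty_model.score(X_test, y_test))
```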
Threat actors corrupt the data by introducing malicious code that labels it incorrectly. Organisations must therefore ensure clean data by verifying that every label fed into the machine is correct. A second layer of AI/ML algorithms can be deployed to detect errors in the training data. One should also be careful about sample size: the smaller the sample, the easier it is to verify that the data is clean.
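One way such a second layer could work, sketched here under the same synthetic setup, is to have an independent model score the training set with out-of-fold predictions and flag samples whose given label strongly contradicts the model's confident prediction, so a human can review them before training. The threshold of 0.9 is an illustrative assumption, not a standard value.

```python
# A sketch of a second-layer check: out-of-fold predictions from an
# independent model flag samples whose given label looks suspicious.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Probability each sample belongs to class 1, estimated out-of-fold so the
# checker never scores data it was trained on.
proba = cross_val_predict(
    RandomForestClassifier(random_state=0), X, y, cv=5, method="predict_proba"
)[:, 1]

# Flag labels that strongly contradict the checker's confident prediction.
suspect = ((y == 1) & (proba < 0.1)) | ((y == 0) & (proba > 0.9))
print("samples flagged for human review:", int(suspect.sum()))
```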
AI cybersecurity is an important area, and organisations must be proactive in pursuing it.