AI is being deployed in customer service, therapy, clinical documentation, spam detection, HR, coding, sentiment analysis, content generation, image generation and so on.
With the increasing use of AI, we face threats such as misinformation and copyright violation. Bad actors can spread misinformation or fake news, which can create conflict and destabilise societies. The natural-language fluency these models possess makes them extremely effective at producing high-quality misinformation.
AI models can confidently produce nonsensical and factually incorrect answers when prompted appropriately. They can also imitate the writing style of well-known figures to lend their output credibility. There are guardrails, of course, but these can be bypassed.
Under copyright law, creators are entitled to the exclusive benefits of their creative work. AI models are trained on copyrighted data, and it is argued that the doctrine of fair dealing permits this. However, when billions of pages of content are scraped from the web for commercial gain, it is doubtful whether the fair dealing doctrine applies.
Getty Images, a stock-image company, and a trio of artists have filed suits in the US against a generative AI company for scraping their content without permission and using it for profit. There is also the question of whether the output of an AI model is itself copyrightable, given the human involvement in training and fine-tuning the model.
Legislation is one way out; litigation is another. However, legislators have not yet grasped the full implications of this technology, and litigation is lengthy, cumbersome and costly.
The whole environment surrounding AI is full of uncertainties. There are guidelines and strategy papers, but these are non-binding.
At present, we rely on self-regulation by the AI industry. The risk that generative AI dehumanises civilisation can be tackled only by the right combination of legal devices and self-regulation.