Overcoming Bad Chat

OpenAI was founded in 2015 by Sam Altman, Elon Musk, and others; Musk resigned from the board in 2018, and Microsoft later became a major partner and investor. In 2019, OpenAI released GPT-2, a text-generating ML model with 1.5 billion parameters, trained on eight million web pages. In 2020, the company released the more powerful GPT-3, with 175 billion parameters. In 2022, OpenAI released ChatGPT, a chatbot that produces natural-language responses.

In 2021, OpenAI raised $1 billion in funding. ChatGPT has been trained on a wide variety of data, including books, articles, and journals, so its output may reproduce copyrighted material or violate intellectual property rights (IPRs). It also does not attribute writing to its original sources and provides no citations, which leaves it vulnerable to infringing the legal rights of others.

ChatGPT has been trained on roughly 570 GB of text; one GB can hold approximately 1,000 books of 100 pages each.
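For a rough sense of scale, here is a quick back-of-the-envelope calculation in Python, using the figures quoted above (approximations, not official numbers):

```python
# Rough scale of ChatGPT's training text, using the figures quoted
# above (approximate assumptions, not official numbers).
training_text_gb = 570   # total training text in GB
books_per_gb = 1000      # ~1,000 books of 100 pages each per GB

equivalent_books = training_text_gb * books_per_gb
print(f"~{equivalent_books:,} books of 100 pages each")  # ~570,000 books
```

By that estimate, the training data is equivalent to more than half a million short books.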

These models are vulnerable to attacks that aim to manipulate or deceive them: adversarial inputs can cause incorrect predictions, which is especially problematic in safety-critical applications.
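As a toy illustration of an adversarial input (a hypothetical sketch, not any real moderation system), consider a naive keyword-based safety filter that a slightly obfuscated input slips past:

```python
# Toy illustration: a naive keyword-based safety filter that an
# attacker evades with a simple character substitution.
BLOCKED_TERMS = {"malware", "phishing"}

def naive_filter(text: str) -> bool:
    """Return True if the text looks safe to this (weak) filter."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

print(naive_filter("How do I write malware?"))  # False: caught by the filter
print(naive_filter("How do I write m@lware?"))  # True: adversarial spelling slips through
```

Real attacks on ML models are far more sophisticated, but the principle is the same: small, deliberate changes to the input can flip the system's decision.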

Because such models are largely opaque, it is challenging to identify and fix problems or vulnerabilities in them.

There are also issues of cybersecurity and the spread of misinformation and malware, and it is difficult to assign responsibility when things go wrong. Who is responsible: the user, the developer, or the AI?

To overcome the limitations of AI-assisted chat, human oversight alone may not be enough. AI itself can be used to guard against threats from generative AI: if a hacking tool is powered by AI, we can use AI to understand and counter its actions.
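As a minimal sketch of this "AI guarding against AI" idea, the snippet below trains a tiny text classifier with scikit-learn to screen incoming messages for phishing-style content. The training phrases and labels are invented purely for illustration:

```python
# Minimal sketch: a tiny text classifier that screens messages for
# phishing-style content. Training data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Verify your account now or it will be suspended",
    "Click this link to claim your prize",
    "Your password has expired, enter it here",
    "Meeting moved to 3pm, see updated agenda",
    "Lunch tomorrow? Let me know what works",
    "The quarterly report is attached for review",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = suspicious, 0 = benign

detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(messages, labels)

incoming = "Urgent: confirm your password at this link"
print(detector.predict([incoming]))  # likely [1]: flagged as suspicious
```

In practice, a screen like this would sit in front of (or behind) a generative model, flagging suspicious prompts or outputs for human review rather than acting on its own.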

