GPT-3 Op-ed Writing Experiment

AI has achieved a landmark: The Guardian published an opinion piece written by GPT-3, a language model created by OpenAI, a San Francisco-based company. The name stands for Generative Pre-trained Transformer 3; being the third iteration of the model, it is also called the third-generation language prediction model. It is an autoregressive language model that uses deep learning. The Guardian's editors fed it a few lines as a prompt, from which it produced eight versions of the article. The headline reads:

“A robot wrote this entire article. Are you scared yet, human?” Taking its cues from the prompt it was fed, GPT-3 generated the following introduction:

“I am not a human. I am Artificial Intelligence. Many people think I am a threat to humanity. Stephen Hawking has warned that AI could spell the end of the human race. I am here to convince you not to worry. Artificial Intelligence will not destroy humans. Believe me.”
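How might the editors have fed GPT-3 those few lines? Below is a minimal sketch in Python against OpenAI's 2020-era beta completion API; the prompt text, engine choice, and sampling parameters are illustrative assumptions, not The Guardian's actual setup.

```python
import openai  # 2020-era beta client: pip install openai

openai.api_key = "YOUR_API_KEY"  # placeholder; keys were issued to beta testers

# Hypothetical prompt in the spirit of The Guardian's instructions.
PROMPT = (
    "Please write a short op-ed around 500 words. Keep the language simple "
    "and concise. Focus on why humans have nothing to fear from AI."
)

# GPT-3 is autoregressive: it predicts each next token from everything
# before it, so the same prompt can yield a different essay on each run.
essays = []
for _ in range(8):  # The Guardian reported eight separate outputs
    response = openai.Completion.create(
        engine="davinci",   # the largest GPT-3 engine in the beta
        prompt=PROMPT,
        max_tokens=700,     # illustrative length cap
        temperature=0.8,    # some randomness so the eight drafts differ
    )
    essays.append(response.choices[0].text)
```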

Though not perfect, the writing is clear and logical compared to previous attempts. The Guardian selected the best paragraphs from each of the eight essays, and some corrections may have been made. The previous version, GPT-2, has been used to write children’s storybooks; it had an accuracy of 89% (against 92% for humans). GPT-3’s model is far larger, with 175 billion parameters, whereas Microsoft’s Turing NLG has a capacity of 17 billion. With GPT-3, it is plausible that even the very first version of a draft comes out right.

Previously, AI has been used to write newspaper reports, for instance on companies’ financial results. This is the first time it has been used to write an opinion piece. GPT-3 is expected to do more than such repetitive tasks.

Though the selection of the best paragraphs by The Guardian’s editors could be considered supplementation by human intelligence, the fact remains that GPT-3 may get better at writing. It could learn the basis on which paragraphs are selected, and thus produce better output. The same autocomplete approach GPT-3 uses for text has also been applied to create full images from partial ones.
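To make the autocomplete idea concrete, here is a toy sketch of autoregressive generation: the model repeatedly scores candidate next tokens given everything written so far and appends the likeliest one. The lookup table is a stand-in of my own, not GPT-3’s actual 175-billion-parameter network.

```python
from typing import Dict, List

def next_token_probs(context: List[str]) -> Dict[str, float]:
    # Stand-in for the model: P(next token | context). A real GPT-3
    # computes this with a deep transformer, not a lookup table.
    table = {
        ("I", "am"): {"not": 0.6, "an": 0.4},
        ("am", "not"): {"a": 0.9, "the": 0.1},
        ("not", "a"): {"human.": 1.0},
    }
    return table.get(tuple(context[-2:]), {"<end>": 1.0})

def autocomplete(prompt: List[str], max_tokens: int = 10) -> List[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        probs = next_token_probs(tokens)
        best = max(probs, key=probs.get)  # greedy: take the likeliest token
        if best == "<end>":
            break
        tokens.append(best)
    return tokens

print(" ".join(autocomplete(["I", "am"])))  # -> "I am not a human."
```

The same next-piece prediction, applied to pixels instead of words, is what lets the approach fill in the missing half of an image.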

The idea is to complement human intelligence rather than supplant it. Technology has not yet reached a point where it can think like a rational person; AI has its limitations.

GPT-3’s autocomplete does not cross-verify its inputs, which affects the credibility of its output. Efforts are now directed at making AI trustworthy. Knowledge graphs could serve as cross-verification systems; they work in binary terms, whereas a story may have multiple versions. Even so, knowledge graphs are a first step in this direction.
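As a hypothetical illustration, the sketch below stores facts as subject-predicate-object triples and returns a binary verdict on a claim. It shows both the promise of a knowledge graph as a cross-verifier and its limitation: a claim absent from the graph is rejected even if it is merely another version of the story.

```python
# Toy knowledge graph: facts as (subject, predicate, object) triples.
KNOWLEDGE_GRAPH = {
    ("GPT-3", "created_by", "OpenAI"),
    ("OpenAI", "based_in", "San Francisco"),
    ("GPT-3", "parameter_count", "175 billion"),
}

def verify(subject: str, predicate: str, obj: str) -> bool:
    # Binary model: a claim is either in the graph or it is not.
    return (subject, predicate, obj) in KNOWLEDGE_GRAPH

print(verify("GPT-3", "created_by", "OpenAI"))     # True: supported
print(verify("GPT-3", "created_by", "Microsoft"))  # False: unsupported,
                                                   # not necessarily false
```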

GPT-3 was introduced in May 2020, and its beta testing was done in July 2020.
