In a recent study of medical research, abstracts generated by ChatGPT sailed through a plagiarism checker. An AI-output detector flagged 66 per cent of the generated abstracts, while human reviewers correctly identified only 68 per cent of them: they mistook 32 per cent of the generated abstracts for real ones and wrongly flagged 14 per cent of genuine abstracts as generated.
Even experts, then, were not up to the task. The arrival of ChatGPT raises ethical issues for researchers, since much of its output is difficult to distinguish from human-written text.