The healthcare industry is seeing remarkable technological progress, and several new tools have emerged that could significantly influence scientific research. One notable example is RTutor, which pairs the R programming language with GPT-3. Users describe their data and the desired procedure in plain language, and the tool generates the corresponding R code and runs the analysis. Even users with no coding skills can move from raw data to a complete analysis in seconds.
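
As an illustration of the kind of workflow RTutor enables, the sketch below shows how a plain-language analysis request could be turned into R code through a chat-completion API. This is not RTutor's actual implementation; the model name, the prompt wording, and the trial_data.csv file are all assumptions, and any generated script would still need human review before it is run.

```python
# Minimal sketch of a natural-language-to-R-code workflow in the spirit of RTutor.
# NOT RTutor's actual implementation: the model, prompts, and file name are assumptions.
# Requires the OpenAI Python client (pip install openai) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Plain-language description of the data and the desired procedure.
request = (
    "The data frame has columns 'treatment' (A or B), 'age', and 'outcome' (numeric). "
    "Write R code that reads trial_data.csv, summarises outcome by treatment, "
    "and fits a linear model of outcome on treatment adjusted for age."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You translate plain-language analysis requests into runnable R code."},
        {"role": "user", "content": request},
    ],
)

# The reply is an R script; a human still has to check and execute it.
print(response.choices[0].message.content)
```

The point of the design is simply that the user supplies a description rather than code; everything between that description and the finished analysis is generated.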

 

In particular, the latest version of OpenAI's generative pre-trained transformer model, GPT-3, exemplified by ChatGPT, can produce convincingly well-written and plausible content. ChatGPT can create an entire manuscript: all you need to do is ask basic questions on a topic and request expansions of its generated statements, gradually building up content until it yields a complete research article. After generating the text, you can organise the responses and remove redundancy. The output is a coherent, grammatically accurate, and unexpectedly insightful article, ready in under 30 minutes. ChatGPT can even produce citations, and although they may be fabricated, they appear quite genuine. Midjourney, an AI image-generation tool, can create figures and graphs, making the manuscript look even more credible. But the question is: is that research article really authentic and credible?
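
The "ask, then expand" loop described above can even be scripted. The sketch below is purely illustrative, assuming the OpenAI Python client; the topic, section names, and model are invented, and the point is only how quickly text accumulates, not that the result is trustworthy.

```python
# Illustrative sketch of iteratively prompting a chat model to build up sections of text.
# The topic, section list, and model are assumptions; accuracy of the output is not guaranteed.
from openai import OpenAI

client = OpenAI()

topic = "artificial intelligence in critical care research"  # hypothetical topic
sections = ["Introduction", "Methods", "Discussion"]
draft = []

for section in sections:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption
        messages=[
            {"role": "user",
             "content": (
                 f"Write the {section} section of a short article on {topic}. "
                 "Expand on the earlier sections without repeating them:\n\n"
                 + "\n\n".join(draft)
             )},
        ],
    )
    draft.append(response.choices[0].message.content)

# A superficially complete draft emerges in minutes; nothing guarantees it is true.
print("\n\n".join(draft))
```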

 

The rapid progress in AI and natural language processing offers valuable opportunities for greater scientific efficiency. Nonetheless, this advancement has a worrying side: it enables the creation of convincingly fraudulent text, raising concerns about a potential surge in scientific fraud, a "pyrite" (fool's gold) era of deceit. To counter this, researchers, publishers, and reviewers must remain vigilant and implement measures to uphold the credibility of published research.

 

Journals must take a practical and direct approach by promptly screening all submitted manuscripts for AI-generated material. This can be accomplished using existing tools like GPTZero or Originality.AI, similar to current plagiarism checks. Acknowledgment of AI involvement in manuscript creation should be required, akin to how software tools and editing services are recognised.
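
GPTZero and Originality.AI are commercial services, and their internals and APIs are not reproduced here. As a rough illustration only, the sketch below computes perplexity under a small open language model, one of the signals such detectors are commonly described as using; unusually low perplexity can hint at machine-generated prose, though it is far from conclusive.

```python
# Rough illustration of a perplexity-based heuristic associated with AI-text detection.
# This is NOT how GPTZero or Originality.AI work internally; it only shows the general idea.
# Requires: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; unusually low values can hint at machine-generated prose."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs, labels=inputs["input_ids"])
    return float(torch.exp(outputs.loss))

abstract = "We report a convincingly well-written but entirely synthetic abstract."  # hypothetical input
print(f"Perplexity: {perplexity(abstract):.1f}")  # where to set a flagging threshold is an editorial decision
```

In practice, journals would rely on the vendors' own interfaces rather than a home-grown score, but the screening step would sit in the submission pipeline in the same place as existing plagiarism checks.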

 

People in healthcare and healthcare publishing must understand that AI tools should enhance human work, not replace it entirely. While AI can make tasks more efficient, qualities like creativity, critical thinking, and ethical judgment remain essential to research.

 

By harnessing AI's advantages while staying alert to its drawbacks, healthcare can usher in a new phase of scientific progress. Less time spent on formatting, statistical complexities, and administrative tasks leaves more time for thinking about research questions, analysing results, and caring for patients. But fabricated research and fraudulent text demand vigilance: with so many AI tools now at people's disposal, there is no telling who may claim what in their manuscripts.

 

Source: CHEST



References:

Kammar MN (2023) A Case Study in Artificial Intelligence-Generated Manuscripts. Chest. 164(2):478-480.



