Abstract
Artificial intelligence (AI) has recently made an abrupt entrance into our daily
lives. Freely accessible, it constitutes a disruptive technology that is steadily
shaking the established foundations of our society and is thought to be here to stay.
ChatGPT, the prominent free chatbot based on natural language processing, was released
in November 2022 by OpenAI, swiftly storming the internet and prompting users to apply
it to a seemingly unlimited range of purposes. AI specialists consider that ChatGPT has
the potential to revolutionize how users interact with chatbots and AI as a whole[1,2].
Artificially intelligent computer systems are used extensively in medical sciences.
Currently, the most common roles for AI in medical settings are clinical decision
support and imaging analysis. Common applications for AI include disease detection
and
diagnosis, personalized disease treatment, accelerated drug discovery and development,
telemedicine, improving patient safety with error reduction, improving communication
between physician and patient, transcribing medical documents, remotely treating
patients, and others[3].
Not surprisingly, ChatGPT has been used by researchers to generate content for academic
publishing. The AI tool has been the subject of recommendations on its use in
scientific writing, as stated by the International Committee of Medical Journal Editors
(ICMJE)[4], and additionally in manuscript peer review, as indicated by the World
Association of Medical Editors (WAME)[5].
Consistent with the announcement, at submission authors are required by the journal
to disclose whether AI-assisted technologies were used in producing the submitted
manuscript. Authors should describe how chatbots have been used; chatbots should not be
included in the authorship because they cannot be responsible for the work's accuracy,
integrity, and originality, leaving humans entirely accountable for the submitted
material. Careful review of the submitted content is particularly stressed, given the
possibly incorrect, incomplete, or biased information generated by the chatbot.
The statements from the ICMJE[4] and WAME[5] recognize the potential of AI language
models in scientific manuscript writing. They may indicate a shift in how scientific
writing is approached and a recognition of AI language models as valuable tools in the
research process.
Using AI language models in scientific manuscript writing can offer several advantages.
These models can assist researchers in generating high-quality drafts and offering
suggestions for content organization, grammar, and style. They can help streamline the
writing process by providing a starting point or helping overcome writer's block.
Additionally, AI language models can potentially improve manuscript quality by
identifying inconsistencies, errors, or gaps in the content.
However, AI models may inadvertently reproduce biases or inaccuracies present in their
training data. Researchers should be cautious and critically evaluate the content
generated by these models to ensure scientific accuracy, consistency, and adherence to
ethical standards. AI models are tools and should not replace the expertise and
judgment of human researchers[6,7].
Likewise, reviewers should disclose to journals whether and how AI technology has been
used to facilitate their review. Reviewers are reminded that AI can generate possibly
incorrect, incomplete, or biased material, reinforcing that the human factor remains
essential for completing the review process[8].
Additionally, ethical concerns apply to AI. Massive amounts of data must be gathered to
effectively train and use AI, which may come at the cost of patient privacy in most
cases. Bias is another concern: since AI makes decisions solely on the basis of the
data it receives as input, those data must accurately represent the underlying
information.
It is reasonable to expect that the processes involving AI language models in
scientific manuscript writing will continue to evolve and be refined and improved over
time, representing an opportunity for scientists to simplify their research process and
produce high-quality, impactful articles. The field of AI research is advancing
rapidly, and ever more advanced language models like ChatGPT are being developed and
optimized.
However, skilled researchers and proficient scientific writers remain critical to
advancing science and research, ensuring that new discoveries and findings are
disseminated effectively. To achieve this goal, it is essential to invest in the
training and education of new researchers and to afford them the tools and resources
necessary to succeed in their endeavors. This includes providing opportunities for
research experience using AI as a supporting tool. Therefore, while training
researchers and writers in scientific writing with AI can be an innovative and
effective approach, it is important to ensure that they have a solid understanding of
the principles and conventions of scientific writing. This includes knowledge of types
of study design, proper text structuring, peer review, citations, references, and other
norms of scholarly style.
The announcement by the ICMJE signals a turning point in recognizing the potential of
AI language models in scientific manuscript writing, provided they are used wisely and
in a way that complements human knowledge and skill, thereby considerably advancing
scientific understanding.
Artificial intelligence tools represent an exciting opportunity for scientists to streamline their research and write impactful articles. Using AI tools like ChatGPT can greatly improve the writing of review articles by enhancing efficiency and quality. ChatGPT speeds up writing, develops outlines, adds details, and helps improve writing style. However, ChatGPT's limitations must be kept in mind, and generated text must be reviewed and edited to avoid plagiarism and fabrication. Despite these limitations, ChatGPT is a powerful tool that allows scientists to focus on analyzing and interpreting the literature they review. Embracing these tools can help scientists produce meaningful research more efficiently and effectively; however, caution must be taken, and unchecked use of ChatGPT in writing should be avoided.
Background
The emergence of systems based on large language models (LLMs) such as OpenAI's ChatGPT has created a range of discussions in scholarly circles. Since LLMs generate grammatically correct and mostly relevant (yet sometimes outright wrong, irrelevant, or biased) outputs in response to provided prompts, using them in various writing tasks, including writing peer review reports, could improve productivity. Given the significance of peer review in the existing scholarly publication landscape, exploring the challenges and opportunities of using LLMs in peer review seems urgent. After the generation of the first scholarly outputs with LLMs, we anticipate that peer review reports, too, will be generated with the help of these systems. However, there are currently no guidelines on how these systems should be used in review tasks.
Methods
To investigate the potential impact of using LLMs on the peer review process, we used five core themes within discussions about peer review suggested by Tennant and Ross-Hellauer. These include 1) reviewers' role, 2) editors' role, 3) functions and quality of peer reviews, 4) reproducibility, and 5) the social and epistemic functions of peer reviews. We provide a small-scale exploration of ChatGPT's performance regarding the identified issues.
Results
LLMs have the potential to substantially alter the role of both peer reviewers and editors. By supporting both actors in efficiently writing constructive reports or decision letters, LLMs can facilitate higher-quality review and address issues of reviewer shortage. However, the fundamental opacity of LLMs' training data, inner workings, data handling, and development processes raises concerns about potential biases, confidentiality, and the reproducibility of review reports. Additionally, as editorial work has a prominent function in defining and shaping epistemic communities, as well as negotiating normative frameworks within such communities, partly outsourcing this work to LLMs might have unforeseen consequences for social and epistemic relations within academia. Regarding performance, we identified major enhancements in a short period and expect LLMs to continue developing.
Conclusions
We believe that LLMs are likely to have a profound impact on academia and scholarly communication. While potentially beneficial to the scholarly communication system, many uncertainties remain, and their use is not without risks. In particular, concerns about the amplification of existing biases and inequalities in access to appropriate infrastructure warrant further attention. For the moment, we recommend that if LLMs are used to write scholarly reviews and decision letters, reviewers and editors should disclose their use and accept full responsibility for data security and confidentiality, and for their reports' accuracy, tone, reasoning, and originality.
Supplementary Information
The online version contains supplementary material available at 10.1186/s41073-023-00133-5.
Artificially intelligent computer systems are used extensively in medical sciences. Common applications include diagnosing patients, end-to-end drug discovery and development, improving communication between physician and patient, transcribing medical documents such as prescriptions, and remotely treating patients. While computer systems often execute tasks more efficiently than humans, state-of-the-art computer algorithms have more recently achieved accuracies on par with those of human experts in the medical sciences. Some speculate that it is only a matter of time before humans are completely replaced in certain roles within the medical sciences. The motivation of this article is to discuss the ways in which artificial intelligence is changing the landscape of medical science and to separate hype from reality.
This is an Open Access article distributed under the terms of the
Creative Commons Attribution License, which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work is
properly cited.