
      Chatting about ChatGPT: how may AI and GPT impact academia and libraries?

      Library Hi Tech News
      Emerald


          Abstract

          Purpose

This paper aims to provide an overview of key definitions related to ChatGPT, a public tool developed by OpenAI, and its underlying technology, the Generative Pre-trained Transformer (GPT).

          Design/methodology/approach

This paper includes an interview with ChatGPT on its potential impact on academia and libraries. The interview discusses the benefits of ChatGPT, such as improving search and discovery, reference and information services, cataloging and metadata generation, and content creation, as well as ethical considerations that need to be taken into account, such as privacy and bias.
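As a concrete illustration of the cataloging and metadata use case mentioned above, a workflow might prompt a GPT-style model to draft candidate fields for a new record. The sketch below is not from the paper: it assumes the openai Python client with an API key in the OPENAI_API_KEY environment variable, and the model name and Dublin Core field choices are illustrative.

```python
# A minimal sketch (not from the paper): drafting Dublin Core metadata
# with a chat model. Assumes the `openai` Python package and an API key
# in the OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_dublin_core(title: str, abstract: str) -> str:
    """Ask the model to propose Dublin Core fields for a catalog record."""
    prompt = (
        "Suggest Dublin Core metadata (dc:title, dc:subject, dc:description, "
        "dc:type) for the following work.\n"
        f"Title: {title}\nAbstract: {abstract}"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,        # keep suggestions conservative
    )
    return response.choices[0].message.content

print(draft_dublin_core(
    "Chatting about ChatGPT: how may AI and GPT impact academia and libraries?",
    "An overview of ChatGPT and GPT and their potential impact on libraries.",
))
```

Any machine-generated record of this kind would still need review by a cataloger before use, which is in line with the ethical cautions the interview raises.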

          Findings

ChatGPT has considerable power to advance academia and librarianship in ways that are both anxiety-provoking and exciting. However, it is important to use this technology responsibly and ethically, and to determine how we, as professionals, can work alongside it to improve our work, rather than abuse it or allow it to abuse us in the race to create new scholarly knowledge and educate future professionals.

          Originality/value

This paper discusses the history and technology of GPT, including its generative pre-trained transformer architecture, its ability to perform a wide range of language-based tasks, and how ChatGPT uses this technology to function as a sophisticated chatbot.
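To make the "generative pre-trained transformer" idea concrete: such a model simply continues a prompt token by token. The minimal sketch below uses the openly available GPT-2 model via the Hugging Face transformers package; it illustrates the model family only, not the specific system behind ChatGPT.

```python
# A minimal sketch of generation with a pre-trained transformer:
# the model continues a text prompt one token at a time.
# Assumes the Hugging Face `transformers` package; GPT-2 is used
# here because it is openly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

completion = generator(
    "Academic libraries can use conversational AI to",
    max_new_tokens=40,       # length of the generated continuation
    num_return_sequences=1,
    do_sample=True,          # sample rather than greedy decode
)
print(completion[0]["generated_text"])
```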


Most cited references (19)

• A Survey on Transfer Learning

• BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

  We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement). (A minimal fine-tuning sketch follows this list.)

• A review on the attention mechanism of deep learning
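The fine-tuning recipe described in the BERT abstract above, a pre-trained encoder plus a single task-specific output layer, can be sketched in a few lines. This is a minimal illustration assuming the Hugging Face transformers and torch packages (not part of this article); the example sentence and label are invented.

```python
# A minimal sketch of BERT fine-tuning as described in the cited abstract:
# a pre-trained encoder plus one task-specific output layer. Assumes the
# Hugging Face `transformers` and `torch` packages; data is illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # the "one additional output layer"
)

# One toy training step; a real run would loop over a labeled dataset.
batch = tokenizer(
    ["ChatGPT can help generate catalog metadata."],
    return_tensors="pt", padding=True, truncation=True,
)
labels = torch.tensor([1])
outputs = model(**batch, labels=labels)
outputs.loss.backward()  # gradients flow into all BERT layers (fine-tuning)
```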

                Author and article information

Journal: Library Hi Tech News (LHTN)
Publisher: Emerald
ISSN: 0741-9058
Publication date: February 14, 2023
Article DOI: 10.1108/LHTN-01-2023-0009
Copyright: © 2023
Site policies: https://www.emerald.com/insight/site-policies
