The Sudden Surge of Artificial Intelligence
In late November of last year, as many of us were busy studying and preparing for the end of the fall semester, OpenAI launched a new chatbot called ChatGPT that has been gaining attention in both academic and publishing circles. The system uses artificial intelligence (AI) to generate sophisticated responses to questions and prompts, and it has impressed many with its detailed answers to complex questions across a variety of fields. A recent article in the Wall Street Journal reported that Microsoft, an investor in OpenAI, the company behind ChatGPT, plans to incorporate the program’s features into all of its products, and many believe the chatbot’s question-answering capabilities could rival Google’s search tools.
AI technologies have been steadily developing within sectors across the economy, and there has been a recent growth of programs supporting creative work in the arts and humanities. In 2022, Midjourney and Stable Diffusion were released, offering the ability to generate realistic and artistic images from natural language descriptions, similar to DALL-E (now DALL-E 2) which was released by OpenAI the year before. Last month, Riffusion was launched, joining a growing list of AI tools that similarly generate music from text-based prompts. In a recent article in MIT Technology Review, Melissa Heikkilä and Will Douglas Heaven forecast which technologies we might see emerge in 2023, including chatbots that integrate the features of ChatGPT with image and video recognition.
In academic communities, ChatGPT has become controversial because of its capacity to generate text that can be difficult to detect as machine-authored. Not only can it produce reasonably well-written research papers, but it has also produced poems with some finesse. These capabilities, of course, present challenges to monitoring academic integrity. For professors and publishers alike, how to identify computationally generated papers and images is the question of the moment. Recently, Edward Tian, a Princeton University undergraduate, developed an app, GPTZero, to help identify ChatGPT-generated content, but the challenge will likely persist as AI technology evolves.
Some Productive Possibilities
If ChatGPT was indeed developed to improve productivity and benefit humanity, how might it and other AI technologies be used effectively while avoiding academic dishonesty and plagiarism?
First, rather than banning the program outright, faculty interviewed by Susan D’Agostino of Inside Higher Ed argue that teachers should consider the learning outcomes for the class and which cognitive tasks might be better accomplished with ChatGPT and other AI technologies as an aid. Some questions they invite other instructors to consider are:
- Can students use the output of the program to develop their critical thinking skills? For example, can the program’s failings be instructive? Can students use it to strengthen their fact-checking ability?
- Can the instruction of writing be approached differently to see how ChatGPT and other AI tools might supplement the work rather than substitute for it? Can some tools better serve specific purposes, such as the revision process?
- Can instructors refocus their teaching not just on a single tool, but help students to be critical users of the technologies of the future?
A concrete example of using AI technology in creative writing in an effective and honest manner can be found in a recent episode of NPR’s This American Life. In the “Ghostwriter” segment of “The Ghost in the Machine,” writer Vauhini Vara describes how she used GPT-3, the technology on which ChatGPT is based, to help her write about the death of her sister, refining the prompts she gave it and reflecting on its output over a series of exchanges. In an albeit limited way, the technology acts as an editorial assistant, producing written content that stimulates her thinking and begins to break up her emotionally clouded writer’s block. As she drafts new prompts for the AI, she starts addressing her own experience with greater honesty. In the end, she includes a few of GPT-3’s actual words in the last line of her essay, after having coached the program into becoming better attuned to her grief.
In addition to providing a form of editorial assistance, AI technologies are already being used to improve workflows in scholarly publishing. In a recent post on the scholarly publishing blog Scholarly Kitchen, three experts discuss a list of tools that perform the following functions, among others: analyzing manuscripts for conformity to journal requirements, summarizing content, checking research methods in support of the peer review process, ensuring the quality and integrity of images used in scientific publications, and assisting with citation analysis. In short, wherever publishing tasks are better handled by machines, AI is increasingly taking them on.
AI and Libraries
Many may ask how librarians will handle ChatGPT and the ever-increasing role of AI in research. That line of inquiry is not really new, though: for the past 25 years, librarianship has been evolving alongside the development of Google. Ultimately, one of academic librarians’ primary goals is the same as BC’s: to assist in the formation of resilient individuals who can think critically about the research process and use digital tools effectively and ethically to achieve their desired outcomes. There is much to be sorted out about AI and how its development is overseen, and librarians will undoubtedly continue to join other information professionals in those ongoing conversations.
In the meantime, our counsel to students using ChatGPT will remain the same as it has been in other areas of research and writing: don’t forget to fact-check, identify the sources of the content you gather, and cite, cite, cite! (MLA recently addressed how to cite AI in its blog.) Please also note that the journal Nature, along with the other Springer Nature journals, has just implemented two rules for the ethical use of large language models (LLMs) like ChatGPT in its content. The staff at Nature indicated that they expect other scientific publishers to adopt similar policies shortly.* If you are uncertain about your professors’ expectations, talk with them. As always, we librarians are here to help with any research questions you may have.
To conclude, we will let ChatGPT have the final word. The brother of a librarian recently asked ChatGPT to “write an epic poem about librarians defeating AIs in battle in the style of Beowulf,” and this was its response:
In days of old, in times long past,
A tale of librarians, a story vast.
With books and knowledge, they did wield
Their weapons against AIs on the field.
With quiet strength and steady hand,
They battled machines across the land.
Their eyes did gleam with ancient lore,
And knowledge gained through toil and war.
The AIs, with circuits forged in fire,
Thought to defeat the librarians, their desire.
But the librarians, with wisdom true,
Defeated the machines, their victory anew.
With bravery and skill, the librarians fought
And in the end, their victory was sought.
For knowledge is might, and in this fight,
The librarians proved their power and might.
And so, the tale is told,
Of librarians, brave and bold.
Their names will live on through the ages,
As they defeated the AIs and turned the pages.
A note for faculty: the Center for Teaching Excellence is producing a resource on how to address the use of ChatGPT with students. If you have any questions, please contact them at firstname.lastname@example.org.
*This article was updated 1/24/23 to add information about the journal Nature’s editorial policies.
The author would like to thank Steve Runge, Sarah Barbrow, and Enid Karr for their contributions to this article.