Generative Artificial Intelligence (GAI) refers to technology that can interpret natural human inputs, such as written questions or “prompts,” and return meaningful responses that mimic human intelligence. GAI is made possible by a structured “training” process in which a learning algorithm is exposed to vast amounts of text and other data and builds models that predict which symbols are most likely to follow the preceding ones, producing meaningful output.
GAI may be used in mundane contexts, such as an auto-complete feature in an email client or text editor, or in startlingly novel applications, such as a chatbot that engages in human-like conversation in real time or a virtual “research assistant” that interprets data critically or identifies relevant studies in one's area of research. While GAI holds promise to greatly increase the capabilities of those who make good use of the technology, it also presents novel problems and dilemmas. Among other things, there is reasonable concern that GAI may subvert educational goals by providing a “shortcut” to real learning. The reliability of the technology is also a concern, as GAI models are prone to “hallucinations,” generating false or misleading content. Finally, the technology raises moral, ethical, and legal concerns: for example, the training process may embed biases that become even more entrenched as a result, and the models inevitably train on and may reproduce copyrighted material without consent or attribution.
This annotated bibliography is intended to provide information to help Adelphi faculty, staff, and students learn about this emerging technology and understand its implications for research and higher learning.
This section is adapted from Generative AI - a primer (Jisc, 2023) under the terms of the CC BY-NC-SA license.
AI text generators such as ChatGPT are trained on a large amount of data scraped from the internet, and work by predicting the next word in a sequence.
All AI text generators can, and often do, produce plausible but false information, and by their nature will produce output that is culturally and politically biased.
Microsoft Copilot and Google Gemini (formerly called Bard) work in a similar way to ChatGPT, but can access current information from the internet and are positioned more as search tools.
Image generators such as Midjourney and DALL-E are trained in a similar way, on data scraped from the internet.
Many other applications are being developed that make use of generative AI technology.
While a detailed technical understanding is not necessary to make use of the technology, some familiarity with how it works helps us appreciate its strengths, weaknesses, and the issues to consider.
This is a fast-moving space, and the information here is likely to age quickly!
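The “predicting the next word” idea described above can be illustrated with a toy example (a hypothetical sketch for intuition only, not how production systems are built): count which word follows which in a tiny corpus, then predict the most frequent successor.

```python
from collections import Counter, defaultdict

# A toy "training corpus"; real models train on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it (a simple bigram model).
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the most frequent successor of `word`, or None if unseen."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Real systems like ChatGPT use large neural networks over subword tokens and sample from a probability distribution rather than always choosing the single most frequent successor, but the core objective, predicting the next token from what came before, is the same.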
The following resources are available to the Adelphi community (some may require login).
Responsible AI: Principles and Practical Applications (LinkedIn Learning online course)
Ethics in the Age of Generative AI (LinkedIn Learning online course)
Lee, T. B. (2023, May 31). Large language models, explained with a minimum of math and jargon.
Martineau, K. (2023, August 22). What is retrieval-augmented generation? [Blog]. IBM Research Blog.
Van Noorden, R., & Perkel, J. M. (2023). AI and science: what 1,600 researchers think. Nature, 621(7980), 672–675. [Adelphi Library link]
The following reading list is adapted from Generative AI - a primer (Jisc, 2023) under the terms of the CC BY-NC-SA license. While these works reflect sources and policies of the United Kingdom, they may be of interest more widely.
Generative artificial intelligence in education. Department for Education [UK] (March 2023)
Generative AI in education call for evidence: summary of responses. Department for Education [UK] (Nov 2023)
Maintaining quality and standards in the ChatGPT era: QAA advice on the opportunities and challenges posed by Generative Artificial Intelligence. The Quality Assurance Agency for Higher Education (QAA) [UK] (May 2023)
Reconsidering assessment for the ChatGPT era: QAA advice on developing sustainable assessment strategies. The Quality Assurance Agency for Higher Education (QAA) [UK] (Jul 2023)
ChatGPT and artificial intelligence in higher education. UNESCO (April 2023)
Guidance for AI in education and research. UNESCO (Sep 2023)
Artificial intelligence (AI) use in assessments: protecting the integrity of qualifications. Joint Council for Qualifications [UK] (March 2023)
Considerations on wording when creating advice or policy on AI use. (Feb 2023)
Navigating terms and conditions of generative AI. (Sept 2023)
Licensing options for generative AI. (Dec 2023)
AI detection – latest recommendations. (Sept 2023)
A short experiment in defeating a ChatGPT detector. (Jan 2023)
Hidden workers powering AI. (March 2023)
Exploring the potential for bias in ChatGPT. (Jan 2023)