
Generative Artificial Intelligence Bibliography: Home

Generative Artificial Intelligence (GAI) refers to technology that can interpret general human semantic inputs, such as written questions or “prompts,” and return meaningful responses that mimic human intelligence. GAI is made possible by a structured “training” process in which a learning algorithm is exposed to vast amounts of text and other data and builds statistical models that predict which symbols are most likely to follow previous ones, producing meaningful output.

GAI may be used in mundane contexts, such as an auto-complete feature in an email client or text editor, or in startlingly novel applications, such as a chatbot that engages in human-like conversation in real time or a virtual “research assistant” that interprets data critically or identifies relevant studies in one's area of research. While GAI holds promise to greatly increase the capabilities of those who make good use of the technology, it also presents novel problems and dilemmas. Among other things, there is reasonable concern that GAI may subvert educational goals by providing a “shortcut” to real learning. The reliability of the technology is a concern as well, as GAI models are prone to “hallucinations,” generating false or misleading content. Finally, GAI raises moral, ethical, and legal concerns: for example, that the training process may embed biases that will become even more entrenched as a result, or that the technology inevitably trains on and reproduces copyrighted material without consent or attribution.

This annotated bibliography is intended to provide information to help Adelphi faculty, staff, and students learn about this emerging technology and understand its implications for research and higher learning. 

Key Points

This section is adapted from Generative AI - a primer (Jisc, 2023) under the terms of the CC BY-NC-SA license.

  • AI text generators such as ChatGPT are trained on a large amount of data scraped from the internet, and work by predicting the next word in a sequence.

  • All AI text generators can, and often do, produce plausible but false information, and by their nature will produce output that is culturally and politically biased.

  • Microsoft Copilot and Google Gemini (formerly called Bard) work in a similar way to ChatGPT, but can access information from the internet, and are aimed more at being search tools.

  • Image generators such as Midjourney and DALL-E are trained in a similar way, on data scraped from the internet.

  • Many other applications are being developed that make use of generative AI technology.

  • Whilst we do not need a detailed technical understanding of the technology to make use of it, some understanding helps us appreciate its strengths, its weaknesses, and the issues to consider.

  • This is a fast-moving space, and the information here is likely to age quickly!

Related Links

The following resources are available to the Adelphi community (some may require login).

Selected Bibliography

General

Implications for Instruction in Higher Education

Implications for Librarianship and Information Literacy Instruction

Implications for Research and Scholarship

Adelphi Scholarship on Artificial Intelligence

Further Reading

The following reading list is adapted from Generative AI - a primer (Jisc, 2023) under the terms of the CC BY-NC-SA license. While reflecting sources and policies of the United Kingdom, the works may be of interest more widely.

Selected Relevant Jisc Blog Posts

Staff and student use: 

Institutional policy: 

AI detection: 

Bias and other ethical considerations: 

Using generative AI: