
Guide to Using Artificial Intelligence for Research

What is Artificial Intelligence?


Generative AI refers to a type of artificial intelligence that uses deep learning models, trained on large amounts of data, to create various forms of content, such as text, graphics, video, code, and music. The content is generated in response to prompts provided by the user of the model.

Large Language Models (LLMs) are AI models that can process and generate human-language text.


Best Practices for Using Artificial Intelligence for Research

Grounding- In generative AI, grounding is the ability to connect model output to verifiable sources of information. If a model is given access to specific data sources, grounding tethers its output to those data and reduces the chance of invented content. This is particularly important in situations where accuracy and reliability matter. (Source: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview)
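
To make the idea concrete, here is a minimal sketch in Python of retrieval-based grounding. It is an illustration only, not the API of any particular product: the tiny corpus, the keyword-overlap retrieval, and the prompt wording are all hypothetical stand-ins for a real document database and vector search.

```python
# Minimal sketch of retrieval-based grounding (hypothetical illustration,
# not any specific product's API). The idea: retrieve passages from a
# trusted corpus, then instruct the model to answer ONLY from those
# passages and to cite them.

# Stand-in corpus of "verifiable sources"; in practice this would be a
# database of peer-reviewed literature.
CORPUS = [
    {"id": "doc1", "text": "CRISPR-Cas9 enables targeted editing of genomes."},
    {"id": "doc2", "text": "Transformer models process text with self-attention."},
]

def retrieve(question: str, corpus: list, k: int = 2) -> list:
    """Naive keyword-overlap retrieval; real systems use vector search."""
    q_words = set(question.lower().split())
    return sorted(
        corpus,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )[:k]

def grounded_prompt(question: str, passages: list) -> str:
    """Build a prompt that tethers the model's answer to retrieved sources."""
    sources = "\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    return (
        "Answer using ONLY the sources below, citing source ids in brackets. "
        "If the sources do not contain the answer, say you cannot answer.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    question = "How do transformer models process text?"
    print(grounded_prompt(question, retrieve(question, CORPUS)))
```

Because the prompt restricts the model to the supplied passages and asks for bracketed citations, a reader can trace each claim back to a source, which is what grounding is meant to provide.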

Hallucinations- Many large language model tools that are not grounded "hallucinate," generating information and citations that are not factual. Using a model grounded in a database of peer-reviewed literature reduces the likelihood of hallucinations. These grounded models often provide citations that link directly to the source of the content, which increases reliability.
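
Even with a grounded tool, citations are worth spot-checking. Below is a small sketch of one way to do that with Python's standard library and Crossref's public REST API (https://api.crossref.org): a DOI that no registry knows about is a strong sign of a hallucinated citation. The example DOI is real; the rest of the workflow is an assumption, not a required step.

```python
# Sketch: spot-check a citation by asking the Crossref registry whether
# the cited DOI actually exists. A hallucinated citation often carries a
# DOI that resolves nowhere.
import json
import urllib.parse
import urllib.request
from urllib.error import HTTPError

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False on a 404."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
            # Show the registered title so a human can compare it with the
            # title the AI tool claimed.
            print(record["message"]["title"])
            return True
    except HTTPError as err:
        if err.code == 404:
            return False
        raise

if __name__ == "__main__":
    print(doi_exists("10.1038/nature14539"))  # a real DOI (LeCun et al., 2015)
```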

Below is a list of AI tools that are grounded in established research sources.

Prompt Engineering- The process of composing and refining the information entered into an AI tool in order to elicit relevant, useful responses.
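
As a brief illustration (the template wording is hypothetical, not a standard), here is what prompt engineering can look like in practice: instead of a bare question, the prompt states a role, a scoped task, explicit constraints, and an output format.

```python
# Sketch contrasting a vague prompt with an engineered one. The engineered
# version states a role, a scoped task, constraints, and an output format.
vague_prompt = "Tell me about sea-level rise."

engineered_prompt = """\
Role: You are assisting a literature review for an undergraduate paper.
Task: Summarize peer-reviewed findings on sea-level rise published since 2015.
Constraints:
- Cite each claim with author and year.
- If you are unsure a source exists, say so instead of guessing.
Output: A bulleted list of no more than five findings.
"""

print(engineered_prompt)
```

Asking the model to admit uncertainty rather than guess also works against hallucinated citations.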

While language models reflect the beliefs and information in the data they are trained on, they do not necessarily tell the truth (hence the term hallucinations). A critical approach to evaluating information has always been an essential information literacy concept, and AI makes it more complex. The following emerging methods address ways to evaluate the information output by LLMs with a critical eye:


1. SIFT (Four Moves): The method's four moves are Stop; Investigate the source; Find better coverage; and Trace claims, quotes, and media back to the original context. Together they help evaluate the veracity of AI-generated information.

*Mike Caulfield's SIFT (Four Moves) is a free resource, licensed under a Creative Commons Attribution 4.0 International License.

2. ROBOT test: Uses the following criteria to begin thinking critically about AI tools and information:

Reliability

Objective

Bias

Ownership

Type

*Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool]. The ROBOT test is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


Contact Subject Librarians for Help with Prompt Engineering, Citation Assistance, and AI Research Literacy