
Guide to Using Artificial Intelligence for Research

What is Artificial Intelligence?


According to ChatGPT (GPT-3.5), generative AI refers to AI models that create new content, such as images, text, or audio, based on patterns learned from existing data. These models use techniques such as neural networks to generate realistic outputs that mimic human creativity.

 

OpenAI. (2024). ChatGPT (July 2 version; GPT-3.5) [Large language model]. https://chatgpt.com/

According to ChatGPT (GPT-3.5), large language models focus specifically on generating and understanding human-like text. They are a subset of generative AI, using deep learning techniques such as transformers to process and produce coherent sentences or paragraphs based on extensive training data.

 

OpenAI. (2024). ChatGPT (July 2 version; GPT-3.5) [Large language model]. https://chatgpt.com/

 

 

Artificial Intelligence Models

Gemini is an AI language model created by Google, designed to understand and generate text in a wide range of contexts and styles.

Microsoft Copilot is an AI assistant integrated into productivity tools such as Microsoft Office. It assists users with writing, data analysis, and content creation, using natural language processing and machine learning models to understand context and provide intelligent suggestions that streamline workflows.

Claude is an AI language model developed by Anthropic, known for its ability to generate text based on input prompts.

Best Practices for Using Artificial Intelligence for Research

In generative AI, grounding is the ability to connect model output to verifiable sources of information. If you provide a model with access to specific data sources, grounding tethers its output to those data and reduces the chance that it invents content. This is particularly important in situations where accuracy and reliability matter. (Source: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview)
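To make this concrete, here is a minimal sketch in Python of how grounding works in practice: passages are retrieved from a trusted source first, and the model is instructed to answer only from them. The `search_library_database` function is a hypothetical stand-in for a real scholarly database, and the final model call is omitted.

```python
# Minimal sketch of grounding, assuming a hypothetical retrieval
# function over a database of peer-reviewed literature. The model
# is told to answer ONLY from the retrieved passages.

def search_library_database(query: str, limit: int = 3) -> list[dict]:
    # Hypothetical retrieval step; a real implementation would query
    # a scholarly database. Placeholder data stands in for results.
    return [
        {
            "citation": "Author, A. (Year). Placeholder article. Journal.",
            "text": "Placeholder passage returned by the database.",
        }
    ][:limit]

def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    sources = "\n\n".join(
        f"[{i}] {p['citation']}\n{p['text']}"
        for i, p in enumerate(passages, start=1)
    )
    return (
        "Answer using ONLY the numbered sources below, and cite them "
        "by number. If the sources do not contain the answer, say so "
        "instead of guessing.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

passages = search_library_database("sleep and memory consolidation")
print(build_grounded_prompt("How does sleep affect memory?", passages))
# The assembled prompt would then be sent to the language model.
```

The key design choice is that the model's context contains only vetted passages, so every claim in the answer can be traced back to a numbered source.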

Hallucinations- Many large language model tools that are not grounded "hallucinate," fabricating information and citations that are not factual. Using a model grounded in a database of peer-reviewed literature reduces the likelihood of hallucinations. Grounded models often provide citations that link directly to the source of the content, which increases reliability.
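One practical safeguard is to check whether a cited DOI actually resolves to a real record. The sketch below uses Python's standard library and the public Crossref REST API (https://api.crossref.org); treat the endpoint details as an assumption to verify against Crossref's documentation.

```python
# Sketch: flag possibly hallucinated citations by checking whether a
# DOI resolves to a record in Crossref. Endpoint behavior (JSON with
# a "message" field; 404 for unknown DOIs) is an assumption to verify.
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a metadata record for this DOI."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)
        return bool(record.get("message", {}).get("title"))
    except urllib.error.HTTPError:
        # Crossref returns 404 when it has no record for the DOI.
        return False

# Example: a DOI copied from an AI-generated bibliography.
if not doi_exists("10.1234/placeholder-doi"):
    print("DOI not found -- double-check this citation by hand.")
```

Even when a DOI resolves, confirm that the title and authors in the record match what the AI claimed; hallucinated citations sometimes attach a real DOI to an invented article.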

Below is a list of AI tools that are grounded in established research sources.

Prompt Engineering- The process of composing and structuring the input you give an AI tool so that it returns pertinent responses.
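As a simple illustration, the sketch below builds a structured prompt that spells out role, task, constraints, and output format; the wording and field names are one common pattern, not a fixed standard.

```python
# Sketch of a structured research prompt. Explicitly stating role,
# task, constraints, and output format tends to retrieve more
# pertinent responses than a bare question. Wording is illustrative.

def build_prompt(topic: str, audience: str, n_points: int) -> str:
    return (
        f"You are a research librarian helping {audience}.\n"
        f"Task: summarize the current scholarly debate on {topic}.\n"
        "Constraints: rely only on peer-reviewed sources; if unsure "
        "of a fact, say so rather than guessing.\n"
        f"Output format: {n_points} bullet points, each with a citation."
    )

print(build_prompt(
    topic="open peer review",
    audience="undergraduate students",
    n_points=5,
))
```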

While language models reflect the beliefs and information in the data they are trained on, they do not necessarily tell the truth (hence hallucinations). A critical approach to evaluating information has always been an essential information literacy concept, and AI makes it more complex. The following emerging methods address ways to evaluate the information output by LLMs with a critical eye:

 

1. SIFT (Four Moves): Stop; Investigate the source; Find better coverage; and Trace claims, quotes, and media to the original context. These four moves help evaluate the veracity of AI-generated information.


*Mike Caulfield's SIFT (Four Moves) is licensed under a Creative Commons Attribution 4.0 International License.

2. ROBOT test: Poses questions about the following five aspects to begin thinking critically about AI resources and information:

Reliability

Objective

Bias

Ownership

Type

*Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool].
The ROBOT test is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

 

Contact Subject Librarians for Help with Prompt Engineering, Citation Assistance, and AI Research Literacy

*ChatGPT was used in the creation of this LibGuide.

OpenAI. (2024). ChatGPT (July 2 version; GPT-3.5) [Large language model]. https://chatgpt.com/