AI tools may point to licensed content that is behind a paywall. Follow the steps below to check whether AU library subscriptions or purchases already give you "free" access to the material you need.
You can use a "bookmarklet" in a browser to automate converting any URL into an Adelphi University Libraries-proxied URL. A bookmarklet is a snippet of JavaScript saved as a bookmark in the browser, preferably on the bookmark bar at the top. (If the bookmark bar is not visible, you may need to configure your browser to show it.)
To install the Adelphi University Libraries' EZProxy bookmarklet in your browser, follow the steps on the EZProxy guide page cited below.
[Text taken from https://libguides.adelphi.edu/ezproxy]
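For context, a typical EZProxy bookmarklet is a single line of JavaScript saved as the bookmark's URL, along the lines of the sketch below. The hostname shown is a placeholder (example.edu), not Adelphi's actual proxy address, so copy the exact code from the guide page above rather than from this sketch:

javascript:void(location.href='https://ezproxy.example.edu/login?url='+encodeURIComponent(location.href));

Clicking the bookmark while viewing a paywalled page sends that page's address through the library's proxy, which prompts for an AU login and then checks whether a subscription provides access.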
Grounding- In generative AI, grounding is the ability to connect model output to verifiable sources of information. If you provide models with access to specific data sources, then grounding tethers their output to these data and reduces the chances of inventing content. This is particularly important in situations where accuracy and reliability are significant. (Source: https://cloud.google.com/vertex-ai/generative-ai/docs/grounding/overview)
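As an illustration of the idea (not any particular vendor's API), the JavaScript sketch below shows grounding at the prompt level: passages retrieved from a trusted source are numbered, placed in the prompt, and the model is instructed to answer only from them and to cite them. The passages and function name here are hypothetical.

// Illustrative sketch of prompt-level grounding; the passages and
// function name are hypothetical, not a specific tool's API.
const passages = [
  { id: 1, source: "Peer-reviewed article A", text: "Sample passage text." },
  { id: 2, source: "Library database record B", text: "Sample passage text." },
];

function buildGroundedPrompt(question, passages) {
  // Number each retrieved passage so the model can cite it.
  const sources = passages
    .map((p) => `[${p.id}] (${p.source}) ${p.text}`)
    .join("\n");
  // Instruct the model to answer only from the supplied sources.
  return [
    "Answer using ONLY the numbered sources below, citing the source",
    "number for each claim. If the sources do not contain the answer,",
    "say so rather than guessing.",
    "",
    "Sources:",
    sources,
    "",
    `Question: ${question}`,
  ].join("\n");
}

console.log(buildGroundedPrompt("What reduces hallucinations?", passages));

Grounded tools package this kind of retrieval and citation step for you, which is why their answers can link back to the underlying sources.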
Hallucinations- Many large language model (LLM) AI tools that are NOT grounded "hallucinate," creating information and citations that are not factual. Using a model that is grounded in a database of peer-reviewed literature reduces the likelihood of hallucinations. These grounded models often provide citations that link directly to the source of the content, which increases reliability.
Below is a list of AI tools that are grounded in established research sources.
Prompt Engineering- The process of composing and refining the information entered into an AI tool in order to retrieve pertinent responses. For example, a specific prompt such as "List five peer-reviewed articles on food insecurity among college students, with full citations" will generally return more useful results than a vague prompt like "food insecurity."
While language models reflect the beliefs and information in the data they are trained on, they do not necessarily tell the truth; their fabricated outputs are known as hallucinations. A critical approach to evaluating information has always been an essential information literacy skill, and AI has made it more complex. The following emerging methods address ways to evaluate the information output by LLMs with a critical eye:
1. SIFT (Four Moves): The method's four moves are to stop, investigate the source, find better coverage, and trace claims to their original context; together they help evaluate the veracity of AI-generated information.
Free resources include:
*Mike Caulfield's SIFT (Four Moves) is licensed under a Creative Commons Attribution 4.0 International License.
2. ROBOT test: Consider the following to begin thinking critically about AI resources and information:
Reliability
Objective
Bias
Ownership
Type
*Hervieux, S., & Wheatley, A. (2020). The ROBOT test [Evaluation tool].
The ROBOT test is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.