Verifying and vetting information is the most important step in effectively using generative AI. Sometimes, AI produces information that sounds convincing but is simply wrong. This is called a hallucination. Examples might include:
- Quoting articles or studies that don’t exist.
- Confidently giving incorrect statistics.
- Misrepresenting a theory or author’s position.
A human must stay in the loop. As a student, it is your role and responsibility to check every AI response before using it in a discussion or assignment. The same is true as a professional for any work you submit in your job, and as a citizen for any communications you share with others.
Is the information accurate? Always go to the source generative AI is using to confirm that the source exists, that it is appropriate for academic purposes, AND that it actually contains the information you wish to use. If AI does not cite a source, you must locate one that verifies the information yourself, and then use and cite that source in your paper.
Does AI’s source actually exist? There are times when generative AI provides a reference to a “source” that does not exist at all. This is another reason to always go to any source that generative AI provides. You can only cite a source when you have located the information within that source yourself. And you should never cite AI as a research source. See information below and Verifying a Source.
Is it credible, unbiased, and up to date? Verifying information isn’t just about confirming accuracy. Generative AI may provide information that is accurate but drawn from a source that is not highly credible, that is biased, or that is outdated and no longer relevant. Only highly credible sources, such as scholarly sources, can be used for any academic research and writing you do.
If you include incorrect information provided by AI in an assignment, you will be responsible for it. Including non-existent and unverified sources in a references list is known as falsification. Providing inaccurate or misleading information in your paper is known as misrepresentation. Both are academic integrity violations that can easily happen when gathering information from generative AI without also going directly to the original source to verify it.
How To Verify Information & Sources
Track information back to the original source. Go directly to any source provided by AI to confirm it exists and is appropriate. For example, use the link icons in Gemini or the numbered footnotes in Copilot to see the source, then go to that source. If it is a credible source, use that author’s information instead of the AI-generated content, and cite that source.
Check the dates of sources when verifying information from AI to ensure it is up to date. This is especially important for topics that are rapidly evolving.
Check with an authoritative source if AI (a) cites a source that is outdated, (b) cites a source that is not an academic or government source, or (c) simply does not cite a source. Authoritative sources include academic journals and government and university publications. To determine authoritative and credible sources, use the Library’s Hierarchy of Sources guide.
Consult multiple sources to verify information rather than a single source.
Use fact-checking tools. Websites like Snopes, FactCheck.org, and PolitiFact can help you identify misinformation or inaccuracies.
Remember, AI can be a good source for initial research on a topic, but it cannot be the single research method you use. Do not cite AI as your source of information—only cite a credible source and only after you have verified it.
Self-Check: Keeping a Human in the Loop
AI can generate ideas, structure information, and spark new directions, but you are still the thinker and decision-maker. Before you use any of AI's outputs, pause and consider:
- Usefulness: Did AI's output help me understand the topic better—or just give me something to turn in?
- Currency: Is the information up to date?
- Relevance: Does the information address what I need?
- Accuracy: Did I verify that the sources AI used are highly credible and actually include the information AI generated? See also Disclosing & Citing AI.
- Authenticity: Do I agree with the reasoning here, or do I need to challenge it?
- Originality: Where can I add my own interpretation, perspective, and conclusions I’ve reached? See Academic Voice and Including Your Originality When Collaborating with AI.
Using AI responsibly means supporting your own critical thinking, not outsourcing it. The best results happen when you stay in charge: let AI expand your perspective while you provide the judgment and integrity.
Your instructor may not allow the use of AI for coursework. It is your responsibility to follow any policies posted within your course.