Lateral reading means leaving your initial source and consulting other sources to evaluate the initial source's credibility.
With traditional online resources, you can do this by searching for the source's publication, funding organization, author, or title. None of that information is available when you assess an AI-generated output.
Here's how to fact-check something you got from an AI tool using lateral reading:
❏ Break down an AI-generated response into individual claims.
❏ Open a new tab and look for information that supports or contradicts each claim, starting with sources you already trust.
❏ Next, think more deeply about what assumptions the response is making.
❏ Finally, make a judgment call about whether the information is credible.
Text from "Assess Content: Assessing AI-Based Tools for Accuracy" by the University of Maryland under the Creative Commons Attribution NonCommercial 4.0 International License 
When working with AI, keep in mind the following best practices for evaluation:
Text from "Ethics & Privacy - Artificial Intelligence (Generative) Resources" by Georgetown University under the Creative Commons Attribution NonCommercial 4.0 International License 
When AI Gets It Wrong: Addressing AI Hallucinations and Bias (MIT Management)
Currently, a typical AI model doesn't assess whether the information it provides is correct. Its goal when it receives a prompt is to generate what it calculates to be the most likely string of words in response to that prompt. AI cannot interpret or distinguish between correct and incorrect answers. It’s up to you to make the distinction.
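To make that idea concrete, here is a minimal illustrative sketch in Python. The word probabilities are invented for demonstration and do not come from any real AI model; the point is simply that the model picks the statistically most likely continuation, and nothing in that step checks whether the result is true.

```python
# Toy illustration only: these probabilities are invented for demonstration
# and do not reflect any real AI model.
next_word_probs = {
    "Paris": 0.62,    # frequent continuation in the training data
    "Lyon": 0.21,
    "Berlin": 0.17,   # wrong answers can still receive some probability
}

prompt = "The capital of France is"

# The model emits the highest-probability continuation.
# Nothing in this step verifies that the chosen word is actually correct.
best_word = max(next_word_probs, key=next_word_probs.get)
print(prompt, best_word)
```

A statistically likely continuation often happens to be correct, which is why AI answers can look trustworthy even though no fact-checking has taken place.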
Sometimes an AI will confidently return an incorrect answer. The mistake could be a factual error or an omission of important information.
Sometimes, rather than simply being wrong, an AI will invent information that does not exist. This is known as “hallucination,” or, when the invented information is a citation, a “ghost citation.”
If you ask an AI to cite its sources, the citations it returns may not be where it actually pulled the information from. Even a tool that provides footnotes may not point to the true sources of its claims, just an assortment of webpages and articles roughly related to the topic of the prompt.
AI can inadvertently ignore instructions or misinterpret a prompt. A minor example is returning a five-paragraph response when it was prompted for three paragraphs. If you’re not familiar with the topic you’re asking an AI-based tool about, you might not realize that it’s interpreting your prompt inaccurately.
Text from "Artificial Intelligence: For Students" by Iona University under the Creative Commons Attribution NonCommercial 4.0 International License 
Text adapted from "The Robot Test" by The LibrAIry under the Creative Commons Attribution NonCommercial 4.0 International License 