Introduction
AI tools may produce biased or inaccurate content, fabricated information (hallucinations), and material whose sources cannot be identified. Students must critically evaluate AI-generated content and cross-check sources for validity. Additionally, AI models have environmental impacts, including energy consumption and carbon emissions, reinforcing the importance of mindful usage.
Limitations of GenAI
Potential for bias and misrepresentation
AI often reflects the biases present in the data it was trained on. The training datasets used by GenAI tools will contain assumptions, classifications, opinions, and biases, meaning that the GenAI content is also likely to include social, cultural, political or other biases and misrepresentations. You will need to use your critical thinking skills to decide if such bias or misrepresentation is present, and how that impacts your use of the content. Unlike humans, AI does not understand the content it generates; it simply predicts what might come next based on patterns.
Inaccuracy
GenAI tools can produce material which is factually inaccurate and misleading. This is particularly dangerous in fields requiring precision, such as law or medicine. Because of this, you should always verify references, facts, statistics, calculations, computer code and other data produced via GenAI. You should use Library OneSearch, library databases and peer-reviewed articles to increase the reliability and legitimacy of your findings.
Unidentifiable sources
GenAI tools don’t always reference their sources, making it hard to subsequently cite and reference the original authors. Not being able to trace the original sources also makes it harder to question the evidence presented and evaluate its overall quality. GenAI cannot access material behind a paywall, so the quality of the resources it draws on may be very limited. Further, most AI tools don’t have access to university-specific databases, journals, or scholarly resources, which limits their usefulness for academic research.
Hallucinations
Generative AI “hallucinations” occur when the AI produces information that seems credible but is inaccurate or entirely fabricated. This is particularly problematic for students, as it can lead to false claims or fabricated references in academic work, undermining accuracy and integrity. Students are advised to use AI outputs as starting points, rigorously verifying and fact-checking any generated information to ensure reliability and maintain strong research standards.