Why can't I find an article that ChatGPT recommended?

ChatGPT and some other generative AI tools have been known to hallucinate citations. Asked "What are hallucinated citations?", ChatGPT itself offered the following explanation (OpenAI, 2024):

"Hallucinated citations refer to a phenomenon in which a language model, like GPT-4, generates fictitious or inaccurate references to academic papers, books, or articles that do not actually exist or are incorrectly attributed. This is often a concern when using AI models for research or academic purposes, as it can lead to the dissemination of false information and undermine the credibility of the generated content. Here are some key points about hallucinated citations:

Characteristics of Hallucinated Citations

  1. Fictional References: The citations may include made-up titles, authors, publication years, or journals. They sound plausible but do not correspond to real sources.

  2. Inaccurate Attribution: Sometimes, a model might inaccurately attribute a quote or idea to a real author or paper, distorting the original meaning or context.

  3. Contextual Irrelevance: The generated citations might not align with the content they are meant to support, leading to confusion or misinformation.

Causes of Hallucinated Citations

  • Model Training: Language models are trained on vast datasets containing a mixture of factual and fictional content. The model learns patterns in the data but does not inherently understand truth or factual accuracy.

  • Lack of Verification: Since models generate responses based on statistical patterns rather than accessing databases or live information, they may confidently provide information without verifying its accuracy.

Implications

  1. Trustworthiness: Hallucinated citations can diminish the trustworthiness of the model's output, particularly in academic and research settings.

  2. Misleading Information: Users may unintentionally cite these fictitious references in their own work, spreading misinformation and potentially damaging their reputation.

  3. Research Integrity: The use of hallucinated citations undermines the integrity of research and scholarship, making it crucial for users to fact-check any citations generated by AI.

Mitigating Hallucinated Citations

  • Verification: Always verify any citations provided by a language model against trusted sources or databases.

  • Contextual Understanding: Users should ensure that they have a strong grasp of the subject matter to identify any discrepancies or inaccuracies in citations.

  • Use of Reliable Tools: Employing academic databases and citation management tools can help ensure the accuracy of references.

Conclusion

Hallucinated citations are a significant challenge when using AI language models for academic purposes. Users should exercise caution, critically evaluate the generated content, and prioritize the use of verified sources to maintain the integrity of their work."

To mitigate the risk of hallucinated citations, we advise against asking ChatGPT for a list of sources. Please don't hesitate to contact the library for assistance locating credible resources for your project.
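
If you would like to check a suspicious reference yourself before reaching out, the sketch below shows one way to do it with the free, public Crossref REST API (api.crossref.org), which indexes DOIs and metadata for many scholarly works. This is a minimal illustration rather than an official library tool: the example citation string, the use of the requests library, and the choice of Crossref are assumptions made for the demonstration, and works Crossref does not index (many books, for instance) still need to be checked in a library database.

```python
# A minimal sketch, not an official library tool: look up a citation in the
# public Crossref REST API and see whether anything with similar metadata
# actually exists. The citation string below is a placeholder, not a real work.
import requests


def crossref_matches(citation_text, rows=3):
    """Return the top Crossref records whose metadata resembles the citation."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation_text, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["message"]["items"]


if __name__ == "__main__":
    # Paste in the citation a chatbot gave you, then compare the titles,
    # authors, and years that come back against what the chatbot claimed.
    suspect = "Doe, J. (2021). A placeholder article title. Journal of Examples, 12(3)."
    for item in crossref_matches(suspect):
        title = (item.get("title") or ["(no title)"])[0]
        print(item.get("DOI", "(no DOI)"), "-", title)
    # If nothing similar appears, treat the citation as unverified and check a
    # library database or ask a librarian before relying on it.
```

If the suspect citation includes a DOI, an even simpler check is to visit https://doi.org/ followed by that DOI; a DOI that resolves to a different article, or does not resolve at all, is a strong sign the reference was hallucinated.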

Reference

OpenAI. (2024). What are hallucinated citations? ChatGPT [Large language model]. https://chatgpt.com



