Misinformation in AI: The Hidden Risk of the Lack of Culture of Use in Large Language Models Among University Students

Authors

Gibran Aguilar Rangel, Universidad Autónoma de Querétaro
DOI:

https://doi.org/10.61604/dl.v17i31.495

Keywords:

AI, LLM, misinformation, ChatGPT

Abstract

One of the best-known branches under the umbrella term Artificial Intelligence (AI) is large language models (LLMs), among which ChatGPT is one of the most popular. It is being widely used by university students for tasks ranging from summarizing articles and solving problems to writing complete papers. The most obvious problem is that such practices cannot be detected with certainty; a less discussed problem, however, is that the responses generated by these models are not (to date) verifiable and their reliability is questionable. This paper first presents a brief explanation of what AI is and how LLMs work. It then analyzes the results of a survey on the use of ChatGPT, together with information collected from use cases, and concludes with recommendations on how to make students aware of what an LLM actually does and how to use it correctly.

Author Biography

Gibran Aguilar Rangel, Universidad Autónoma de Querétaro

He holds a degree in Public Accounting, a Master's degree in Technology Management, and a PhD in Technology Management and Innovation from the Autonomous University of Querétaro. He specializes in technological trends and their role in the education and finance sectors. He is a professor at the School of Accounting and Administration of the Autonomous University of Querétaro.

Published

2025-12-01

How to Cite

Aguilar Rangel, G. (2025). Misinformation in AI: The Hidden Risk of the Lack of Culture of Use in Large Language Models Among University Students. Diá-Logos, 17(31), 25–35. https://doi.org/10.61604/dl.v17i31.495