Mandel-AI Effect: a proposal for measuring the Mandela Effect induced by artificial intelligence on digital social networks
HTML Full Text (English)
PDF (English)
XML (English)

Keywords

cognitive distortion
Mandel-AI effect
artificial intelligence
digital social networks
psychometrics

How to cite

Rangel-Lyne, L., & Salazar-Altamirano, M. A. (2025). Efecto Mandel-AI: propuesta de medición del Efecto Mandela inducido por la inteligencia artificial en redes sociales digitales. Universitas Psychologica, 24, 1-19. https://doi.org/10.11144/Javeriana.upsy24.maie

Abstract

This study develops and validates a pioneering psychometric instrument to measure the "Mandel-AI Effect," a contemporary manifestation of the Mandela Effect amplified by artificial intelligence (AI) on digital social networks. Following a theoretical design process and expert validation, the instrument underwent exploratory and confirmatory factor analysis with a sample of 243 centennial-generation university students from southern Tamaulipas, Mexico, aged 18 to 28 and active users of digital platforms. The final scale comprises three interrelated dimensions: AI Presence, Reality Distortion, and the Mandel-AI Effect. The results confirmed a robust factor structure, with high internal consistency (Cronbach's α ≥ 0.93), convergent and discriminant validity, and optimal fit indices (CFI = 0.980; TLI = 0.974; RMSEA = 0.073; SRMR = 0.041). The findings show that AI not only mediates the selection and viralization of content but also reconfigures cognitive and cultural frames, facilitating the formation of false memories and epistemic confusion. Beyond its psychometric soundness, the instrument constitutes an innovative methodological contribution to digital psychology and offers practical applications in media literacy programs, educational strategies, and policies aimed at strengthening epistemic resilience and digital well-being.
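To illustrate the internal-consistency criterion reported above (Cronbach's α ≥ 0.93): the coefficient can be computed directly from raw item scores. The sketch below is purely illustrative and is not the authors' analysis pipeline (the study used IBM SPSS); the Likert responses shown are hypothetical.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from raw scores.

    items: one inner list per scale item, each holding one score
    per respondent (all inner lists must have the same length).
    """
    k = len(items)                      # number of items

    def pvar(xs):                       # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Sum of the variances of the individual items
    sum_item_var = sum(pvar(item) for item in items)
    # Variance of each respondent's total score across all items
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum_item_var / pvar(totals))


# Hypothetical 5-point Likert data: 3 items x 6 respondents
items = [
    [5, 4, 4, 2, 3, 5],
    [5, 4, 5, 2, 3, 4],
    [4, 4, 4, 1, 3, 5],
]
print(round(cronbach_alpha(items), 3))  # → 0.947
```

Values above roughly 0.90, as in the validated scale, indicate that the items covary strongly enough to be treated as measures of a single underlying dimension.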

References

Adriaansen, R. -J., & Smit, R. (2025). Collective memory and social media. Current Opinion in Psychology, 65, 102077. https://doi.org/10.1016/j.copsyc.2025.102077

Agha, A. M. (2025). Artificial intelligence in social media: Opportunities and perspectives. Cihan University-Erbil Journal of Humanities and Social Sciences, 9(1), 125–132. https://doi.org/10.24086/cuejhss.v9n1y2025.pp125-132

Allal-Chérif, O., Aránega, A. Y., & Sánchez, R. C. (2021). Intelligent recruitment: How to identify, select, and retain talents from around the world using artificial intelligence. Technological Forecasting and Social Change, 169, 120822. https://doi.org/10.1016/j.techfore.2021.120822

Anantrasirichai, N., & Bull, D. (2021). Artificial intelligence in the creative industries: A review. Artificial Intelligence Review, 55(1), 589–656. https://doi.org/10.1007/s10462-021-10039-7

Bartlett, M. S. (1950). Tests of significance in factor analysis. British Journal of Statistical Psychology, 3(2), 77–85. https://doi.org/10.1111/j.2044-8317.1950.tb00285.x

Braidotti, R. (2019). Posthuman knowledge. Polity Press.

Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd ed.). The Guilford Press.

Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming (3rd ed.). Routledge.

Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment. SAGE Publications. https://doi.org/10.4135/9781412985642

Cattell, R. B. (1966). The scree test for the number of factors. Multivariate Behavioral Research, 1(2), 245–276. https://doi.org/10.1207/s15327906mbr0102_10

Chalmers, D. J. (2022). Reality+: Virtual worlds and the problems of philosophy. W. W. Norton & Company.

Chan, K. W., Septianto, F., Kwon, J., & Kamal, R. S. (2023). Color effects on AI influencers’ product recommendations. European Journal of Marketing, 57(9), 2290–2315. https://doi.org/10.1108/ejm-03-2022-0185

Clark, A. (2003). Natural-born cyborgs: Minds, technologies, and the future of human intelligence. Oxford University Press.

Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis (2nd ed.). Psychology Press. https://doi.org/10.4324/9781315827506

Cooke, D., Edwards, A., Barkoff, S., & Kelly, K. (2024). As good as a coin toss: Human detection of AI-generated images, videos, audio, and audiovisual stimuli. arXiv preprint arXiv:2403.16760. https://doi.org/10.48550/arxiv.2403.16760

Costello, A. B., & Osborne, J. (2005). Best practices in exploratory factor analysis: Four recommendations for getting the most from your analysis. Practical Assessment, Research & Evaluation, 10(1), 1–9. https://doi.org/10.7275/jyj1-4868

Couldry, N., & Mejias, U. A. (2019). The costs of connection: How data is colonizing human life and appropriating it for capitalism. Stanford University Press.

DeVellis, R. F. (2017). Scale development: Theory and applications (4th ed.). SAGE Publications.

Duan, J., Yu, S., Tan, H. L., Zhu, H., & Tan, C. (2022). A survey of embodied AI: From simulators to research tasks. IEEE Transactions on Emerging Topics in Computational Intelligence, 6(2), 230–244. https://doi.org/10.1109/tetci.2022.3141105

Dunn, T. J., Baguley, T., & Brunsden, V. (2013). From alpha to omega: A practical solution to the pervasive problem of internal consistency estimation. British Journal of Psychology, 105(3), 399–412. https://doi.org/10.1111/bjop.12046

Erafy, A. N. E. (2023). Applications of Artificial Intelligence in the field of media. International Journal of Artificial Intelligence and Emerging Technology, 6(2), 19–41. https://doi.org/10.21608/ijaiet.2024.275179.1006

Essien, E. O. (2025). Climate change disinformation on social media: A meta-synthesis on epistemic welfare in the post-truth era. Social Sciences, 14(5), 304. https://doi.org/10.3390/socsci14050304

Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4(3), 272–299. https://doi.org/10.1037/1082-989x.4.3.272

Farinella, F. (2023). Artificial intelligence and the right to memory. Revista quaestio iuris, 16(2), 976-996. https://doi.org/10.12957/rqi.2023.72636

Floridi, L. (2019). The logic of information: A theory of philosophy as conceptual design. Oxford University Press.

Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.2307/3151312

Fricker, M. (2007). Epistemic injustice: Power and the ethics of knowing. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780198237907.001.0001

Gerlich, M. (2023). Perceptions and acceptance of artificial intelligence: A multi-dimensional study. Social Sciences, 12(9), 502. https://doi.org/10.3390/socsci12090502

Ghiurău, D., & Popescu, D. E. (2025). Distinguishing reality from AI: Approaches for detecting synthetic content. Computers, 14(1), 1. https://doi.org/10.3390/computers14010001

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate data analysis: A global perspective (7th ed.). Pearson Education.

Hassan, A., & Barber, S. J. (2021). The effects of repetition frequency on the illusory truth effect. Cognitive Research Principles and Implications, 6(1), 1-12. https://doi.org/10.1186/s41235-021-00301-5

Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135. https://doi.org/10.1007/s11747-014-0403-8

Hollan, J., Hutchins, E., & Kirsh, D. (2000). Distributed cognition: Toward a new foundation for human-computer interaction research. ACM Transactions on Computer-Human Interaction, 7(2), 174–196. https://doi.org/10.1145/353485.353487

Horkheimer, M., & Adorno, T. W. (2002). Dialectic of enlightenment: Philosophical fragments (E. Jephcott, Trans.). Stanford University Press.

Hu, L. -T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55. https://doi.org/10.1080/10705519909540118

Hu, T. (2024). Analysis of the Mandela effect phenomenon and its propagation mechanism in the we-media era. Creativity and Innovation, 8(6), 165–170. https://doi.org/10.47297/wspciwsp2516-252726.20240806

Hussain, K., Khan, M. L., & Malik, A. (2023). Exploring audience engagement with ChatGPT-related content on YouTube: Implications for content creators and AI tool developers. Digital Business, 4(1), 100071. https://doi.org/10.1016/j.digbus.2023.100071

Hussein, N. El. S. (2025). The spread of misinformation via digital platforms and its role in falsifying collective memories (Mandela Effect). The Egyptian Journal of Media Research, 2025(90), 405-475. https://doi.org/10.21608/ejsc.2025.405911

IBM Corp. (2013). IBM SPSS Statistics for Windows (Version 22). IBM Corp.

IBM Corp. (2016). IBM SPSS AMOS for Windows (Version 24). IBM Corp.

Ienca, M. (2023). On artificial intelligence and manipulation. Topoi, 42(3), 833–842. https://doi.org/10.1007/s11245-023-09940-3

Jang, E., Lee, H. M., Lee, S., Jung, Y., & Sundar, S. S. (2025). Too good to be false: How photorealism promotes susceptibility to misinformation. CHI EA '25: Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, Japan, 531, 1-8. https://doi.org/10.1145/3706599.3719796

Kaiser, H. F. (1974). An index of factorial simplicity. Psychometrika, 39(1), 31–36. https://doi.org/10.1007/bf02291575

Knell, M. (2021). The digital revolution and digitalized network society. Review of Evolutionary Political Economy, 2(1), 9–25. https://doi.org/10.1007/s43253-021-00037-4

Lewandowsky, S., Ecker, U. K. H., & Cook, J. (2017). Beyond misinformation: Understanding and coping with the “post-truth” era. Journal of Applied Research in Memory and Cognition, 6(4), 353–369. https://doi.org/10.1016/j.jarmac.2017.07.008

Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 22(140), 55.

Lu, W. (2024). Inevitable challenges of autonomy: ethical concerns in personalized algorithmic decision-making. Humanities and Social Sciences Communications, 11(1), 1-9. https://doi.org/10.1057/s41599-024-03864-y

MacCallum, R. C., Widaman, K. F., Zhang, S., & Hong, S. (1999). Sample size in factor analysis. Psychological Methods, 4(1), 84–99. https://doi.org/10.1037/1082-989X.4.1.84

MacLin, M. K. (2023). Mandela Effect. In M. K. MacLin (Eds.), Experimental design in psychology: A case approach (pp. 267–288). Routledge. https://doi.org/10.4324/9781003378044-20

Makhortykh, M., Zucker, E. M., Simon, D. J., Bultmann, D., & Ulloa, R. (2023). Shall androids dream of genocides? How generative AI can change the future of memorialization of mass atrocities. Discover Artificial Intelligence, 3(1), 1-17. https://doi.org/10.1007/s44163-023-00072-6

Matei, S. (2024). Generative artificial intelligence and collective remembering. The technological mediation of mnemotechnic values. Journal of Human-Technology Relations, 2(1), 1-22. https://doi.org/10.59490/jhtr.2024.2.7405

McAvoy, E. N., & Kidd, J. (2024). Synthetic heritage: Online platforms, deceptive genealogy and the ethics of algorithmically generated memory. Memory, Mind & Media, 3, e12. https://doi.org/10.1017/mem.2024.10

McDonald, R. P. (1999). Test theory: A unified treatment. Lawrence Erlbaum Associates Publishers.

Milfont, T. L., & Fischer, R. (2010). Testing measurement invariance across groups: Applications in cross-cultural research. International Journal of Psychological Research, 3(1), 111–130. https://doi.org/10.21500/20112084.857

Momeni, M. (2024). Artificial intelligence and political deepfakes: Shaping citizen perceptions through misinformation. Journal of Creative Communications, 20(1), 41-56. https://doi.org/10.1177/09732586241277335

Muralidhar, A., & Lakkanna, Y. (2024). From clicks to conversions: Analysis of traffic sources in E-Commerce. arXiv preprint arXiv:2403.16115. https://doi.org/10.48550/arxiv.2403.16115

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.

Origgi, G. (2018). Reputation: What it is and why it matters. Princeton University Press.

Pataranutaporn, P., Archiwaranguprok, C., Chan, S. W. T., Loftus, E., & Maes, P. (2025). Synthetic human memories: AI-edited images and videos can implant false memories and distort recollection. CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Japan, 538, 1-20. https://doi.org/10.1145/3706598.3713697

Pataranutaporn, P., Danry, V., Leong, J., Punpongsanon, P., Novy, D., Maes, P., & Sra, M. (2021). AI-generated characters for supporting personalized learning and well-being. Nature Machine Intelligence, 3(12), 1013–1022. https://doi.org/10.1038/s42256-021-00417-9

Penfield, R. D., & Giacobbi, P. R., Jr. (2004). Applying a score confidence interval to Aiken’s item Content-Relevance Index. Measurement in Physical Education and Exercise Science, 8(4), 213–225. https://doi.org/10.1207/s15327841mpee0804_3

Prasad, D., & Bainbridge, W. A. (2022). The visual Mandela effect as evidence for shared and specific false memories across people. Psychological Science, 33(12), 1971–1988. https://doi.org/10.1177/09567976221108944

Purnama, Y., & Asdlori, A. (2023). The role of social media in students’ social perception and interaction: Implications for learning and education. Technology and Society Perspectives (TACIT), 1(2), 45–55. https://doi.org/10.61100/tacit.v1i2.50

Putnick, D. L., & Bornstein, M. H. (2016). Measurement invariance conventions and reporting: The state of the art and future directions for psychological research. Developmental Review, 41, 71–90. https://doi.org/10.1016/j.dr.2016.06.004

Rigdon, E. E. (1996). CFI versus RMSEA: A comparison of two fit indexes for structural equation modeling. Structural Equation Modeling a Multidisciplinary Journal, 3(4), 369–379. https://doi.org/10.1080/10705519609540052

Rodilosso, E. (2024). Filter bubbles and the unfeeling: How AI for social media can foster extremism and polarization. Philosophy & Technology, 37(2), 1-21. https://doi.org/10.1007/s13347-024-00758-4

Rüther, M. (2024). Why care about sustainable AI? Some thoughts from the debate on meaning in life. Philosophy & Technology, 37(1), 1-19. https://doi.org/10.1007/s13347-024-00717-z

Salazar-Altamirano, M. A., Martínez-Arvizu, O. J., Galván-Vela, E., Ravina-Ripoll, R., Hernández-Arteaga, L. G., & Sánchez, D. G. (2025). AI as a facilitator of creativity and wellbeing in business students: A multigroup approach between public and private universities. Encontros Bibli Revista Eletrônica De Biblioteconomia E Ciência Da Informação, 30, 1–30. https://doi.org/10.5007/1518-2924.2025.e103485

Shanmugasundaram, M., & Tamilarasu, A. (2023). The impact of digital technology, social media, and artificial intelligence on cognitive functions: A review. Frontiers in Cognition, 2, 1203077. https://doi.org/10.3389/fcogn.2023.1203077

Sireli, O., Dayi, A., & Colak, M. (2023). The mediating role of cognitive distortions in the relationship between problematic social media use and self-esteem in youth. Cognitive Processing, 24(4), 575–584. https://doi.org/10.1007/s10339-023-01155-z

Spring, M., Faulconbridge, J., & Sarwar, A. (2022). How information technology automates and augments processes: Insights from Artificial‐Intelligence‐based systems in professional service operations. Journal of Operations Management, 68(6–7), 592–618. https://doi.org/10.1002/joom.1215

Sun, Y., Sheng, D., Zhou, Z., & Wu, Y. (2024). AI hallucination: towards a comprehensive classification of distorted information in artificial intelligence-generated content. Humanities and Social Sciences Communications, 11(1), 1-13. https://doi.org/10.1057/s41599-024-03811-x

Swart, J. (2021). Experiencing algorithms: How young people understand, feel about, and engage with algorithmic news selection on social media. Social Media + Society, 7(2). https://doi.org/10.1177/20563051211008828

Tabachnick, B. G., & Fidell, L. S. (2019). Using multivariate statistics (7th ed.). Pearson.

Theodorakopoulos, L., Theodoropoulou, A., & Klavdianos, C. (2025). Interactive viral marketing through big data analytics, influencer networks, AI integration, and ethical dimensions. Journal of Theoretical and Applied Electronic Commerce Research, 20(2), 115. https://doi.org/10.3390/jtaer20020115

Torous, J., Bucci, S., Bell, I. H., Kessing, L. V., Faurholt‐Jepsen, M., Whelan, P., Carvalho, A. F., Keshavan, M., Linardon, J., & Firth, J. (2021). The growing field of digital psychiatry: Current evidence and the future of apps, social media, chatbots, and virtual reality. World Psychiatry, 20(3), 318–335. https://doi.org/10.1002/wps.20883

Trigka, M., & Dritsas, E. (2025). The evolution of Generative AI: Trends and applications. IEEE Access, 13, 98504-98529. https://doi.org/10.1109/access.2025.3574660

Trizano-Hermosilla, I., & Alvarado, J. M. (2016). Best alternatives to Cronbach’s alpha reliability in realistic conditions: Congeneric and asymmetrical measurements. Frontiers in Psychology, 7, 769. https://doi.org/10.3389/fpsyg.2016.00769

Wilcox, R. R. (1980). Some results and comments on using latent structure models to measure achievement. Educational and Psychological Measurement, 40(3), 645–658. https://doi.org/10.1177/001316448004000308

Williamson, S. M., & Prybutok, V. (2024). The era of Artificial Intelligence deception: unraveling the complexities of false realities and emerging threats of misinformation. Information, 15(6), 299. https://doi.org/10.3390/info15060299

Wu, X., Zhou, Z., & Chen, S. (2024). A mixed-methods investigation of the factors affecting the use of facial recognition as a threatening AI application. Internet Research, 34(5), 1872–1897. https://doi.org/10.1108/intr-11-2022-0894

Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. Profile Books.

Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

Copyright 2025 Lucirene Rangel-Lyne, Mario Alberto Salazar-Altamirano