1. Introduction

Artificial intelligence (AI) has swiftly transitioned from a concept of science fiction to a cornerstone of modern technology, permeating many aspects of daily life, from virtual assistants and autonomous vehicles to sophisticated decision-making systems in healthcare and finance. The integration of AI into these areas has led to significant advancements, promising increased efficiency, accuracy, and personalized experiences [1]. However, as AI continues to evolve and take on more roles traditionally performed by humans, it raises critical questions about its potential impact on human intelligence.

Human intelligence encompasses a range of cognitive abilities, including language, memory, problem-solving, creativity, and decision-making [2]. Historically, these abilities have been developed and honed through engagement with complex tasks and challenges. The advent of AI, especially large language models (LLMs), introduces a new dynamic, where many of these tasks are increasingly automated. This shift prompts a crucial inquiry: does the convenience and capability of AI come at the cost of diminishing our own cognitive faculties?

One significant concern is the phenomenon of "cognitive offloading," where individuals rely on external tools to perform cognitive tasks that they would otherwise handle themselves [3]. While cognitive offloading can free up mental resources for other activities, it also raises the possibility that over-reliance on AI could lead to a decline in essential cognitive skills, such as memory retention, critical thinking, and problem-solving abilities. Sparrow and colleagues [4] described this in the context of the "Google effect," whereby easy access to online information reduces the likelihood that individuals retain that information. As AI systems become more integrated into educational environments, workplaces, and our personal lives, understanding this potential trade-off becomes increasingly important.

Additionally, there is growing evidence to suggest that AI might contribute to skill degradation in both everyday tasks and specialized professional roles [5]. As AI takes over routine operations, individuals may become less adept at performing these tasks manually. This is particularly concerning in fields where manual dexterity, spatial awareness, and creative problem-solving are essential. For example, habitual use of GPS technology can negatively impact spatial memory, suggesting that reliance on AI for navigation could erode natural navigational skills over time [6]. The potential erosion of these skills could have far-reaching implications, not only for individual capabilities but also for societal resilience in the face of technological failures or limitations.

Another dimension of this debate involves the impact of AI on decision-making processes. AI systems, with their ability to analyze vast datasets and provide recommendations, are increasingly used in decision-making roles across various sectors [7]. While these systems can enhance the accuracy and efficiency of decisions, there is a risk that individuals may become overly reliant on AI-generated advice, leading to a decline in their own decision-making abilities. Further, individuals tend to prefer AI-generated advice over human advice, even in contexts where human judgment might be more contextually aware [8]. This raises concerns about the erosion of human agency, where individuals defer to AI without critically evaluating its recommendations.

Despite these concerns, it is essential to acknowledge the potential of AI to augment human intelligence. AI can complement human cognitive processes, providing tools and insights that enhance our ability to solve complex problems, learn new skills, and innovate [9]. The challenge lies in striking a balance between leveraging AI's capabilities and preserving the cognitive faculties that are central to human intelligence.

This narrative review explores the multifaceted relationship between AI and human intelligence, focusing on the potential for AI to limit or enhance cognitive abilities. By synthesizing existing literature on cognitive offloading, skill degradation, decision-making, and the broader implications for human agency, it seeks to provide a comprehensive understanding of the impact of AI on human cognition. It also considers how AI might be designed and implemented in ways that support, rather than undermine, our intellectual development. To identify relevant literature, a non-systematic search was conducted in PubMed, PsycINFO, Scopus, and Google Scholar, employing keywords including “cognitive offloading,” “skill degradation,” “AI and decision-making,” “human agency,” and “artificial intelligence and cognition.” Additional references were gathered through manual screening of citations from key articles and recent reviews.

2. Cognitive Offloading and Memory

The concept of cognitive offloading has gained significant attention in recent years, particularly in the context of AI. Cognitive offloading occurs when individuals use external devices or systems to store information or perform tasks, thereby reducing the load on their own memory and other cognitive functions. Sparrow, Liu, and Wegner first articulated this phenomenon in 2011, noting that the ease of accessing information online leads to poorer working memory and diminished memory retention [4], a finding that has been supported by subsequent research [18,19,20]. This effect is amplified by AI systems, which not only store information but also process it, making human intervention less and less necessary. However, studies suggest that increasing the cost of offloading information reduces offloading behavior, which in turn enhances memory performance [18].

In The Shallows: What the Internet Is Doing to Our Brains, Carr argues that the internet, and by extension AI, is reshaping our neural circuits, leading to a superficial understanding of information and a decline in deep thinking and memory retention [10]. This work aligns with the broader theory of neuroplasticity, which posits that the brain adapts to the demands placed on it. As AI takes over more cognitive tasks, there is a risk that our brains will become less capable of performing these tasks independently, leading to a decline in cognitive abilities such as memory and critical thinking (Bai et al., 2023; Zhai et al., 2024).

The implications of cognitive offloading are profound, particularly in educational contexts. A study [11] found that students who relied heavily on search engines to complete academic tasks were less likely to retain the information they found. This suggests that AI-driven educational tools, while beneficial in many respects, could potentially hinder the development of deep, long-term memory in students. This raises important questions about the design of educational technologies and the need to balance the convenience of AI with the necessity of cognitive engagement.

3. Skill Degradation

Skill degradation is another significant concern associated with the widespread use of AI. Automation and AI-driven systems like LLMs are increasingly performing tasks that were once the domain of human expertise. While this can lead to increased efficiency, it also poses the risk of eroding the skills necessary to perform these tasks manually. Parasuraman and Manzey [5] highlighted this issue in their research on automation, noting that as humans become more reliant on automated systems, their ability to perform these tasks without assistance diminishes.

This concern is particularly relevant in high-stakes professions. For instance, the aviation industry has seen an increasing reliance on autopilot systems, leading to concerns about pilots' manual flying skills. A study [12] found that pilots who frequently used automation were less proficient in manual flight operations. This has led to calls for more balanced training programs that ensure pilots maintain their manual skills even as they rely on automation.

Similarly, the medical field is experiencing a shift toward AI-driven diagnostics and treatment planning. While these technologies can improve accuracy and efficiency, there is concern that over-reliance on AI could lead to a decline in doctors' diagnostic skills. Topol, in Deep Medicine [13], argues that while AI can enhance medical practice, it is crucial to ensure that doctors continue to develop and maintain their diagnostic and clinical skills. This balance is essential to prevent skill degradation in a field where human judgment is often irreplaceable.

The implications of skill degradation extend beyond professional domains, significantly affecting everyday skills such as navigation. Research consistently demonstrates that habitual reliance on GPS technology negatively impacts spatial memory and cognitive mapping abilities (Dahmani & Bohbot, 2020; Hejtmánek et al., 2018). Regular GPS users often show poorer performance in self-guided navigation tasks and experience declines in hippocampal-dependent spatial memory over time (Dahmani & Bohbot, 2020). Eye-tracking studies further indicate that increased attention to GPS interfaces corresponds to less accurate spatial knowledge and longer, less efficient travel paths when navigation assistance is absent (Hejtmánek et al., 2018). Conversely, navigation instructions that integrate personally relevant landmarks or contextual information can facilitate incidental spatial learning and significantly improve spatial memory retention (Gramann et al., 2017). Additionally, employing alternative GPS navigation methods, such as auditory-based 3D spatial audio systems, actively engages users in spatial navigation tasks and fosters the development of more accurate cognitive maps compared to conventional turn-by-turn visual instructions (Clemenson et al., 2021). Collectively, these findings underscore the importance of thoughtfully designed navigation technologies that not only assist users but also actively support and enhance spatial cognitive skills [6].

Fig. 1

AI's role in decision-making processes is another area of concern. As AI systems become more sophisticated, they are increasingly being used to make decisions in various domains, from finance to healthcare. While AI has the potential to improve decision-making accuracy, there is growing evidence that reliance on AI can lead to "automation complacency," where individuals become overly dependent on AI systems and fail to engage in critical evaluation of AI-generated decisions.

A study found that individuals tend to prefer AI-generated advice over human advice, even in situations where human judgment might be more appropriate [8]. This reliance on AI can foster cognitive miserliness, in which individuals defer to AI systems without fully engaging in the decision-making process themselves. This phenomenon has been observed in various contexts, including finance, where traders increasingly rely on algorithmic trading systems (Packin, 2019), and healthcare, where doctors may rely on AI for diagnostic decisions (Jussupow et al., 2021; Panch et al., 2019).

Another study complicates this picture: people tend to abandon algorithmic judgment after observing it make even minor errors, despite the algorithm still outperforming human judgment [14]. Taken together with algorithm appreciation [8], this suggests that trust in AI is often poorly calibrated in both directions, raising concerns about the erosion of human agency. As individuals nonetheless become more reliant on AI systems, there is a risk that they may lose confidence in their own decision-making abilities, leading to a decline in critical thinking and problem-solving skills.

This erosion of human agency is particularly concerning in contexts where ethical considerations are paramount. As AI systems increasingly assume decision-making roles, there is a risk that human moral and ethical judgments may become marginalized [15]. For example, algorithmic decision-making in healthcare can obscure accountability, foster defensive medicine practices, and risk undermining patient autonomy by implicitly enforcing algorithm-driven values and treatment priorities (Grote & Berens, 2020). Consequently, important decisions may be made without adequate consideration of their ethical implications, raising significant concerns about transparency, fairness, and the overall role of AI within society.

Moreover, there is the risk of "automation bias," where individuals are more likely to trust and follow decisions made by automated systems, even when those decisions are flawed. This bias can have serious consequences, particularly in safety-critical environments such as aviation and healthcare. To mitigate these risks, it is essential to design AI systems that encourage human participation and critical evaluation rather than passive acceptance of AI-generated decisions [16].

4. Artificial Intelligence as a Cognitive Augmenter

While the concerns about AI's impact on human intelligence are valid, it is also important to recognize the potential for AI to enhance human cognitive abilities. AI can complement human cognition, particularly in areas where it provides tools and insights that enhance rather than replace human decision-making.

In education, AI-driven platforms have shown promise in personalizing learning experiences to cater to individual needs, thereby enhancing cognitive development. These platforms can adapt to the learning pace and style of individual students, providing targeted feedback that can improve learning outcomes. However, the challenge lies in ensuring that these platforms do not encourage cognitive miserliness or passive reliance, but rather foster active engagement and critical thinking.

In healthcare, AI has the potential to revolutionize diagnostics and treatment planning. AI systems can process vast amounts of data to identify patterns that may not be immediately apparent to human practitioners, thereby improving diagnostic accuracy and enabling more personalized treatment plans [13]. However, the integration of AI in healthcare must be carefully managed to ensure that doctors continue to develop and maintain their clinical skills, and that AI is used as a tool to augment rather than replace human judgment.

AI also holds promise in enhancing human creativity and innovation. It is argued that AI can take over routine tasks, freeing up cognitive resources for more innovative and strategic thinking. For instance, in the creative industries, AI tools are being used to assist in tasks such as content generation, design, and music composition. While these tools can enhance productivity, it is crucial to ensure that they do not stifle human creativity by taking over the creative process entirely [9].

Fig. 2

The interaction between AI and human intelligence presents both remarkable opportunities and substantial challenges, embodying a complex relationship with the potential to either enhance or undermine cognitive abilities. AI has notably advanced several fields by enabling sophisticated problem-solving, automating routine tasks, and providing highly personalized learning environments. For instance, AI-driven analysis of large datasets surpasses human capability, revealing hidden patterns and supporting decision-making processes across industries such as healthcare, finance, and education. Additionally, AI-driven cognitive retraining methods have demonstrated significant promise in facilitating the recovery of cognitive functions in patients with acquired brain injuries [17].

4.1 Cognitive Risks of Reliance on AI

However, these advancements are accompanied by significant cognitive risks, notably the reduction in essential cognitive skills due to increased reliance on AI. Cognitive offloading, characterized by individuals delegating cognitive responsibilities such as memory and problem-solving to AI systems, presents a major concern. While offloading provides efficiency, it risks cognitive atrophy in abilities such as spatial navigation, memory retention, and critical thinking, potentially diminishing human independence in these domains.

Similarly, skill degradation emerges as a prominent risk associated with extensive AI integration. Over-reliance on AI systems for routine and complex tasks can weaken human expertise and diminish essential cognitive and professional competencies. This phenomenon particularly affects soft skills, including empathy and moral reasoning, which are critical in fields like healthcare, education, and leadership. Additionally, automated decision-making processes can inadvertently promote passive engagement, where individuals rely excessively on AI-generated decisions, thus limiting their own critical evaluation capabilities.

4.2 Strategies for Preserving Human Cognition

Addressing these challenges requires strategic and proactive interventions. AI systems should be deliberately designed to encourage active human engagement, promoting cognitive stimulation rather than mere convenience. Educational environments, for example, could integrate AI tools in ways that foster active, collaborative problem-solving, rather than passive information retrieval. Transparency and interpretability of AI systems must also be prioritized, allowing users to critically assess and understand AI-driven decisions, thereby preserving and even strengthening human judgment.

Moreover, educational and professional development should emphasize cultivating uniquely human cognitive skills such as creativity, emotional intelligence, ethical reasoning, and complex problem-solving. These human-centric skills will complement AI’s capabilities and ensure continued human relevance and indispensability in roles demanding nuanced, empathetic, and innovative thinking.

Finally, robust ethical frameworks and policies are necessary to guide the responsible integration and equitable use of AI technologies. Policymakers and technology developers should collaborate to establish clear guidelines that protect privacy, promote fairness, and ensure that AI enhances rather than diminishes human cognitive faculties. Through such balanced and proactive measures, society can leverage AI’s full potential while safeguarding human intelligence and agency.

5. Conclusion

Artificial intelligence has the potential to both limit and enhance human intelligence. The key to maximizing AI's benefits while minimizing its drawbacks lies in striking a balance between leveraging AI's capabilities and preserving the cognitive faculties that are central to human intelligence. As we continue to integrate AI into our lives, it is essential to develop strategies that promote a synergistic relationship between humans and AI, ensuring that technology serves as a tool for cognitive enhancement rather than a crutch that leads to intellectual decline.

However, further research is needed to explore the long-term impacts of AI on human cognition and to develop strategies that promote a balanced and constructive relationship between AI and human intelligence. By fostering an environment where AI is used to complement rather than replace human cognitive processes, we can ensure that AI serves to enhance, rather than diminish, our intellectual capacities.

6. References

  1. Russell SJ, Norvig P. Artificial intelligence: a modern approach. Pearson; 2016.

  2. Colom R, Karama S, Jung RE, Haier RJ. Human intelligence and brain networks. Dialogues in Clinical Neuroscience. 2010;12(4):489-501. https://doi.org/10.31887/DCNS.2010.12.4/rcolom

  3. Risko EF, Gilbert SJ. Cognitive offloading. Trends in cognitive sciences. 2016 Sep 1;20(9):676-88.

  4. Sparrow B, Liu J, Wegner DM. Google effects on memory: Cognitive consequences of having information at our fingertips. Science. 2011 Aug 5;333(6043):776-8.

  5. Parasuraman R, Manzey DH. Complacency and bias in human use of automation: An attentional integration. Human factors. 2010 Jun;52(3):381-410.

  6. Dahmani L, Bohbot VD. Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific reports. 2020 Apr 14;10(1):6310.

  7. Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Minds and machines. 2018 Dec;28:689-707.

  8. Logg JM, Minson JA, Moore DA. Algorithm appreciation: People prefer algorithmic to human judgment. Organizational Behavior and Human Decision Processes. 2019 Mar 1;151:90-103.

  9. Brynjolfsson E, McAfee A. Machine, platform, crowd: Harnessing our digital future. New York: WW Norton & Company; 2017 Jun 27.

  10. Carr N. The shallows: What the Internet is doing to our brains. WW Norton & Company; 2020.

  11. Storm BC, Stone SM, Benjamin AS. Using the Internet to access information inflates future use of the Internet to access other information. Memory. 2017 Jul 3;25(6):717-23.

  12. Casner SM, Geven RW, Recker MP, Schooler JW. The retention of manual flying skills in the automated cockpit. Human factors. 2014 Dec;56(8):1506-16.

  13. Topol E. Deep medicine: how artificial intelligence can make healthcare human again. Hachette UK; 2019.

  14. Dietvorst BJ, Simmons JP, Massey C. Algorithm aversion: people erroneously avoid algorithms after seeing them err. Journal of experimental psychology: General. 2015 Feb;144(1):114-26.

  15. Winfield AF, Jirotka M. Ethical governance is essential to building trust in robotics and artificial intelligence systems. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2018 Nov 28;376(2133):20180085.

  16. Cummings ML. Automation bias in intelligent time critical decision support systems. In Decision making in aviation 2017 Jul 5 (pp. 289-294). Routledge.

  17. Choo YJ, Chang MC. Use of machine learning in stroke rehabilitation: a narrative review. Brain & Neurorehabilitation. 2022 Oct 31;15(3):e26.

  18. Grinschgl S, Papenmeier F, Meyerhoff HS. Consequences of cognitive offloading: Boosting performance but diminishing memory. Quarterly Journal of Experimental Psychology. 2021 Sep;74(9):1477-96.

  19. Morrison AB, Richmond LL. Offloading items from memory: Individual differences in cognitive offloading in a short-term memory task. Cognitive Research: Principles and Implications. 2020 Dec;5:1-3.

  20. Risko EF, Dunn TL. Storing information in-the-world: Metacognition and cognitive offloading in a short-term memory task. Consciousness and cognition. 2015 Nov 1;36:61-74.

Bai, L., Liu, X., & Su, J. (2023). ChatGPT: The cognitive effects on learning and memory. Brain-X, 1(3), e30. https://doi.org/10.1002/brx2.30
Clemenson, G. D., Maselli, A., Fiannaca, A. J., Miller, A., & Gonzalez-Franco, M. (2021). Rethinking GPS navigation: creating cognitive maps through auditory clues. Scientific Reports, 11(1), 7764. https://doi.org/10.1038/s41598-021-87148-4
Dahmani, L., & Bohbot, V. D. (2020). Habitual use of GPS negatively impacts spatial memory during self-guided navigation. Scientific Reports, 10(1), 6310. https://doi.org/10.1038/s41598-020-62877-0
Gramann, K., Hoepner, P., & Karrer-Gauss, K. (2017). Modified Navigation Instructions for Spatial Navigation Assistance Systems Lead to Incidental Spatial Learning. Frontiers in Psychology, 8. https://doi.org/10.3389/fpsyg.2017.00193
Grote, T., & Berens, P. (2020). On the ethics of algorithmic decision-making in healthcare. Journal of Medical Ethics, 46(3), 205–211. https://doi.org/10.1136/medethics-2019-105586
Hejtmánek, L., Oravcová, I., Motýl, J., Horáček, J., & Fajnerová, I. (2018). Spatial knowledge impairment after GPS guided navigation: Eye-tracking study in a virtual town. International Journal of Human-Computer Studies, 116, 15–24. https://doi.org/10.1016/j.ijhcs.2018.04.006
Jussupow, E., Spohrer, K., Heinzl, A., & Gawlitza, J. (2021). Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence. Information Systems Research, 32(3), 713–735. https://doi.org/10.1287/isre.2020.0980
Packin, N. G. (2019). Consumer Finance and AI: The Death of Second Opinions? Cyberspace Law eJournal. https://www.semanticscholar.org/paper/Consumer-Finance-and-AI%3A-The-Death-of-Second-Packin/5ab0bf447ca21f6b0c7731767e9f04e436b4b91c
Panch, T., Mattie, H., & Atun, R. (2019). Artificial intelligence and algorithmic bias: implications for health systems. Journal of Global Health, 9(2), 010318. https://doi.org/10.7189/jogh.09.020318
Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students’ cognitive abilities: a systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7