February 18, 2019
Can AI extend our minds?
By: Madhu Vazirani

Some researchers think so. In fact, an emerging body of research on the "extended mind" and "embodied cognition" explains how our cognition is strongly shaped by factors beyond the brain itself.

We naturally use tools and technologies to augment ourselves, incorporating them into our cognition. This human-machine symbiosis is heightened with artificial intelligence, as seen in our report "Missing Middle Skills for Human-AI Collaboration." Hyundai, for example, provides robotic exoskeletons, or body suits, that allow factory workers to lift heavy objects with no stress to their legs, arms or back, effectively adding artificial strength to the worker's physique.

So far, so good—in fact, great! But being connected to AI is not necessarily healthy for our neurological system. Embodying technologies excessively as part of our cognition can cause mental overload, fatigue or distraction. Studies show that using Google Glass and other electronic devices while driving creates inattentional blindness, in which drivers fail to notice what is happening around them, drastically reducing their mental focus on the road.

Heavy reliance on external intelligence can also inhibit or erode the capability of the workers involved, causing cognitive loss: the principle of "use it or lose it." Vivienne Ming shared a longitudinal study of London taxi drivers showing that hippocampal gray matter increases with the training and experience required to navigate London's complex layout of 25,000 streets. Because years of manual navigation built these drivers' spatial intelligence, people who rely heavily on automated global positioning systems may risk losing the mental sharpness and brain volume that GPS-free navigation develops.

Similarly, consider non-cognitive, or socio-emotional, intelligence. One study found that heavy smartphone usage has a negative and lasting impact on users' ability to think, remember, pay attention and regulate emotion. Another showed that tech-driven virtual work environments erode social bonds among colleagues, leading to loneliness and a diminished capacity to share feelings—in other words, empathy. While we teach empathy to machines, will we lose our own?


This raises fundamental questions about responsible AI. From the start, AI should be designed to improve cognitive functioning: prioritizing what data and information an individual is presented with at a given moment, and embedding tactics to monitor and manage physiological states.

Hyundai's robotic exoskeletons, mentioned above, also collect data on how well wearers are walking and what could be improved to make walking easier and their health better. An electroencephalography (EEG) system tested at Honeywell Aerospace uses brain-activity data to give advance warning when a pilot may be entering a mental state of fatigue or extreme workload.

Any such solution must consider the long-term impact on employee well-being. If a company requires workers to wear helmets fitted with brain sensors, this "emotional surveillance" creates a different kind of stress and trust issue, even though it is intended to help workers (see our report "Decoding Organizational DNA").

Companies must strike a balance between using AI to augment employees and addressing any cognitive load or loss caused inadvertently. They can draw on neuroscience to determine the optimal degree of virtual exposure (e.g., Fidelity Investments' use of virtual reality for empathy training) and combine it with real experiences (e.g., mindfulness practices, machine-free social activities) that stimulate different cognitive pathways in the brain.

Certainly, AI can extend our minds, but only with responsible AI will we, as individuals, be better off afterward.
