In 2017, I explored the various ways that human interaction was likely to change. Two years later, I shared predictions from Ray Kurzweil on interacting in a world that is increasingly instrumented and machine-oriented. Ray envisions a deep transformation in how we interact in such a society, including thought commands. The possibility of interacting with the world using our brains still feels like science fiction to most. Whether it is moving an object (like the racecar video in my earlier post) or communicating brain-to-brain with another human, it is hard to wrap our minds around so profound a change.
Yet, advances in brain science bring these opportunities and more into the realm of possibility. Much like enabling a paraplegic to drive a car in the referenced video, a recent article describes brain-computer interfaces (BCIs) that may one day help people with brain or spinal injuries move or communicate.
Stimulation is done with tiny electrical pulses that can activate neural activity. The stimulation is driven by the same hub that coordinates neural recording and could one day restore brain function lost to illness or injury, researchers hope.
— Brown University, "The next-generation brain-computer interface system took a leap forward"
This work provides insight into the brain and creates more possibilities for enabling the disabled, representing another example of purpose-orientation focused on solving some of our greatest challenges. Along the way, other applications materialize: some good, some bad. One of those applications contributes to the rapidly changing ways we interact with the world. Both what we interact with and how we interact are changing. The transition in how humans interact dates back to the 1970s. This transformational journey has so far unfolded in stages:
First transformation: typing in the mainframe era. In the 1970s, people typed on a keyboard and used text characters to communicate directly with computers.
Second transformation: clicking in the GUI and Internet era. The graphical user interface (GUI) arrived when Apple introduced the Macintosh in 1984, followed a few years later by Microsoft Windows. We added clicking with a mouse to our modes of interaction.
Third transformation: touching in the smartphone era. The third transformation came in 2007 with the introduction of smartphones. Touch transformed personal computing and became the primary interface.
We are now in the fourth transformation: conversational, gesturing, and eye interaction in the mixed reality era. In this era, people will interact by tapping, winking, blinking, talking, and gesturing their way through interactions. An evolution toward eye-based interaction seems inevitable, as technology ultimately understands intent and our eyes drive action without gesture or voice cues. The pandemic has accelerated this journey, and a fifth transformation is on the horizon: brain interaction in the brainwave era.