In a recent post on the transformation of interaction, I talked about the changing ways in which we interact with our machines and with each other. These changes combine with the emergence of ecosystems to complicate the experience journey – and if you saw my last post on our possible future, you know that journey could get very complicated. I’ve expanded my original visual to incorporate three additional categories of interaction, along with some of what science and technology (potentially) have in store for our interaction paradigms.
I’ll cover the changes to the visual here briefly.
Machine-to-human: machines already interact with us. A sensor on a water pipe notifies us when it detects a leak, wearable devices monitor our health and alert us when our blood sugar is low, and our GPS asks whether we would like to switch to a faster route.
Machine-to-machine: as the Internet of Things ramps up, many additional scenarios emerge that take the human out of the equation. You leave work and your car communicates with your home that you will arrive in 30 minutes. Your home turns on the lights, sets the heat to your preferred setting, and starts your favorite playlist as your car pulls in the driveway.
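The machine-to-machine handoff above can be sketched as a tiny publish/subscribe loop. This is a toy illustration only – the `Hub` class, topic names, and device reactions are all hypothetical, not a real smart-home API:

```python
# Minimal sketch: the car publishes events, home devices react to them.
# All names here are illustrative assumptions, not an actual protocol.

class Hub:
    """Routes messages from publishers (the car) to subscribers (devices)."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, handler):
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers.get(topic, []):
            handler(payload)


actions = []
hub = Hub()

# Home devices react to the car's estimated time of arrival.
hub.subscribe("car/eta", lambda eta: actions.append(f"lights on ({eta} min out)"))
hub.subscribe("car/eta", lambda eta: actions.append("heat set to preferred temp"))
hub.subscribe("car/arrived", lambda _: actions.append("playlist started"))

# Leaving work, the car publishes its ETA; later, its arrival.
hub.publish("car/eta", 30)
hub.publish("car/arrived", None)

print(actions)
```

The point of the pattern is that no human sits in the loop: each device subscribes to the events it cares about, and the car never needs to know which devices exist.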
Human assistants: Siri, Google Now, Cortana, and Alexa are intelligent personal assistants that answer questions, retrieve information, and perform simple tasks. Chatbots will increasingly play a similar role, as personal assistants remove friction from our lives one task at a time. These assistants interact with whatever ecosystem is required to complete a task.
Human agents: in the next phase, the assistant becomes an agent, handling things without our supervision: Waze combines with a driverless car to take you home without your involvement. We interact with the agent to provide direction, but the agent takes it from there. Over time, access to our biometric data, DNA scans, medical records, and other personal data allows these agents to know us better than we know ourselves. That’s when agents begin to make decisions and interact for us – you really want this candidate in the next election? I know you well, and I can’t be distracted by campaign noise. The agent then votes on our behalf.
Autonomous agents: in this phase, agents interact with other agents. Our agent takes the job interview for us, interacting directly with the employer’s agent, or talks to the agent of a potential partner to see whether we should consider a date.
DECENTRALIZED AUTONOMOUS ORGANIZATIONS
In this transformative step, entities begin to interact with one another in a shift to an entity-to-entity scenario: the driverless car owns itself and interacts with the charging station to recharge its battery. The platform that supports my mobility needs is decentralized and autonomous, providing a true sharing-economy mechanism for value creation and capture for all stakeholders.
BRAIN-TO-BRAIN
Interaction at this level involves direct brain-to-brain communication between two individuals (talk about having voices in your head).
Some of this may never come to pass. On the other hand, science and technology are converging in a way that makes even the most ridiculous-sounding scenario viable – and at a speed we could not have imagined even two years ago. One thing is certain: the way we interact will continue to change, and with it, the way we experience life.