While some predict mass unemployment or all-out war between humans and artificial intelligence, others foresee a less bleak future: one in which humans and intelligent systems are inseparable, bound together in a continual exchange of information and goals, a "symbiotic autonomy," if you will. It will be hard to distinguish human agency from automated assistance, but neither people nor software will be much use without the other.
Mutual Co-existence – A Symbiotic Autonomy
In the future, I believe there will be a co-existence between humans and artificial intelligence systems that will, hopefully, be of service to humanity. These AI systems will include software that handles the digital world, systems that move around in physical space, like drones, robots, and autonomous cars, and systems that sense and process the physical world, like the Internet of Things.
I don't think that AI will become an existential threat to humanity. Not that it's impossible, but we would have to be very stupid to let that happen. Others have claimed that we would have to be very smart to prevent it from happening, but I don't think that's true.
If we are smart enough to build machines with super-human intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity. There is also a fallacy that comes from the fact that our only exposure to intelligence is through other humans. There is absolutely no reason to believe that intelligent machines will even want to dominate the world or threaten humanity. The will to dominate is a very human one (and only for certain humans).
Even in humans, intelligence is not correlated with a desire for power. In fact, current events tell us that the thirst for power can be excessive (and somewhat successful) in people with limited intelligence.
You will have more intelligent systems in the physical world, too: not just on your cell phone or computer, but physically present around us, sensing and processing information about the physical world and helping us with decisions that depend on knowing a lot about that world. As time goes by, we'll also see these AI systems take on broader problems in society: managing traffic in a big city, for instance; making complex predictions about the climate; supporting humans in the big decisions they have to make.
Intelligence with Accountability
A lot of companies are working hard on making machines able to explain themselves: to be accountable for the decisions they make, to be transparent. A lot of the research we do lets humans or users query the system. When Cobot, my robot, arrives at my office slightly late, a person can ask, "Why are you late?" or "Which route did you take?"
So researchers are working on the ability of these AI systems to explain themselves, while they learn and improve, and to provide explanations at different levels of detail. People want to interact with these robots in ways that eventually lead us humans to trust AI systems more. You would like to be able to ask, "Why are you saying that?" or "Why are you recommending this?" Providing that explanation is a large part of the research being done, and I believe that robots able to do this will lead to better understanding of, and trust in, these AI systems. Eventually, through these interactions, humans will also be able to correct the AI systems, so researchers are trying to incorporate these corrections and have the systems learn from instruction. I think that's a big part of our ability to coexist with these AI systems.
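To make this concrete, here is a minimal sketch in Python of what such a queryable system might look like. The names here (ExplainableRobot, Decision, why_late) are illustrative assumptions, not Cobot's actual implementation; the point is only that a robot which records the reason for each choice as it acts can later answer questions like "Why are you late?" or "Which route did you take?"

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One choice the robot made, with the reason it made it."""
    action: str                  # e.g. "took corridor B"
    reason: str                  # e.g. "corridor A was blocked"
    delay_seconds: float = 0.0   # extra time this choice cost

@dataclass
class ExplainableRobot:
    """Hypothetical robot that keeps a decision log so it can explain itself."""
    log: list = field(default_factory=list)

    def record(self, action: str, reason: str, delay_seconds: float = 0.0) -> None:
        """Log a decision at the moment it is made."""
        self.log.append(Decision(action, reason, delay_seconds))

    def why_late(self) -> str:
        """Answer 'Why are you late?' by citing the costliest logged decision."""
        culprits = [d for d in self.log if d.delay_seconds > 0]
        if not culprits:
            return "I was not delayed."
        worst = max(culprits, key=lambda d: d.delay_seconds)
        return f"I {worst.action} because {worst.reason}; it cost me {worst.delay_seconds:.0f} seconds."

    def which_route(self) -> str:
        """Answer 'Which route did you take?' by replaying the action log."""
        return " -> ".join(d.action for d in self.log)

robot = ExplainableRobot()
robot.record("took the elevator to floor 7", "the destination is on floor 7")
robot.record("took corridor B", "corridor A was blocked by a cleaning cart",
             delay_seconds=90.0)
print(robot.why_late())      # explains the delay
print(robot.which_route())   # replays the route
```

A fuller version could also accept human corrections attached to logged decisions (for example, "corridor A is usually clear after 6 pm") and feed them back into the planner, which is the learning-from-instruction loop described above.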
The Worst-Case Contingency
A lot of the bad things humans do to each other are very specific to human nature. Behaviors like becoming violent when we feel threatened, being jealous, wanting exclusive access to resources, and preferring our next of kin to strangers were built into us by evolution for the survival of the species. Intelligent machines will not have these basic behaviors unless we explicitly build them in. Why would we?
Also, if someone deliberately builds a dangerous, generally-intelligent AI, others will be able to build a second, narrower AI whose only purpose is to destroy the first one. If both AIs have access to the same amount of computing resources, the second one will win, just as a tiger, a shark, or a virus can kill a human of superior intelligence.
In October 2014, Musk ignited a global discussion on the perils of artificial intelligence. Humans might be doomed if we make machines that are smarter than us, Musk warned. He called artificial intelligence our greatest existential threat.
Musk explained that his attempt to sound the alarm on artificial intelligence didn't have an impact, so he decided to try to develop artificial intelligence in a way that would have a positive effect on humanity.
Brain-machine interfaces could overhaul what it means to be human and how we live. Today, technology is implanted in brains only in very limited cases, such as to treat Parkinson's disease. Musk wants to go further, creating a robust plug-in for our brains that every human could use. The brain plug-in would connect to the cloud, allowing anyone with a device to immediately share thoughts.
Humans could communicate without having to talk, call, email or text. Colleagues scattered across the globe could brainstorm via a mind meld. Learning would be instantaneous. Entertainment would be any experience we desired. Ideas and experiences could be shared from brain to brain.
We would be living in virtual reality, without having to wear cumbersome goggles. You could re-live a friend’s trip to Antarctica — hearing the sound of penguins, feeling the cold ice — all while your body sits on your couch.
Final Word – Is AI Uncertainty Really About AI?
I think that the research being done on autonomous systems (autonomous cars, autonomous robots) is a call to humanity to be responsible. In some sense, it has nothing to do with the AI itself. The technology will be developed. It was invented by us, by humans. It didn't come from the sky; it's our own discovery. The human mind conceived this technology, and it's up to the human mind to make good use of it.
I'm optimistic because I really think humanity is aware that it needs to handle this technology carefully. It's a question of being responsible, just as with any other technology ever conceived, including potentially devastating ones like nuclear armaments. But the best thing to do is invest in education. Leave the robots alone; they will keep getting better. Focus instead on education, on people knowing and caring for each other, on the advancement of society, of Earth, of nature, and on improving science. There are so many things we can get involved in as humankind that could make good use of this technology we're developing.