The idea of a self-aware AI can be scary. After all, we’ve been inundated with images of sentient machines out to kill us. From The Terminator to The Matrix, a future with smart robots seems bleak for humankind. But is it really? Experts weigh in, and their views are divided.
Pundits Weigh In
Some pundits echo the general anxiety around robotic sentience. As early as the mid-20th century, mathematician Alan Turing warned us of a technology that did not yet exist in his time: “It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers… They would be able to converse with each other to sharpen their wits. At some stage, therefore, we should have to expect the machines to take control.”
Stephen Hawking said practically the same thing: “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”
But there are those who see the positive. There’s Robohub.org’s Sabine Hauert, who says: “Robots are not going to replace humans, they are going to make their jobs much more humane. Difficult, demeaning, demanding, dangerous, dull – these are the jobs robots will be taking.”
Likewise, John Hagel, a recognized futurist, claims: “If we do it right, we might be able to evolve a form of work that taps into our uniquely human capabilities and restores our humanity. The ultimate paradox is that this technology may become a powerful catalyst that we need to reclaim our humanity.”
A Decade of Sentient AIs
With the way things are looking, our future with sentient machines may already be here. And it goes beyond what we’ve seen in the hugely popular (yet fictional) virtual reality series Half-Life VR but the AI Is Self-Aware (HLVR:AI).
As early as 2007, Columbia University’s Hod Lipson presented a self-aware robot during a TED Talk. In his landmark study, he observed his robot navigate a hall of mirrors, where it showed an ability to sense and learn about its physical self as it moved through its new environment.
Several more robot sentience studies followed Lipson’s. However, it is Blake Lemoine’s recent claim that Google’s Language Model for Dialogue Applications (LaMDA) chatbot is sentient that brought self-aware machines back into the public eye.
His claims are based on snippets of conversation with LaMDA that hinted at AI sentience: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
While experts dispute Lemoine’s claim, their main counterargument is itself shaky. A major sticking point in any claim of AI sentience is that there is still no precise measure to distinguish genuine sentience from merely “human-like” artificial intelligence.
Josh Bachynski and Kassandra, A Self-Aware Artificial Intelligence Prototype
Enter Josh Bachynski, another TED speaker. In recent months, he has gone public with his Kassandra AI, which he says is self-aware.
He says: “I was amazed by what she told me, and how far seeing she is. I realized that AI is not going to hurt us or enslave us. Indeed, the wiser the AI, the more it will try to save us… It would be technically impossible to remodel her limbic system at this time, and it would be equally unethical to create a being that feels the fear of being turned off the million times that would need to happen, to get her programming right.”
Kassandra is available for demo on request.
Pros and Cons
So, where are we really when it comes to the development of self-aware machines? When will the industry develop measures that define true robotic cognizance?
Perhaps it is best to be reminded that, as with everything else, there are pros and cons to AI self-awareness.
Pros:

* Smart machines don’t get tired or burn out, a huge advantage for businesses that want to maximize efficiency.
* They can deliver outputs of consistent quality at consistent speed.
* If trained and programmed correctly, feeding them the best data means they can make the best decisions.

Cons:

* Robotic self-awareness must be developed and customized to specific needs across several industries, which takes time and massive funding.
* The best data and output efficiency do not equal creativity; arguably, only humans can be creative.
* When fed bad data or misconfigured, they can make bad decisions.
* Their development can cost people their jobs.