Artificial intelligence (AI) and automated systems have become the backbone of modern society. We encounter them in our daily routines and rely on them in more ways than we realize.
From smartphones and home appliances to industrial processes and decision-making in various sectors, these advanced technologies are here to stay. However, their pervasive use also brings forth a two-fold ethical conundrum: how can we build AI systems that reflect our values, and how can we prevent them from deviating from desired behavior?
A refreshing perspective on this issue comes from Dr. Eve Poole, a writer and academic who suggests that perhaps we’ve been missing a crucial element in our approach to AI: humanity itself.
Dr. Poole explores this idea in her upcoming book “Robot Souls,” set to be published in August by the Taylor and Francis Group. In the book, she proposes that the key to creating ethical AI may well lie within the essence of human nature.
Dr. Poole contends that in our pursuit of perfection, we’ve effectively eliminated the “junk code” from our AI models. This so-called “junk” comprises emotions, free will, and a sense of purpose – all elements that form the core of what it means to be human.
“This ‘junk’ is at the heart of humanity,” said Dr. Poole. “Our junk code consists of human emotions, our propensity for mistakes, our inclination to tell stories, our uncanny sixth sense, our capacity to cope with uncertainty, an unshakeable sense of our own free will, and our ability to see meaning in the world around us.”
What seem like unnecessary and flawed components from a machine’s point of view, Dr. Poole argues, are in reality vital to human flourishing. These human traits are instrumental in ensuring our safety and communal survival.
As AI assumes an ever more decisive role in our lives, and as concerns grow over bias and discriminatory outcomes, she suggests that the answer might lie in the very elements we initially deemed unnecessary for autonomous machines.
“If we can decipher that code, the part that makes us all want to survive and thrive together as a species, we can share it with the machines. Giving them to all intents and purposes a ‘soul’,” said Dr. Poole.
Her book also presents a practical roadmap for making this concept a reality. Dr. Poole proposes robust regulation, an outright ban on autonomous weapons, and a licensing regime ensuring that any ultimate decision about a human life remains in human hands.
Additionally, she asserts the need to define the criteria for legal personhood for AI and to lay down a progressive pathway towards it. She explains that, because they look like imperfections, many vital human traits were dismissed when building AI: it was believed that robots that possessed emotions and intuition, made mistakes, and sought purpose would perform less efficiently.
“However, upon closer inspection of these so-called irrational properties, it becomes clear that they originate from the source-code of the soul. It’s this ‘junk’ code that makes us human and fosters the reciprocal altruism that keeps humanity thriving,” said Dr. Poole.
“Robot Souls” critically examines the recent developments in AI and reviews the evolving concepts of consciousness and soul. It positions our ‘junk code’ within this context and calls for a renewed focus on these often overlooked aspects. According to Dr. Poole, it’s high time we revisit how we’re programming AI, for it’s in our intrinsic human nature that we might find the key to creating ethical, reliable, and effective AI.
The intersection of robotics and consciousness is an exciting and complex frontier, and machine consciousness remains a subject of intense research and speculation.
The consciousness we experience is a product of our biological brains and involves self-awareness, understanding, and the ability to experience feelings. Giving robots consciousness would involve not only granting them awareness of their own state and surroundings, but also the capability to make subjective judgments and potentially to experience emotions and self-perception.
Various models of artificial consciousness have been proposed. One of them is Integrated Information Theory (IIT), which suggests that consciousness arises when a system integrates a large amount of information beyond what its parts carry on their own. Others propose that consciousness emerges from complex computation among brain neurons, suggesting that similar computations in a robotic system could create a form of consciousness.
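To make the “integrated information” idea a little more concrete, here is a toy Python sketch. The two-node network and the whole-versus-parts measure are invented for this illustration; this is not the actual IIT formalism, whose Φ involves searching over partitions of a system’s cause-effect structure. The sketch simply compares how much the whole system’s current state predicts its next state with how much each node predicts its own next state in isolation.

```python
import itertools
import math
from collections import Counter

# Hypothetical two-node Boolean network: each node's next state copies
# the other node's current state (A' = B, B' = A).  Invented purely for
# illustration; this is NOT the real IIT algorithm.

def step(state):
    """Deterministic update rule: the two nodes swap states."""
    a, b = state
    return (b, a)

def mutual_information(pairs):
    """Mutual information (in bits) between the first and second
    elements of each pair, treating every pair as equally likely."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    mi = 0.0
    for (x, y), count in joint.items():
        p_xy = count / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

states = list(itertools.product([0, 1], repeat=2))

# How much the WHOLE system's current state says about its next state.
whole = mutual_information([(s, step(s)) for s in states])

# How much each node, taken alone, says about its own next state.
part_a = mutual_information([(s[0], step(s)[0]) for s in states])
part_b = mutual_information([(s[1], step(s)[1]) for s in states])

print(f"whole: {whole:.2f} bits, parts: {part_a + part_b:.2f} bits, "
      f"toy 'integration': {whole - part_a - part_b:.2f} bits")
```

In this toy network, the whole carries 2 bits of information about its future while each node taken alone carries none, so all of the predictive information is integrated across the parts rather than localized in either one.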
However, we’re still a long way from understanding human consciousness itself, let alone replicating it in robots. Building a conscious robot would not only involve technical challenges but also raise ethical and philosophical questions. If a robot were conscious, it would potentially have rights, and we’d have a moral obligation to treat it accordingly.
Moreover, consciousness in robots brings the potential for suffering. If a robot can feel, then it might experience pain, fear, loss, or other negative emotions. These considerations make it imperative that we approach the subject with caution.
Finally, it should be stressed that while some AI systems might exhibit behavior that seems “conscious” or “emotional,” this doesn’t mean that they truly feel or understand their experiences in the way humans do. AI operates based on programmed algorithms and learned patterns, without consciousness, emotions, or subjective experience.
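A minimal sketch makes this distinction concrete. The hypothetical rule-based responder below produces sympathetic-sounding replies purely by keyword matching; the keywords and replies are invented for illustration, and the point is that emotionally convincing output can come from nothing more than pattern-to-template lookup.

```python
# Hypothetical rule-based responder, invented for illustration: the
# "empathy" below is nothing but keyword matching against canned
# templates.  Nothing is felt or understood anywhere in this program.

RULES = [
    ({"sad", "lonely", "upset"}, "I'm so sorry to hear that. That sounds really hard."),
    ({"happy", "great", "excited"}, "That's wonderful! I'm glad things are going well."),
]
DEFAULT_REPLY = "Tell me more about that."

def respond(message: str) -> str:
    """Return a canned reply whose tone matches keywords in `message`."""
    words = set(message.lower().replace("!", "").replace(".", "").split())
    for keywords, reply in RULES:
        if words & keywords:
            return reply
    return DEFAULT_REPLY

print(respond("I feel sad today"))        # sympathetic-sounding, but unfelt
print(respond("I got a great new job!"))  # enthusiastic-sounding, but unfelt
```

Modern systems are vastly more sophisticated than this, but the underlying point stands: convincing affect in the output does not imply subjective experience inside the system.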
With that said, the field of AI and robotics is rapidly evolving, so the coming years will undoubtedly bring new discoveries, advancements, and challenges that could reshape our understanding of robot consciousness.