# Who Has the Right to Instill Morality in Machines?
Chapter 1: The Challenge of Teaching Morality to Machines
The question of whether machines can be taught the difference between right and wrong grows more pressing as artificial intelligence (AI) advances, yet the challenge remains daunting.
Films like I, Robot explore the nuances of humanity, but instilling genuine moral understanding in machines is more complex still. This raises a critical question: how will an artificial general intelligence (AGI) behave, feel, and reason? As AI becomes more integrated into daily life, from self-driving cars that must reliably recognize a stop sign onward, it becomes essential to determine how to impart morality to these machines and so prevent potential harm.
Section 1.1: Defining Intelligence
One significant hurdle in teaching machines ethics is our own lack of clarity regarding intelligence itself. Humans possess a general sense of intelligent behavior; however, defining it scientifically is challenging. Psychologically, intelligence encompasses a variety of human cognitive abilities, motivation levels, and self-awareness.
What kind of intelligence should machines possess? In Western cultures, intelligence is often equated with speed — the quicker someone answers a question correctly, the smarter they are deemed. Conversely, in other cultures, a thoughtful, well-considered response is valued more highly, emphasizing a deep engagement with the problem before answering.
This divergence highlights a broader issue: not only is our understanding of intelligence murky, but there is also no universal agreement on how intelligence manifests. This underlines the necessity of diverse AI ethics committees, as addressing these challenges requires a variety of perspectives and lived experiences.
Section 1.2: Can Machines Make Ethical Decisions?
Could robots someday function as ethical decision-makers? This question turns on whether machines can act on ethical principles at all. The philosopher James H. Moor proposed a classification of moral agents, ranging from ethical-impact agents (the least capable), through implicit and explicit ethical agents, up to full ethical agents.
This framework categorizes agents by the kinds of decisions they make and how they arrive at them. Ethical agents exhibit increasing levels of moral competence, culminating in machines endowed with free will. Moor posits that full ethical agents possess the "central metaphysical features" typically associated with human ethical agents, such as consciousness, intentionality, and free will.
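As a purely illustrative sketch, Moor's hierarchy can be expressed as an ordered enumeration; the identifier names here are my own shorthand for his four published categories, not anything Moor himself formalized in code:

```python
from enum import IntEnum

class MoralAgency(IntEnum):
    """Moor's four levels of machine moral agency, least to most capable.
    (Identifier names are illustrative shorthand for his categories.)"""
    ETHICAL_IMPACT = 1  # affects ethical outcomes without any ethical reasoning
    IMPLICIT = 2        # constrained by design to behave ethically
    EXPLICIT = 3        # represents and reasons about ethical principles
    FULL = 4            # consciousness, intentionality, free will

# IntEnum ordering captures the hierarchy: each level presupposes the ones below.
assert MoralAgency.FULL > MoralAgency.EXPLICIT > MoralAgency.IMPLICIT > MoralAgency.ETHICAL_IMPACT
```

Using an ordered enum (rather than a plain list) makes the point of the framework explicit: the categories are not merely different kinds of agent but a progression of moral competence.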
Even before machines can make ethical decisions autonomously, we must grapple with the limits we place on AI's actions: what are we willing to permit them to do? A worldwide consensus on fairness and morality is essential, and that understanding must be translated into accepted algorithms. The urgency of this alignment grows with each advance in AI technology.
Chapter 2: Insights from Experts
In the video "Wendell Wallach - Moral Machines: From Machine Ethics to Value Alignment," Wallach discusses the ethical frameworks necessary for developing machines that align with human values. He explores the implications of AI in moral contexts and the urgent need for well-defined ethical guidelines.
The second video, "Teaching Artificial Intelligence to be Moral | Will Lifferth | TEDxUTK," delves into the educational aspects of instilling morality in AI. Lifferth offers insights into the frameworks and methodologies that can be utilized to guide AI development toward ethical behavior.
As noted in my previous blog entry, The Moral Machine: Teaching Right From Wrong, we must urgently seek a common ground for defining what is just or unjust in the realm of AI. If we fail to do so, it may be that AI surpasses us in wisdom, ultimately teaching us invaluable lessons about morality and ethics on a global scale.