
The Moral Dilemmas in the Era of Artificial Intelligence (AI)

by | Jul 14, 2022 | Community

Imagine you’re a railway traffic controller who sees two passenger trains heading straight toward one another. You happen to be standing next to a switch that can divert one of the trains onto a second track, but there is a problem: that track is under maintenance and has workers on it. What do you do?

This is the trolley problem: a situation that forces you to choose even when there are no good choices. Do we pick the action that will likely produce the best outcome, or stick to a moral code that prohibits causing someone’s death?

The trolley problem has been criticized by some philosophers and psychologists, who argue that it reveals nothing because its premise is unrealistic. But the technologies being introduced in today’s world are making this kind of ethical analysis more important than ever.

It is estimated that 1.3 million people die every year in traffic accidents worldwide, most of them due to human error. If there were a way to eliminate 90% of those accidents, it would surely be ethical to implement it. This is what self-driving cars promise to achieve: eliminating the main source of accidents, human error.

But what if a driverless car faces a trolley situation? What if its algorithms have to decide between crashing into a group of pedestrians crossing the street or swerving left to hit one bystander? Or what if, instead, the car could swerve right into a wall, putting your life at risk but keeping the pedestrians safe?

Meanwhile, some governments are researching autonomous military drones that may decide whether to risk civilian casualties to attack a high-value target. Such situations put the programmers who write the algorithms in a moral dilemma: these decisions may have to be made months or years in advance, at the time the code is written.

It’s tempting to offer up general decision-making principles, like minimizing harm, but even those can lead to immoral outcomes. The moral and ethical dilemmas posed by intelligent machines such as autonomous vehicles are becoming hard to ignore. Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die.
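To see why "minimize harm" is not as neutral as it sounds, here is a deliberately oversimplified sketch of what such a rule might look like in code. Every name, probability, and scenario below is a hypothetical illustration, not a real autonomous-vehicle policy:

```python
# A toy "minimize expected harm" chooser. The options, probabilities,
# and counts are invented for illustration only.

def expected_harm(option):
    """Expected casualties for one maneuver: probability * people at risk."""
    return option["crash_probability"] * option["people_at_risk"]

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm."""
    return min(options, key=expected_harm)

options = [
    {"name": "continue straight", "crash_probability": 0.9, "people_at_risk": 5},
    {"name": "swerve left",       "crash_probability": 0.8, "people_at_risk": 1},
    {"name": "swerve right",      "crash_probability": 0.5, "people_at_risk": 1},
]

print(choose_maneuver(options)["name"])  # → "swerve right"
```

Notice that the moral judgments are hidden in the numbers: someone had to decide, in advance, how to weigh one life against five and how to estimate each probability. That is exactly the burden the text places on the programmer.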

The world is heading toward the age of intelligence, where around 85% of all human interactions with technology will be moderated or managed in some way by Artificial Intelligence (AI). The industry was estimated at USD 65 billion in 2020 and is projected to grow to USD 1.5 trillion by 2030.

Technology and innovation are good: they enable humans to achieve a great deal and bring ease to our lives. But they also introduce vulnerabilities. We hear scary things, like AI systems that are sexist and racist, or AI doing things that are harmful to society. We also hear questions like: what do we do when AI automates away all of our jobs?

AI is weaving its way into crucial aspects of society and is making crucial decisions for us, decisions with big impacts and serious moral ramifications. Algorithms bring a lot of good to our lives, but there are no universal rules for moral action. The issue is that the ethics and morals of an intelligent machine cannot simply be automated or laid out in algorithmic form.

What is a moral dilemma?

We make many choices daily. Most are simple and easy, like waking up or continuing to sleep; others can be a little more serious, like continuing college or looking for a job. And once in a while, we may be faced with what could be called the hardest choice in the world.

An example of a hard choice: should I tell my mother that my father is cheating on her, which will break our family apart, or should I keep it a secret and continue living together as if nothing happened? Situations like this can make us feel disturbed, stressed, confused, and even desperate like never before.

Such a decision doesn’t seem to have a right choice or solution. The available options all feel wrong and unacceptable; either way, the result is something we don’t want. This situation is called a dilemma: a situation in which a person is forced to choose between two or more conflicting options.

A moral dilemma involves human actions that have moral implications; when the action at stake is morally good or bad, the dilemma is classified as ethical or moral. It is a situation in which a person (a moral agent, in the language of ethics) is forced to choose between two or more conflicting options, none of which resolves the situation in a morally acceptable way. Each choice has unacceptable consequences.

Moral dilemmas in the age of artificial intelligence

We are now living in a world where technological developments are creating ethical problems; autonomous vehicles and artificial intelligence are examples. How we program autonomous devices will be key to society’s acceptance of these new systems. Although machine ethics is an unresolved field, discussing it is essential to defining ethical principles.

A strong notion of machine ethics treats machines as moral agents with rights and responsibilities; even a weaker notion raises hard questions about reliability. And if an artificial entity can have feelings, we may have ethical obligations toward it. With the enormous amount of data we generate regularly, which is fed to AI, we might well be on the way to artificial general intelligence, or even conscious AI.

Usually, we humans don’t care much about the well-being of computers and machines, even though we are surrounded by them every day. But if someone were to kick a cat or hit a baby, we would definitely react. Kicking or hitting such a being strikes us as bad, condemnable behavior, because the cat and the baby are conscious.

We base our judgments about consciousness on observable information: behavior we can see, touch, and feel, interpreted in light of what we know about brain activity. Beings that exhibit such behavior are deemed conscious, and in virtue of being conscious, they are part of our moral community.

With the rapid development of technology, we now interact with artificial intelligence on a daily basis. AI systems make significant decisions, such as whether to grant a loan, whom to show a job advertisement, or whether a defendant is likely to re-offend. Hence, they must be explainable. This is known as the “explicability” criterion.

Casuistry alone is not enough to ensure that a system makes proper ethical decisions. It is therefore crucial to have the necessary ethical framework in place before deploying AI systems. The potential benefits of trusting an intelligent machine with our lives are plentiful, but the question remains: can we?

A reactive machine perceives the world directly and is designed to perform a very limited set of specialized tasks. Its narrow scope is not a cost-cutting measure but a source of reliability and trustworthiness. Computers usually struggle with morality because morality is full of exceptions. A subset of AI called Machine Learning (ML) was developed as a way of addressing such problems.

Machine learning enables a machine to respond dynamically to its environment while learning and improving over time. Machine morality is still a long way off, but the potential for implementing it at scale is certainly present. The future of AI and ML lies in enabling machines to make ethical decisions, especially when they are performing specific tasks.

With this, people and companies can expect dynamic machines that know what is morally acceptable and what isn’t. However, these innovations will only be beneficial if we can keep AI and ML in the right hands. And despite the ethical challenges, incidental benefits may arise along the way.

While AI can bring many benefits, the fundamental problem is defining what counts as good: the goal of “AI for Good” is not yet clear enough, and the ethics of the field must be worked out before it can become a reality. Another valuable technology, augmented intelligence, offers great insight by combining human expertise with AI capability.

Before we allow our technology to make ethical decisions, we need a global conversation to express our preferences to the companies that will design moral algorithms and to the policymakers that will regulate them. The issue, again, is that the ethics and morals of automation cannot simply be automated or laid out in algorithmic form.

And unlike mathematics or chemistry, we do not have fixed rules for morality. Morality is a contentious subject that varies across groups, and nobody agrees on it 100 percent. I can say it’s best not to tell a lie, but there will be times when telling a lie is the most moral thing to do.

With machine learning, we can get better at these things and respond dynamically to problems as they arise. Algorithms show a lot of promise in unlocking some aspects of morality, even though the moral dilemmas of AI arise in many forms.

Since machine learning algorithms learn and improve from the data they are fed, they would start simple and gradually grow in complexity, much as a child learns concepts through rewards and punishments. As they mature, they may eventually be able to generalize to broader concepts such as social justice.
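The "child learning through rewards and punishments" idea can be sketched in a few lines. This is a toy value-learning loop with invented actions and rewards, not a real moral-learning system:

```python
# Toy learning from rewards and punishments. The actions ("share", "take")
# and the feedback signals are hypothetical illustrations.

values = {"share": 0.0, "take": 0.0}   # learned value of each action
alpha = 0.5                             # learning rate

def update(action, reward):
    """Nudge the action's value toward the observed reward."""
    values[action] += alpha * (reward - values[action])

# Simulated feedback: "share" is consistently rewarded, "take" punished.
for _ in range(10):
    update("share", +1.0)
    update("take", -1.0)

best = max(values, key=values.get)
print(best)  # → "share"
```

The learner starts with no preferences and acquires them purely from feedback, which is the sense in which such systems "start simple and grow in complexity"; whether that process can ever generalize to concepts like social justice is exactly the open question the text raises.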

There is no universal solution to the ethical concerns of AI. The issue is complex and has many legal and political implications, and leaders will need to dedicate considerable time to working through it. In the meantime, clear processes and organizational structures can help resolve specific ethical dilemmas, and companies can take steps to ensure that AI applications are fair and don’t infringe on people’s rights.

However, the growing number of connected devices, both hardware and software, also poses moral dilemmas. How can we balance the desire for privacy and autonomy with the benefits of services such as the Internet of Things? Currently, the legal system is not equipped to address the challenges of a new, connected world, which often lacks a critical ethical framework.

The ethics of Internet of Things devices may also be affected by the use of personal data in scientific research. Because huge amounts of data can be collected and analyzed, these devices raise ethical concerns. Their ubiquity and massive data-gathering potential may present a moral dilemma that cannot be avoided.

While big data is not new in business or software-design circles, its ethics is still a relatively new topic, and new moral dilemmas arise when algorithms are used to make decisions. In criminal justice, for example, these algorithms often quantify risk in ways that disproportionately affect people of color and women.

One of the most important questions to address is the role of ethics in autonomous systems. Should an intelligent machine ever kill or harm a human? It is essential to have clear ethical principles in place before building these systems. While autonomous machines have numerous potential benefits, they still pose significant ethical issues.

And while self-driving cars may be convenient, these technologies are not without their dangers. AI-driven autonomous weapons not controlled by human beings may also tempt people to wage war more often. Human rights to privacy and autonomy may be compromised, which complicates efforts to reach agreement on practices and standards.

If an autonomous vehicle is forced to choose between saving a pedestrian or its passenger, what should it be compelled to choose? While many people say the car should save the pedestrian, as passengers we generally care more about our own safety than about the life of a stranger.

Thus, we should take into account the risks and consequences of each situation and weigh them accordingly. Once autonomous machines have the capabilities needed to make decisions in a social context, they should be able to make moral decisions without human intervention.

With machine learning, we can build a strong moral model. But even if a machine can represent morality, how do we test that it works? We can run impressive tests, but how do we capture every scenario, and to what extent does the act of testing itself influence the results? And the trolley problem is not the only moral dilemma brought about by the rise of AI.

Some believe that AI will improve human performance and reduce human error. Many, however, have warned that AI will make workers redundant, particularly in jobs that are repetitive and tedious. Yet even jobs that require human judgment, empathy, and interaction are being replaced by sensors and computers.

While AI is replacing workers in certain jobs, AI systems will still have to serve humans. Hence they must be accurate, and the data fed to them must be up-to-date and unbiased. The best way to battle bias in artificial intelligence is to battle bias in the real world.

For example, we know that societies around the world can be racist. We can have complete, high-quality data, but if the actions recorded in it are biased, the algorithm we build will be racist too. Not because the AI is racist, but because our world is. So there are actions we can take today to battle real-world bias.

As the data the algorithm learns from becomes less biased, the artificial intelligence built on it becomes less biased too. If we want AI’s actions to be ethical, we have to decide in advance how to value human life and judge the greater good. Technology is amazing, but a lot of work must be done to make sure it is safe for the people who use it.
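One concrete way to "battle bias" in a dataset is to measure it. Here is a minimal sketch of one common fairness check, demographic parity: do two groups receive positive decisions at similar rates? The decision records below are entirely hypothetical:

```python
# Toy demographic-parity check on hypothetical decision records.
# A gap near 0 means the groups are approved at similar rates.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Fraction of one group's records that received a positive decision."""
    members = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in members) / len(members)

gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(round(gap, 2))  # → 0.33: group A is approved twice as often as group B
```

Demographic parity is only one of several competing fairness definitions, and they cannot all be satisfied at once, which is part of why, as the text says, there is no universal rule to automate.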

So researchers who study these systems are collaborating with philosophers to address the complex problem of programming ethics into machines. It is our responsibility to fix things, because we are responsible for the outcomes. We humans are creating artificial intelligence for the betterment of our lives, and it is our responsibility to make it moral and ethical, to create a better AI world.