
Ethical Dilemma in the Age of Machine Intelligence

Jul 7, 2022 | Community



We are surrounded by a lot of impressive technology in our everyday lives. Technology is making the world a better place and, at the same time, improving our lives. But there is also a lot of complexity surrounding technology today, and few constraints on its development and accessibility.

We have advanced, powerful technologies that are available to anyone who wants to buy them. And since technology is making our world better, we are reluctant to regulate it. Technology is saving lives in hospitals, making travel faster, enabling easy access to worldwide communication, and much more.

Technology is a good thing for humanity, but we are now moving into the age of artificial intelligence (AI), along with the internet of things, algorithms, big data, and autonomous transportation. These technologies are being invented and continually improved to bring ease to our lives. But they also bring a lot of ethical dilemmas with them.

Take autonomous vehicles, for instance: they are predicted to dramatically reduce traffic accidents and fatalities by removing human error from the driving equation. Other estimated benefits include eased road congestion, more efficient fuel use, decreased harmful emissions, and less unproductive, stressful driving time.

But accidents can and will still happen. Let’s say at some point in the not-so-distant future, your car is approaching an intersection and planning to make a right turn. The sensors on the vehicle carefully monitor the biker on the right. Suddenly a toddler frees himself from his mother’s hand and jumps across the street.

Your car can’t stop in time to avoid the collision. The computer must decide in a split second whether to spare the biker or the child. There’s no time to collect more data, make elaborate calculations about how to inflict the least damage, or deliberate over how to prioritize one life over the other.

Now, what if the car could swerve left into a wall, saving both lives by sacrificing yours? Should it prioritize your safety by hitting one of the other two, or hit the wall and sacrifice you to save them both?

Now, if you were driving the car manually, whichever decision you took would be understood as just a reaction, not a deliberate act. It would be considered an instinctual, panicked move with no malicious intent. But if a programmer were to instruct the car to make the same move under given conditions, it may look more like premeditated homicide.
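To see why an explicit rule feels premeditated, here is a minimal sketch of what such hard-coded logic could look like. Everything in it is hypothetical: the maneuver names and "harm cost" weights are invented for illustration and are not drawn from any real vehicle's software.

```python
# Hypothetical sketch: an explicitly programmed collision-choice rule.
# The maneuvers and cost weights below are illustrative assumptions only.

def choose_maneuver(options):
    """Pick the maneuver with the lowest hard-coded 'harm cost'.

    `options` maps a maneuver name to a harm cost chosen in advance
    by a programmer -- which is exactly what makes the decision feel
    premeditated rather than a panicked reflex.
    """
    return min(options, key=options.get)

# Costs assigned ahead of time, long before any real emergency:
options = {
    "continue_straight": 0.9,  # likely hits the child
    "swerve_right": 0.8,       # likely hits the biker
    "swerve_left_wall": 0.7,   # sacrifices the passenger
}
print(choose_maneuver(options))  # → swerve_left_wall
```

The uncomfortable part is not the `min` call but the table above it: someone had to decide, in cold blood, how to rank those outcomes.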

And it’s not only cars: many of our technologies are now starting to make decisions on our behalf. We are moving into the age of intelligent machines, and our present technologies and forthcoming innovations are opening up many novel ethical dilemmas.

It’s fair to say that our lives are increasingly dictated by algorithms, and the ethical dilemmas posed by automation, AI, and big data are becoming hard to ignore. For instance, which would you choose: a car that would always save as many lives as possible in an accident, or one that would save you at any cost?

Never in the history of humanity have we allowed machines to make decisions as autonomously as we do today. Before we let intelligent machines decide for us, we need a global set of rules, designed by ethicists and policymakers, to regulate them. The problem is that such an ethical rulebook for intelligent machines cannot simply be automated or laid out in algorithmic form.

What is an ethical dilemma?

An ethical dilemma is a conflict between options in which, whatever choice a person makes, some ethical principle will be compromised. Ethical decision-making involves analyzing the different possibilities, eliminating those with an unethical result, and choosing the best ethical option.

Ethics are principles of right and wrong that act as a guideline for what humans should do. They lay out the code of conduct that individuals or groups have to follow. They are typically shaped by virtues, duties, or socially acceptable actions, and guided by laws, cultural norms, policies, personal experience, philosophical schools of thought, and religious traditions.

So doing the right thing is a combination of different ethical principles. The human brain is pretty good at applying ethical principles to critical situations. Most of us are taught what is “right” and “wrong” at a very early age. There are also situations where two rightful decisions arise and you have to choose between them.

Such an ethical dilemma consists of a situation where both available decisions can be considered ethical; however, they stand in conflict and only one can be chosen. Choosing one “right thing” condemns the other, and choosing neither results in an ethical failure.

We face complex ethical dilemmas throughout our lives, and written rules govern whether an act was right or wrong, followed by the necessary repercussions. But what happens when an intelligent machine is involved?

The ethical concerns with intelligent machines

The prevailing ethos of AI development is to build an artificial agent that is as human as possible. This is a highly controversial debate, but one that should be taken seriously. The creation of an intelligent machine raises a number of issues that are relevant to our society, including the ethics of AI.

As these technologies continue to grow in power and sophistication, it is important to remember that they can only be as ethical as the people who create and program them. An ethical dilemma in intelligent machine development is therefore inevitable. As artificial intelligence is used to improve efficiency, cut costs, and accelerate research and development, it also raises ethical questions.

While AI improves our lives, some people are concerned about the technology’s potential for societal harm. Usually, data are gathered to provide better information or a better experience to the user. With the large amount of data generated by humans and collected by machines, AI is used to process this information and seek useful patterns for better decision-making.

Now, with the internet of things and the other devices we carry around all day, personal data are generated constantly. To process that enormous amount of data, researchers are deploying machine learning (ML), a subfield of artificial intelligence. ML can find patterns in data, and it can learn from those patterns to improve its own models.
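As a concrete illustration of “finding patterns in data,” here is a minimal nearest-centroid classifier written from scratch. The data, labels, and feature names are all invented for this sketch; real systems learn from far richer personal data, which is exactly where the ethical concerns begin.

```python
# Toy machine-learning sketch: learn one "pattern" (centroid) per label,
# then classify new points by the nearest learned pattern.
# All data and labels below are made up for illustration.

def centroid(points):
    """Average of a list of 2-D points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(samples):
    """Learn one centroid ('pattern') per label from (point, label) pairs."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    """Assign the label whose learned pattern is closest to the point."""
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl]))

# Invented "personal data": (hours online, purchases) -> usage label
samples = [((1.0, 0.0), "light"), ((2.0, 1.0), "light"),
           ((8.0, 5.0), "heavy"), ((9.0, 6.0), "heavy")]
model = train(samples)
print(predict(model, (7.5, 4.0)))  # → heavy
```

The point of the sketch is that the “pattern” is just arithmetic over whatever data was collected; the model inherits any bias or staleness in that data.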

This way, an AI can decide on its own, with some degree of autonomy. One issue this raises is whether it is right to make a machine conscious at all. As the ability to make a machine conscious becomes a reality, ethical issues will arise. And since machines learn from our personal data in order to act like humans, they may develop some moral code, just like us.

They may even develop emotions to some extent. Then further issues arise, such as whether it is right to make a machine conscious and then inflict pain on it through our tedious tasks. As long as machines are not conscious, the problem does not arise, but some people are deliberately seeking to build such machines.

For example, successful engineer Kunihiro Asada set out to develop a robot that experiences pain and pleasure. He also wanted to build a robot that could engage in pre-linguistic learning. Also, there are ethical concerns regarding AI ranging from the use of the technology in deadly military drones to the risk of it upending global financial systems.

Other ethical concerns involve the future of jobs, as AI systems threaten to replace millions of workers. Data scientists have legitimate concerns about bias, ethics, and the nature of human and AI systems. These issues are not just theoretical but deeply felt. There is also an issue with the data being fed to AI: if it is outdated or contains bias, this can lead to serious consequences.

Additionally, the ethical concerns related to AI include the potential for a widening digital divide, affecting all demographics, including rural areas. While AI can create many benefits, there are also risks and missed opportunities: it might cause more unemployment than exists today, and algorithmic decisions in areas like criminal justice may turn out worse than a judge’s.

There are also ethical concerns associated with AI in the health sector. While artificial intelligence can improve the way we diagnose and treat patients, there are risks. Robotic systems have long been considered a potential solution to many health-sector challenges, but they can also pose major ethical dilemmas for the industry.

Hence, a human-rights-based approach needs to be embedded when a business engages in the design, development, and regulation of AI. Businesses also need to examine the individual and societal harms posed by AI, so that they can address these issues through dedicated policies, strategies, and potential regulations.

The ethical dilemma with intelligent machines

A more serious issue is that advanced technology is governed and funded not by governments but by the commercial world. It is hard for policymakers to regulate advanced technology there, and technology moves so fast that any piece of regulation may become useless or outdated within a couple of years.

Technology is amazing, but it is moving so fast that policymakers have fallen behind. They also have many other pressing issues to solve, so this sector gets shortchanged. As a result, there is little oversight and accountability for AI programs developed by private companies. The ethical issues raised by these developments are particularly relevant in the context of corporate behavior.

This is why we need government oversight of AI programs. As the ethical debate on AI progresses, the metaphysical ethical issues are being ignored. As such, most policy documents on AI development do not take into account metaphysical issues. These issues are often stigmatized as science fiction and not worthy of policy development.

Another dilemma is that a large amount of data is needed to feed machine learning algorithms and train AI. Data is the new currency of the digital world. Every new technology is now driven by data. The problem here is that this dilemma has opened new ways for bad actors to earn money.

Data brokers track and gather large amounts of personal data about individuals to create user profiles, which they can then sell to big organizations. Cybersecurity is becoming a hot topic: cyberattacks have more than tripled over the past couple of years.

And the network of connected devices that track our data to deliver better experiences, collectively called the internet of things, has many vulnerable points. With everything becoming smart — homes, buildings, offices, spaces, and cities — we are heading toward the internet of everything, which further increases the risk of cyberwarfare.

We generate so much data each day that cybersecurity must be the utmost priority, to protect users and maintain digital trust in the technology. Yet collecting large amounts of data is essential if we want to reap the benefits of AI and other technologies.

In the case of self-driving vehicles, the promise of an error-free transportation system that is also efficient and environmentally friendly might be enough to justify large-scale data collection about driving under different conditions, as well as experimentation with AI applications.

When we use AI, we should ensure that our systems have moral agency, as humans do. This may be difficult if we don’t fully understand their behavior. The ethical decisions we make today can be taught to machines, just as we humans learned them from an early age, so that machines can make decisions based on moral values.

The ethical dilemma with intelligent machine development starts with our current ethics. In addition to autonomous vehicles, we also have many other systems that interact with the environment. As data-driven technologies continue to develop, the threats and opportunities also increase.

Ultimately, technology does a lot of good in our lives, and we don’t want to stop that; however, there are applications of technology that we didn’t anticipate or aren’t able to govern. Reality may not play out exactly like our thought experiments. For now, the technology is evolving, and it will probably evolve to the point where it can make such discriminations on its own.

Ethical dilemmas in our daily lives are becoming more complex. Ethics can help us exercise freedom of choice more rationally and accurately. Let’s hope policy will evolve along with the technology. To avoid a world full of unintended events and vulnerabilities that we never took a moment to anticipate, we all have to pull together and start asking the hard questions. That is how we can steer our world in a good direction for generations to come.
