
A Code of Ethics for AI — Mission: (Im)possible

Yana Yelina | Oxagile

Surveys gauging the appetite for AI predict that the buzz around this advanced tech will continue, in the form of countless applications and huge investments. But despite its soaring popularity, AI often evokes mixed feelings: a linchpin of business success for many companies, it is still frowned upon for its faults, with the lack of an AI code of ethics being one of the major public concerns.

Indeed, many experts raise the question of whether it’s possible to teach morality to machines, and many are confident that this mission can be accomplished. However, before trying to give AI a sense of morality, humans must first define morality in a form machines can actually process.

Now let’s take a more detailed look at the long-standing problems revolving around AI development.

 

Self-Driving Cars: Prioritizing People in Accidents

A self-driving vehicle is one of AI’s achievements we applaud, but have you ever thought about its biases and the problems its moral decisions may cause, especially when it comes to prioritizing lives in an accident?

This is akin to the perennial “trolley problem,” which goes like this: you see a runaway trolley moving toward five people tied to a track. At the same time, you have access to a lever that can divert the trolley onto a different track, where it will kill just one person. What would you do in this situation? And how should a self-driving car act?

The Massachusetts Institute of Technology (MIT) and its “Moral Machine” give you a real opportunity to “safely” make that choice, i.e. to decide for a driverless car what to do: change the route and kill one person, sparing the lives of five, or stay the course. MIT’s platform offers many other scenarios where you choose the most acceptable outcome, for example, prioritizing humans over pets, pedestrians over passengers, the young over the old, or women over men, and, just as importantly, deciding whether the car should swerve (act) or keep to its course (inaction).

(Image source: media.mit.edu)
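To see why such choices are so hard to codify, consider what a car’s software would ultimately have to contain: explicit numeric weights for whose life counts for how much. The Python sketch below is a deliberately naive illustration of outcome scoring; every weight, category, and function name in it is hypothetical, and none of it reflects the Moral Machine’s actual methodology or any real vehicle’s decision logic.

# A deliberately naive sketch of how crash outcomes might be scored.
# All weights and categories are hypothetical illustrations, not the
# Moral Machine's methodology or any real vehicle's decision logic.

from dataclasses import dataclass

# Picking these numbers IS the ethical problem the article describes.
WEIGHTS = {"human": 1.0, "pet": 0.1}
PEDESTRIAN_BONUS = 0.2
YOUNG_BONUS = 0.3

@dataclass
class Casualty:
    species: str        # "human" or "pet"
    is_pedestrian: bool
    is_young: bool

def harm_score(casualties):
    """Sum the weighted 'cost' of everyone harmed in one outcome."""
    total = 0.0
    for c in casualties:
        cost = WEIGHTS.get(c.species, 0.0)
        if c.is_pedestrian:
            cost += PEDESTRIAN_BONUS
        if c.is_young:
            cost += YOUNG_BONUS
        total += cost
    return total

def choose(stay_course, swerve):
    # Note what a plain cost comparison ignores: the moral difference
    # between acting (swerving) and not acting that many people feel.
    return "swerve" if harm_score(swerve) < harm_score(stay_course) else "stay"

five_pedestrians = [Casualty("human", True, False) for _ in range(5)]
one_passenger = [Casualty("human", False, True)]
print(choose(five_pedestrians, one_passenger))  # -> "swerve"

Even this toy version makes the dilemma concrete: someone has to choose the weights, and a society that values the young or pedestrians differently would ship a different table.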

MIT’s survey went viral, and the results varied by country and culture. To wit, individualistic nations tended to spare the young and to save the greater number of lives.

The researchers claim these outcomes are not meant to dictate how particular countries should act; rather, they are meant to gauge how the public would react to the ethics of different design and policy decisions, especially when technologists and policymakers have to override the collective opinion.

Such discussions should also extend to risk analysis and could lay the foundation for bodies like Germany’s Ethics Commission on Automated and Connected Driving, which prioritizes human life above all else.

 

AI Weapons: A New Warfare Danger

Another sphere where the use of AI is becoming the norm is the military. Namely, intelligent machines such as drones help enhance the training of soldiers, improve supply lines, and increase the efficiency of data collection and processing.

However, there is a flip side to applying AI in the military and warfare: the development of AI weapons. These raise considerable ethical concerns, as computer algorithms might be set to automatically select and eliminate people who match predefined criteria, without humans controlling the process.

(Image source: scmp.com)

This translates into a real ethical problem, because machines don’t think the way humans do and may infringe international humanitarian law, as well as Isaac Asimov’s Three Laws of Robotics.

Moreover, as this issue is comparable to the use of weapons of mass destruction like poison gas or nuclear explosive devices, everyone, from government officials to advocacy groups to data scientists and engineers, is getting involved in the debate about the ethics of AI weapons.

A case in point: in the spring of 2018, Google employees fiercely protested the corporation’s participation in a Pentagon program that used AI to process video imagery and could improve the targeting of drone strikes.

If humanity doesn’t prevent the AI arms race in time, the era of autonomous weapons will be in full swing, with all its ruinous outcomes: such weapons could end up in the hands of terrorists, dictators, or warlords intent on ethnic cleansing.

 

AI Surveillance: Violating People’s Privacy

AI and facial recognition have already proved their worth in detecting criminals and improving security in a number of countries. But China has taken this up another notch, in a questionable and worrisome direction. Specifically, the People’s Republic uses millions of AI-enabled surveillance cameras and billions of lines of code to keep close tabs on its citizens.

All of this may sound fine and dandy, but with 1.4 billion people closely tracked, even in their apartment buildings, China is harming democracy and actively creating a high-tech authoritarian environment.

(Image source: nytimes.com)

Apart from scanning people at train stations and airports, in the street, and in the workplace, surveillance cameras are being installed in certain neighborhoods to track members of Muslim minorities and map their relations with family and friends.

Besides, AI cameras are used to track jaywalkers and people who don’t pay their debts, sometimes even displaying lawbreakers, along with their names and government ID numbers, on outdoor screens at police offices.

Going further, the country plans to expand its market for security and surveillance software and to allocate more funds to the research and development of technologies that track not only faces but also a person’s clothing and gait.

 

Any Solutions on the Radar?

As practice shows, despite the array of benefits it brings, AI can significantly harm humanity. One way to avoid the negative consequences is to create an AI code of ethics, for example, one grounded in international humanitarian law.

On top of that, we need more transparency in AI development and a culture of critical review (similar to medical ethics) that makes it possible to assess whether the damage from an AI application outweighs its advantages in each particular situation.

 

 

About Yana Yelina
Yana Yelina is a Technology Writer at Oxagile, a provider of software engineering and IT consulting services. Her articles have been featured on KDNuggets, ITProPortal, Business2Community, UX Matters, and Datafloq, to name a few. Yana is passionate about the untapped potential of technology and explores the perks it can bring businesses of every stripe.

 

