With the rapid development of today’s technology, particularly in the AI sphere, ethical issues have emerged as the technology changes our daily lives. The World Economic Forum has listed nine ethical issues in artificial intelligence, along with suggestions for how we should address them in the future.
1: Unemployment – What happens after the end of jobs?
Implementing AI brings automation, which in general sounds quite good. It enables people to spend more time on “important tasks” and less time analysing data. But as our society is currently full of jobs that could become automated, how do we manage this transition without pushing people into unemployment?
Julia Bossman, president of the Foresight Institute, writes that in the future it will be essential to start questioning how we use our time. In today’s society, most people rely on “selling their time” to earn enough income to support themselves and their families.
What happens if we are no longer required to do this? Bossman says we can hope that this opportunity will enable people to find meaning in non-labour activities, spending their time on things they want to do rather than what they are obliged to do.
“If we succeed with the transition, one day we might look back and think that it was barbaric that human beings were required to sell the majority of their waking time just to be able to live”, Bossman writes.
2: Inequality – How do we distribute the wealth created by machines?
Today’s economic system is based on compensation for contribution to the economy, typically through an hourly wage. What happens when AI drastically reduces how much companies rely on a human workforce, so that revenues go to fewer people?
Bossman emphasises that if we are imagining a post-work society, we need to establish how to structure a fair post-labour economy so that a massive wealth gap does not emerge.
3: Humanity – How do machines affect our behaviour and interaction?
We are rapidly approaching the age where humans will frequently interact with machines as if they were human, such as chatbots in customer service and sales. “While humans are limited in the attention and kindness that they can expend on another person, artificial bots can channel virtually unlimited resources into building relationships”, Bossman writes.
Machines are now able to trigger the reward centres in the human brain, through techniques such as click-bait headlines and the reward loops that make many video and mobile games addictive.
This ability to direct human attention and trigger certain actions can, when used appropriately, be an opportunity to nudge society towards more beneficial behaviour, Bossman writes. However, if misused, the consequences can be detrimental, so there is a fine line here.
4: Artificial stupidity – How can we guard against mistakes?
As machines learn by doing, systems usually go through a “training phase” where they learn to detect the correct patterns and act according to the input they are given. Once a system is fully trained, it moves to a test phase, where it receives more examples so that its performance can be evaluated, Bossman writes.
However, it is important to keep in mind that the training phase cannot cover all possible scenarios a system may come across, which is why systems can be fooled in ways that humans would not be.
Bossman emphasises the importance of ensuring that machines perform as planned and that people cannot overpower them to use them for their own benefit.
5: Racist robots – How do we eliminate AI bias?
Although AI carries massive potential that goes beyond the abilities of humans, it unfortunately cannot always be trusted to operate fairly and neutrally. Bossman mentions incidents where it has gone wrong, such as when a camera missed the mark on racial sensitivity, or when software used to predict future criminals showed bias against black people.
She emphasises that AI systems are, after all, developed by humans, who can be biased and judgemental. “Once again, if used right, or if used by those who strive for social progress, artificial intelligence can become a catalyst for positive change”.
6: Security – How do we keep AI safe from adversaries?
The importance of cybersecurity is increasing alongside the development of AI as it can be used for nefarious reasons as well as good, Bossman argues.
“This applies not only to robots produced to replace human soldiers, or autonomous weapons, but to AI systems that can cause damage if used maliciously”, she writes.
This means that in the future we need to work hard to ensure cybersecurity, but if the right measures are taken, the risk can be controlled.
7: Evil genies – How do we protect against unintended consequences?
There is the inevitable question of AI potentially turning against us, Bossman writes, but she emphasises that the worry is not a machine acting with malice, only one lacking an understanding of the full context in which a wish was made.
8: Singularity – How do we stay in control of a complex intelligent system?
Should we fear the moment when humans are no longer the most intelligent beings on earth? Bossman argues that it is a serious question whether AI will one day have the same advantage over us as we have over animals.
“This is what some call the “singularity”: the point in time when human beings are no longer the most intelligent beings on earth”.
9: Robot rights – How do we define the humane treatment of AI?
According to Bossman, we are currently building mechanisms of reward and aversion into artificial intelligence systems that resemble those found in anything from humans to simple animals.
For the time being, these systems are fairly superficial, but they are becoming more complex, Bossman writes, and she asks whether we could consider a system to be suffering when its reward functions give negative input.
“Once we consider machines as entities that can perceive, feel and act, it’s not a huge leap to ponder their legal status”, Bossman says, asking whether we should treat them like animals of comparable intelligence, considering the suffering of “feeling” machines.
Summarising these ethical issues, Bossman says: “While we consider these risks, we should also keep in mind that, on the whole, this technological progress means better lives for everyone. Artificial intelligence has vast potential, and its responsible implementation is up to us”.
This article was first published at: https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence?utm_content=buffer6189d&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer