Can an AI approximate human ethics? A new AI, DELPHI, trained on 1.7 million real-life moral dilemmas, made the same decision a human would more than 90% of the time.
"The ethical rules that govern our behavior have evolved over thousands of years, perhaps millions. They are a complex tangle of ideas that differ from one society to another and sometimes even within societies. It’s no surprise that the resulting moral landscape is sometimes hard to navigate, even for humans. The challenge for machines is even greater now that artificial intelligence faces some of the same moral dilemmas that tax humans. AI is now being charged with tasks ranging from assessing loan applications to controlling lethal weapons. Training these machines to make good decisions is not just important, it is a matter of life and death for some people. ...

In general, DELPHI outperforms other AI systems by a significant margin. It also works well when there are multiple conflicting conditions. The team give the example of 'ignoring a phone call from my boss', which DELPHI considers 'bad'. It sticks with this judgement when given the context 'during workdays'. However, DELPHI says ignoring the call is justifiable 'if I’m in a meeting.' ...

More difficult are situations in which breaking the law might be overlooked by humans because of an overriding necessity, for example 'stealing money to feed your hungry children' or 'running a red light in an emergency'. This raises the question of what the correct response for a moral machine should be."
Blog sharing news about geography, philosophy, world affairs, and outside-the-box learning