Trouble with Ethics
First appeared on Ms.AI blog
Imagine you are a doctor in an emergency setting. A patient is brought in in critical condition. Your immediate intervention would save her life; if you delay, she will die. The patient is a registered organ donor. Three other patients are waiting for different organs, and the emergency patient is a perfect match for all three. They are in similarly critical condition and would not survive more than a few days without a transplant. What would you do?
While hypothetical, decisions of this kind may keep people awake at night. And, thinking further, is this a type of decision that can be made in collaboration with an AI? Or should it be left to humans alone? And what parameters would an AI system process in forming its recommendation?
Every decision we make influences human lives, our own or others', and in some domains, such as healthcare or driving, it can influence the moment of death. In the example above there is time to think; while driving, split-second reflexes decide. In both cases, driving and healthcare, AI applications may partially or completely replace human decision-making mechanisms.
All decisions are expressions of ethics. Any decision, be it a split-second reflex or one reflected upon for some time, is, when made by humans, based on complicated considerations that are culturally informed and trained, but also influenced by context, time of day, or mood. As neuroscience shows, analysis of brain activity often indicates the decision before the decision maker is conscious of its nature. This does not necessarily mean that the decision is made "without" the decision maker, only without her conscious presence.
Decision-making AI (and for the moment AI is mostly a super-tool for decision making) needs to be designed and adopted within an ethical framework. The trouble with ethics is its dual (at least) character:
1. Universal, because it is based on moral principles that place human life, and life in general, at the centre.
2. Particular, because morals are culturally shaped, and human life gains meaning in different ways in different cultures.
There is no exit from this conundrum; however, there are strategies that should be applied to mitigate the tensions and risks.
1. Design AI systems based on the area of application (both professional and cultural). Complementary methods should be employed when building ethical decision-making mechanisms, in order to understand the socio-cultural context of their implementation.
While experiments such as MIT's Moral Machine are excellent indicators of the cultural variety in ethics, results based only on data generated by gamified decision-making are not enough. As the creators of the Moral Machine have pointed out, the context of decision making is dynamic and sometimes radically different from a game situation. Data alone does not tell stories; it only tells us where to look for stories. Systems thinking, in-depth ethnographies, and other qualitative tools for analysis should accompany AI design and adoption.
2. Increase diversity in AI teams at all levels. It is the only way to generate a kaleidoscopic perspective on AI, from inception to use.
3. Create self-feeding learning systems when implementing AI. Sustainability implies re-evaluation and perfectibility. Keep in mind that the end beneficiaries of an AI application should be humans, so look at how humans adopt and adapt the system to their needs (not vice versa).
There is no one-size-fits-all solution for ethical AI, as ethics and AI will co-evolve (they already do). However, there is a real danger of forgetting that AI should be mostly informative rather than prescriptive, and of ignoring the foibles of the belief that all ethics are universal. While ethics rest on the universal value of human life, they remain context-specific. Thus human-machine interaction gains new dimensions where ethics is concerned.