Alec Balasescu

The Unequal Human

The disruption that smart technology and AI bring is comparable, in recent history, to the first phases of industrialization. Social movements, often violent, and major cultural shifts, ranging from food production and consumption to changes in family structure, affected every level of human existence. It is happening again. How will we deal with change this time around?


One phenomenon already observed is direct, physical resistance to the introduction of new technology in a variety of domains. Most recently, attacks against driverless cars on public roads have been reported in Arizona, near Phoenix, where Waymo is testing its products. From aggressive driving to throwing rocks and slashing tires, the stories are confirmed both by local police and by the “safety drivers” in the driverless vehicles.


While police intervene and stop the attacks when they can, or issue warnings to perpetrators caught on the spot, the company itself has so far declined to press charges, or even to allow police to view video recordings that might help identify the attackers.

Despite the company’s decisions, wise in my opinion because we are in as-yet-uncharted legal territory, one aspect of the legal questions raised by these incidents points to the unequal position from which humans relate to AI: responsibility, legal or moral.


In any conflict with a self-driving machine, humans are by default in an asymmetrical position, as both the creators of the machine and the parties solely responsible for its behaviour. In legal terms, inflicting harm on an object, even a self-driven one, makes the attacker a potential felon under property law. The reverse does not hold: a driverless car that harms a person will redistribute responsibility across a series of human agents or corporate entities, within a legal framework that has yet to be designed.


The law makes very limited and highly disputed room for non-human agency, yet non-human agency is a well-established concept among anthropologists and philosophers, with Bruno Latour as its pioneer, and it may be worth revisiting when we talk about AI. The decision-making mechanisms involving AI need to be better understood before their functioning can be legally regulated. If deep learning and neural networks hide the decision-making process from the programmers, are the programmers still responsible for how the machine behaves? Will responsibility be taken up by the corporation that builds the system, or by the one that puts it into use? And what happens when humans are tempted to suspend judgment in the presence of automation? Interesting illustrations of this conundrum appear both in smart-city scenarios and in health and medical services. These questions continue to be debated, and the answers will be as dynamic as the AI phenomenon itself.


For the moment, however, only one question has a clear answer: humans are responsible by default when machines are harmed (can we even speak of harm in the case of a machine?), while responsibility becomes diffused and redistributed when humans are hurt by technology. I think we need to take a second look at this.

More about MIT’s Moral Machine in a future post.

Paris by night - photo by Alec Balasescu
