What Latour can teach us about AI and its moral implications

Last week, the renowned French philosopher, sociologist, and anthropologist Bruno Latour (1947-2022) passed away at the age of 75. Latour is considered one of the most influential thinkers on modern science. His Actor-Network theory (ANT) and mediation theory provide an alternative perspective to the famous subject-object dichotomy, a dominant paradigm in science originating from Kant. The subject-object dichotomy is based on the idea that there exists a separation between us as the observing subjects and all other entities placed outside of us as the objects being observed. It determines not only our perspective on climate change as an object independent of human subjects, the focus of Latour's latest work, but also our relationship to Artificial Intelligence (AI) and its moral implications. In view of the critical ethical issues with AI systems currently pervading our societies, reviewing Latour's Actor-Network theory provides invaluable insights into the human network that can create or mitigate the threats of AI.

Actor-Network theory (ANT)

According to Latour, modernity can give us the illusion that as we innovate and progress, a greater distance opens up between us as thinking and knowing subjects and the objects we create and control. Latour argues instead that we only become who we are in interaction with other subjects and objects within a larger dynamic network. Most of the artifacts we use have become so commonplace that we do not even realize how they shape our relationship with the world and influence our actions. Take the bicycle: it is only understood as a bicycle in relation to its user, and vice versa, the rider becomes a cyclist in relation to their bicycle. While we are cycling, we do not constantly perceive the bicycle as a distinct object: in the habit of cycling, its presence recedes into the subconscious. We act as one, or as philosopher Don Ihde (2010) would say: 'we embody the artefacts we use, and they become part of our actions.' Only in the interaction with the cyclist is the bicycle used for cycling and recognized as a bicycle. Similarly, only in the interaction with human subjects are AI systems used with specific impacts and recognized as objects with certain moral implications.

In On Technical Mediation: Philosophy, Sociology, Genealogy (1994), Latour shows how thinking in the subject-object dichotomy is still deeply ingrained in our understanding of technology and its regulation. He uses the example of gun control. Proponents believe that "guns kill people": from this point of view, the mere availability of a weapon is enough to make someone shoot, and citizens must be protected against this through gun regulation. Opponents of gun control believe that "people kill people": the weapon itself is neutral and does nothing, and whether someone is a good or a bad person determines whether another person gets shot. Latour argues that neither view captures what actually happens: the person holding the gun and the gun held by the person form a hybrid actor, and the action of shooting belongs to this composite of subject and object.

How does the ANT relate to AI? Similarly, it is often said either that "AI is neutral" or that "AI harms people" -- regardless of the autonomous capacities of AI technology. The latter portrays AI as an exogenous force that happens to us and against which we must be protected. In the media especially, AI is often described as sexist, racist, or something we are in conflict with: after all, it might take over our jobs. In this view, a certain agency is ascribed to AI without directly considering its developers, owners, and manufacturers -- and their responsibilities. On the other hand, there are those who believe that AI is in fact a neutral and isolated object that can only be made harmful by humans. A model's shortcomings merely reflect human shortcomings -- a biased output is the direct result of a biased human. Garbage in, garbage out.
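
To make the "garbage in, garbage out" view concrete, consider a minimal, purely illustrative sketch. The data, the "proxy" feature, and all numbers below are made up for illustration; the point is only to show how bias encoded in historical labels survives training, even when the sensitive attribute itself is excluded from the model's inputs:

```python
# A minimal, hypothetical sketch of "garbage in, garbage out":
# a model trained on biased historical labels reproduces that bias,
# even when the sensitive attribute is not an input feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (never shown to the model) and a correlated
# proxy feature, e.g. a postal-code indicator (purely illustrative).
group = rng.integers(0, 2, n)            # 0 = majority, 1 = minority
proxy = group + rng.normal(0, 0.3, n)    # leaks group membership
skill = rng.normal(0, 1, n)              # legitimate signal

# Biased historical labels: the same skill is rewarded less in group 1.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([skill, proxy])      # group itself is excluded
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The bias survives training: positive rates differ sharply by group.
for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
```

Note that the disparity here originates neither in a "neutral" model nor in a single malicious human, but in the network of past decisions encoded in the training labels and the features chosen to represent reality.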

Neither perspective does full justice to our relationship with AI technology, and both create false expectations. According to Latour, technology can never be neutral because it always influences (or mediates) the way we carry out our actions. Technology plays an interactive part in our actions: as subjects, we project our goals and wills onto objects to initiate certain actions. A bicycle is designed to move a cyclist through cycling. Likewise, designing an AI system is by no means a neutral process. It is developed and used by people to initiate certain actions. It is therefore continuously mediated by social aspects, such as the goals we set and the reality we try to model into -- and influence through -- AI technology.

How does the ANT relate to AI and its moral implications? Asking whether AI can be moral is also a question that is often interpreted through the subject-object dichotomy. In Where are the Missing Masses? The Sociology of a Few Mundane Artifacts (1992), Latour states that it is nonsensical to ask where morality takes place and whether the technology itself can be considered moral. We also do not ask whether the alarm in a car is moral when it goes off because we are not wearing our seatbelt. The entire network -- the seatbelt, car, driver, car manufacturers, traffic rules, police, policymakers, and more -- together provides the context for recognizing the action of "putting on a seatbelt" as the effectuation of the collective and moral goal of safety. Hence AI itself cannot be moral or immoral: it is only through the network around the AI, and through its interactions with subjects, that moral goals can exist.

With Latour in mind, our research scope needs to shift from understanding AI in isolation to understanding AI as part of a larger socio-technical ecosystem. The Dutch child benefit affair, for instance, harmed minorities who were wrongly classified as fraudsters and left without a redress mechanism, as no human subjects fully oversaw or understood what actions the fraud detection technology was mediating as an object. This shows us that AI systems are always part of a larger interactive network of many stakeholders with different incentives. If it remains unclear who is involved at what moment, and what everyone's responsibility is, it is difficult to guarantee fair and transparent decision-making, or ethical AI. After all, being a fraudster is not a fixed property that someone possesses or lacks; it is a social construct that we try to identify through proxies such as the features of a person's profile.
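
Because "fraudster" is only observable through proxies, one concrete way the network around such a system can take responsibility is to audit who gets wrongly flagged. The sketch below is hypothetical -- the data and the over-flagging "model" are invented -- but it illustrates the kind of group-wise false positive audit that could surface the sort of harm seen in the child benefit affair:

```python
# A hypothetical sketch of one transparency practice for the network
# around an AI system: auditing a fraud classifier's false positive
# rate per demographic group.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """Share of genuinely non-fraudulent cases wrongly flagged as fraud."""
    negatives = y_true == 0
    return float(y_pred[negatives].mean()) if negatives.any() else 0.0

def audit_by_group(y_true, y_pred, groups):
    """Report the false positive rate for each demographic group."""
    for g in np.unique(groups):
        mask = groups == g
        fpr = false_positive_rate(y_true[mask], y_pred[mask])
        print(f"group {g}: false positive rate = {fpr:.2f}")

# Illustrative data only: true labels, model flags, group membership.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
groups = rng.integers(0, 2, 1000)
# A made-up flawed model that over-flags group 1 via a proxy feature.
y_pred = ((y_true == 1) | ((groups == 1) & (rng.random(1000) < 0.3))).astype(int)

audit_by_group(y_true, y_pred, groups)
```

Such an audit is not a property of the model alone: deciding which groups to compare, which error rates are acceptable, and who acts on the numbers are all choices made by the surrounding network of stakeholders.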

How can the ANT be used for our own research? The ANT provides a great starting point for understanding how bias and unfairness can be investigated within the public domain. We aim to address bias and fairness issues through transparency of the information exchanged between stakeholders, and of the design decisions that are made to realize a collective goal through technological objects. We can do this not only by focusing on the system itself but also by identifying stakeholder needs per social context and by investigating how information is exchanged between stakeholders with different technical backgrounds. Ultimately, we need to design the socio-technical ecosystems around AI and make them more transparent, inclusive, and democratic. At the Civic AI Lab, we aim to do so by looking into practical AI use cases in, for example, the domains of health, mobility, and education.
