Acknowledgments:
This paper was written during a one-month residency at the Paris Institute for Advanced Study under the "Paris IAS Ideas" program.
In the spring of 2024, leaders of a private corporation asked me to speak about the ethics of Artificial Intelligence (AI) at a meeting of their employees who were interested in how AI would affect their work and what they could do about it. I was invited as an "ethics expert" together with a high-level civil servant from the Swiss government who was responsible for developing policies to govern AI applications and development. During a preparatory conversation to coordinate our interventions, the civil servant said he expected I would speak about the set of ethical principles that are prominent in relation to AI. Such principles include transparency, justice/fairness/equity, non-maleficence, responsibility and accountability, and privacy (Jobin et al., 2019). Specifically, the civil servant expected me, as the "ethics expert," to identify and distill the general values that societies hold dear into concrete and actionable concepts -these principles- that could be used by corporations or governments to assess the risks of technologies and develop technology policies and practices accordingly.
The expectations of the civil servant are representative of the way lawmakers, technologists, and publics often anchor on principles and experts (who define, deploy, and speak authoritatively on principles) to provide answers on how to manage technologies responsibly. But a principles- and expert-forward way of thinking about AI, ethics, and governance is neither inevitable nor neutral.
Thinking about ethics as principles, codes, or rules that can be formulated by experts and used to understand and guide technologies is a result of historical developments in North American and European societies and their institutions (Boenig-Liptsin, 2022). It is a product, in part, of the influence of Western moral philosophical traditions of utilitarianism, deontology, and virtue ethics, each of which aims to arrive at universally valid ways to say what is ethical (and what is not) even as they propose competing theories of the good. In this tradition, principles are the tidy outputs of trained philosophers' search for ideals as they encode ideas of what is just, what is deserved, what one has a right to, and what one owes others. Under the influence of this way of thinking and doing ethics, technology ethics takes ethical principles as the well-established value standards that a given technology can infringe upon, erode, or support.
AI ethics is also a product of the history of applied technology ethics, such as bioethics and engineering ethics (Hurlbut, 2015; Hilgartner et al., 2016). Working through decades of controversies related to science and technology -such as GMOs, nuclear and chemical accidents, bridge collapses- ethics applied to scientific and technological issues has developed practices and institutions that delegate to scientific and engineering experts the power to say what defines an issue of ethical concern regarding emerging knowledge and practices. This history of technology ethics has prepared blueprints and institutions for AI ethics today. For example, in 2017 a group of AI researchers gathered in the conference grounds of Asilomar, California to develop principles for safe and ethical AI in a direct echo of the legacy of the famous 1975 Asilomar meetings on the ethics and safety of recombinant DNA research convened by biologists ("Asilomar AI Principles," 2017). AI experts drew upon the same repertoire of what ethics means and how and by whom ethics may be done, reaffirming both the central role of principles and the experts (AI researchers, scientists, lawyers, philosophers) who speak for them.
Thinking and acting on technology ethics in terms of principles and expert assessment has some benefits. Principles provide a manageable set of concepts around which to rally and drive important advocacy and change in legislation, corporate policies, and technology design. When policies are written based on principles, this sends a signal that people in these societies find them important, want to live by them in the contexts of new technological abilities, and want their interactions with technology to affirm the values that the principles represent. Experts are the dedicated and knowledgeable spokespeople for ethics, who can be brought in to solve a problem and who can also carry some of the weight of responsibility if something goes wrong.
At the same time, this approach to the ethics and governance of AI has consequential setbacks. First, bare principles are abstract and elusive to work with. Even as principles are developed with empirical case studies and written into standards for technical design or regulations, they stand apart from lived reality. This makes it difficult to know how to put ethical principles into practice and what it means to abide by them. Adopting principles can become a competitive advantage for corporations and part of corporate identities (for example, Apple has made privacy a core element of its brand). But the disconnect between abstract ideals and lived realities enables organizations to declare that they abide by privacy while their implementations vary dramatically, revealing that there is no single, settled thing called "privacy" in practice. Most consequentially for democratic societies, principles work by appealing to the authority of an ethics abstracted and removed from ordinary life, and in doing so they contribute to excluding ethics from the everyday experience of the multitudes who live with technologies. By restricting ethics to principles and to the professional expertise needed to define and implement them, diverse individuals and societies lose the capacity and the responsibility to recognize the role of AI in their lives and to shape that reality.
To support ordinary people in recognizing and taking up the power and responsibility to shape ethical life in the age of AI, in this paper I develop "ordinary ethics" as a necessary complement to "expert ethics." Instead of leading the work of ethics with principles and experts, people engaging in AI ethics can lead it with the real-world sociotechnical contexts in which diverse people construct, challenge, and enact ethics in the course of daily life. This draws on the understanding of ethics as made and lived in ordinary life, and hence calls on anyone working to engage AI ethics to look at ordinary life as a site of ethical formation. Ethics is always-already grounded in sociotechnical reality, even as it is held up by societies as values to aspire to above the fray of the everyday and is ordained and assessed by experts from on high. In this context, rather than dictating any particular set of principles, the work of an "ethics expert" is to observe -and help others deepen their ability to see- where and how ethics takes shape in the ordinary. The opportunity is to anchor policy and technical decisions in lived realities and, where abstractions are needed to represent populations (a multitude of individuals' daily realities) and constituencies, to sustain and strengthen the connection between abstractions and the ordinary and between elite experts and regular citizens. The latter especially holds the promise of making democracies more resilient and citizens better able to determine the shape of their lives in the face of powerful and ubiquitous digital technologies.
Understanding the ordinary means starting from the ordinary. This means starting from the circumstances and challenges of daily life in which people encounter technologies and where technologies shape the texture and contours of life -significant aspects like language and forms of communication, identities and sense of self, relations with others, visions of desirable futures, and the design of institutions with power over their lives. The approach to ethics as ordinary draws on empirical work from Science, Technology and Society (STS) and moral anthropology, where scholars study how people, as actors and analysts of their own condition, work out what is good, right, and just in the process of going about their lives. Both moral anthropology and STS have come to the understanding of ethics as "ordinary" -unfolding in the unremarkable situations and activities of everyday life. As Michael Lambek, an anthropologist who studies morality in cultures ethnographically, puts it, "ethics is intrinsic to human life and can be understood as immanent within it" (2015: 1). To speak about ethics in the ordinary, Lambek prefers the adjective form ethical to the usual noun ethics. The adjective form draws attention to the processual, practice-oriented, and aspirational quality of ethics: something to be actively envisioned, aimed at, reached for, and enacted. The concept of "ordinary ethics" draws attention away from ethics as that which transcends the ordinary and is encoded beyond and "above" the fray of daily circumstances in principles, laws, practices, and institutions. In contrast, it shows ethics as formed and experienced in everyday encounters and relations among people. Ethics as ordinary is expressed and evidenced in how people build their lives. Attention to "ethics as ordinary" thus shifts the focus from sentiment and morality towards praxis: ethics is worked out in practice.
The sites of ordinary ethics: discourses, identities, representations, institutions
If ethics shows up in ordinary life, what can be said about ordinary ethics in the age of AI? To work on ordinary ethics in the age of AI means accounting for technology's role in daily life, especially those sites of life that are particularly salient for how people envision what is a good life to live and the means with which they pursue this good on their own and in collectives. Scholarship in STS has developed ways to attend to how science and technology are part of the ground of ordinary life in which ethics is enacted and comes to matter. This scholarship offers people seeking to make sense of AI ethics guidance on where and how to look for issues or moments of ethical significance that relate to AI.
The first insight is that human lives are entangled with material artifacts and ways of knowing in ways that constitute identities, moral imaginations, and social orders. The ethical, understood as aiming at the good life with and for others, is in dynamic "co-production" (Jasanoff, 2004) with our technologies and with the epistemological commitments that give rise to them and that they in turn reproduce. In other words, people construct their sense of what is good, right, desirable, or just together with the repertoires of knowing and doing (epistemologies and practices) of science and technology in the process of navigating life and making sense of a complex world.
The specific capabilities and potential impacts of emergent technologies are often the focus of analysts' ethical assessments. In these contexts, decision-makers frequently grasp for ethics to be the measure above the fray of life that can be used to assess, judge, or guide technology design or uses (as if "from above," or, to use another popular ethics metaphor, as "guardrails" to prevent technology's risks). Instead of a focus on technology as the object of assessment, STS steers attention towards the broader situations, social structures, political culture, and relationships from which technology emerges and which it participates in reconfiguring. In this wider framing, STS uncovers that what matters, ethically, goes beyond technology's direct interventions on bodies, environments, or social institutions. Technology informs more subtle transformations to foundational power relations upon which societies are built, and it configures collectively held senses of normality or rightness (Amoore, 2020). Rather than technology being the fast-driving car on "the road to progress" and ethics the "guardrails" to prevent a catastrophic slip, ethics is already inscribed in the very vehicle, the concrete, the road condition, the person as a driver (licensed or not), and their sense of appropriate speed and direction.
As we enlarge the field of technology's effects, from a focus on the direct impact on isolated processes to the subtle and co-dependent interaction and mutual configuration of technology and human lives, the sites of ethical inquiry expand accordingly. To understand how a specific concern arises around a technology, analysts must consider the cultural, political, and legal milieus of which technology and people are a part. Modes in which publics and experts, together or in opposition, make sense of the technology -such as debates, imaginations, visions, discourses, and representations- become crucial sites of inquiry into what is ethically at stake and for whom in relation to technology.
Research on the dynamics of co-production has identified four sites where ethically significant reconfigurations occur as people strive to make sense of the encounter of novel epistemic and technological capabilities with existing values, norms, and forms of life (Jasanoff, 2004). These sites are discourses, identities, representations, and institutions.
Discourses, identities, representations and institutions are sites where people make order and are the ready-made means by which individuals and collectives make sense of their circumstances in situations of change or disorder. When problems arise in societies that need solving or when there is a need for collective sense-making in contexts of emergent or controversial phenomena, discourses, identities, representations, and institutions are the places analysts can look for how societies create order by braiding social determinants with epistemic resources and, in the process, (re-)making these. I describe the four sites below and draw out their distinct ethical valence. Each site offers a reservoir of stability and constancy of value and meaning in relation to which technological novelty is assessed and understood. Each also registers change, so that the marks of transformation may be read as shifts in what societies deem to be ethically important.
Discourses refer to the content and form of communication. Scientific language takes on tacit models of nature, society, culture, or humanity from the circumstances in which it is produced. Reciprocally, social discourses, like law or the speech of patients, may reinforce and incorporate tacit understandings of science (Jasanoff, 2004: 41). For example, when experts, journalists, and lay people attempt to make sense of a new way of knowing or being with science or technology, they draw upon existing repertoires of speaking and, in the process of connecting existing repertoires with new experiences and technologies, modify existing language or produce new language. As the vocabulary and terms in which individuals name, observe, and debate what is of value, discourses contain ethics. Transformed or novel vocabularies and grammars reveal subtle shifts in what matters and for whom, pointing to the distinct ethical valence of discourses.
Identities refer to people's sense of self, how they are seen by others, and their social roles. Co-production scholarship shows that people's knowledge and the making of knowledge play an important role in the shaping and sustaining of identities (Ibid.: 39). Individual or collective identities offer repertoires for how to respond to situations of life in the midst of novel technologies in ways that reflect existing relationships of roles and their respective duties and responsibilities. These reconfigured terms of self, duty, and responsibility comprise the ethical valence of identities.
Representations concern people's descriptions and portrayals of the world. When experts produce representations of the world, these encode specific understandings of history, culture, and implicit models of human agency. Meanwhile, publics working to articulate and advocate for their concerns take up expert representations (like ethical principles) in their own sense-making and causes. Each representation of the world created with science or enabled and supported through technology has a distinctive ethical valence in the way that it encodes and validates certain norms and visions of rightness, empirical and moral, and serves as an articulation that people give themselves of what is a good life to live.
The last of the four sites, institutions, refers to organizations and established practices. Existing institutions are "ready-made instruments for putting things in their places at times of uncertainty and disorder," providing societies with tested repertoires for problem-solving (Jasanoff, 2004: 40). Like discourses, identities, and representations, institutions can also undergo transformations that reflect shifting commitments to ways of knowing and living in societies. Institutions are vessels for societies to work through their problems in structured ways as they aim at a good life in novel circumstances. Institutions serve as encoders and containers of value, and they bear the know-how by which societies have agreed to come to an acceptable resolution, which they themselves shape in the process. Therefore, looking at the ends and means of institutions and the subtle transformations in these can give analysts insight into the texture of ethical life in technological societies.
By describing and interpreting the interaction of epistemic and social determinants at each site, analysts can see and make visible people's ethical commitments in contexts where they grapple with making sense of and are learning to live well with a novel technology. Below I discuss each site of ordinary ethics in turn with examples from the age of AI. Separating out the sites, as I do, is useful for analytic clarity, but it does not accurately represent the reality of ordinary ethics where experiences and transformations at each site are intertwined with the others. So, for example, there are institutional elements to identity and in the construction of representations of the good. Similarly, representations of the good are inseparable from the identities of the individuals and collectives that are committed to them. While pulling things apart to gain some insight into ordinary ethics-in-the-making with AI, we should also always look for the dynamic interplay of the different sites.
Chatting about AI
Public discourse about AI took off in November 2022, when OpenAI made public its generative AI tool ChatGPT. Nearly everyone who tried generative AI wanted to talk about the encounter. On the news, in workplaces, and in the privacy of dinner table conversations, people set out to discuss the awesome capabilities of the new tools for answering questions, creating texts or images, and getting work done. People shared experiences of being awestruck and motivated to try out the technology in more spheres of their lives, as well as stories of disbelief and skepticism. Across discussions about the dreams and nightmares of possible future AI capabilities, as well as more prosaic discussions of the risks and opportunities of the technologies already in the public domain, a common thread emerged. AI discourse is strongly concerned with the question of what it means to be human among technologies whose computational power has, for at least four decades, gradually become integrated with human functions of reasoning, knowing, relating, and being. Discourse about AI betrays an underlying question about the human -as well as problematizing related normative-technological conceptions like "transhuman" or "more-than-human." In this light, the ethical principles that individuals as well as corporate and government leaders turned to in conversations about AI's new-found significance are attempts to configure the relationships among humans, machines, other beings, and environments in different arenas of collective life, from science and education to business, climate, and government.
Importantly, these conversations about AI's relationship to people and their values took shape within broader socio-political developments of concern in the world. The "age of AI," a label that became common to describe the present era in advanced technological societies, is not only defined by the new capabilities of publicly available AI technologies. It is simultaneously the age of advanced capitalism and globalization, the post-Covid age, the age of new and resurgent geopolitical conflict and nationalism, the age of climate anxiety, and the age of "post truth" questioning and challenge to existing orders of expertise and knowledge-producing institutions. In the same way that discussions of the promises and perils of AI intersect with other areas of concern in public discourse, so does the work of AI ethics need to be situated in relation to these bigger contexts and overlapping issues.
From this glance at the site of discourse, we see how overlapping sociopolitical contexts, together with the specific local and situated conditions of daily life in which these questions are debated and (at least temporarily) settled, create the frames within which AI ethics is expected to do its work. So, for example, any ethical principle that is advanced in the area of controlling and guiding developments in AI is also implicated in questions of trust in democratic institutions or in responses to concerns about climate change. Far from being neat and targeted outputs to drive technological development, AI ethics principles encode ideas about the right constellation of human-technology-environment relationships and participate in a cross-issue language and economy of value.
Identities and relationships with AI
In addition to being a subject of discourse, AI also reconfigures identities by informing the manner in which people speak and relate. To see this, I will take an example from the interpersonal realm, which concerns the sense of self and the relationship between self and others.
When ChatGPT became available, people started to experiment with uses that cut time on daily tasks, such as composing a polite email from a few pieces of input text (Harwell et al., 2022). This simple application of generative AI is a great example of its daily benefits in efficiency. It also speaks to subtle shifts in habits and culture. The use of generative AI to shortcut tasks of polite relation brings to the fore questions about the "uses" of politeness. Is being polite just a form one presents, or a genuine disposition towards the person one is writing to?
Importantly, this case is not only an example of the automation of politeness but also of the training of politeness. A student once shared with me that she never used to write expressions like "I hope you are doing well" in her emails, but since her chatbot started to suggest it, she has taken up the practice. On the surface, this is a tiny shift, and a shift in the direction of being more polite, so perhaps a welcome one in a world of common online aggression and bullying. But underneath, consider the importance of being "genuine" in what one says to another. Or think of how this practice of communicating with other people through the medium of generative AI is linked to questions about how people may treat other human beings after having become used to interacting formally and brusquely with virtual assistants. Transformations to human relationships that result from habituated practices of interaction in daily life lead some people to worry that this may be further alienating (US Public Health Service, 2023). Others, meanwhile, believe it can lead to new possibilities of expanded connection among humans and non-humans.
I bring up this example to illustrate another point about ordinary ethics: when considering ordinary ethics in the age of AI, it can be far more helpful to refrain from casting judgment (this is good or bad, progress or setback) and instead to note the difference. We can pay attention to the moments and nature of change, cultivate awareness, and sensitize ourselves to what is transforming in our daily ways of relating to and addressing one another. I call this sensibility "minding the gap," and it is needed now to mark what is shifting, how, and to whom it matters. Ultimately, these are questions about what it means to be human and what is the proper relationship between humans and machines -big questions that are trained, tested, and contested in the micro-interactions of daily life.
AI and visions of the good
In addition to configuring identities and relationships, AI also informs representations of what is a good life.
The company OpenAI has its headquarters in a historic building called the "Pioneer Building" in San Francisco. The name of the building captures the pioneering innovation spirit frequently associated with the San Francisco Bay Area, which combines an appreciation of technical novelty with specific ideas of what it means to live well with science and technology. This includes, for example, the value placed on innovation and the related ideas of what social and environmental risks are worth taking and who in society should bear them when innovations are introduced into the city or made public. This attitude informs OpenAI's approach to its product, ChatGPT. In a globally interconnected world, OpenAI's "release" reached millions of people at record speed. It introduced a tool, crafted within a specific culture and set of attitudes, into diverse communities around the world. This raises questions about the power of one company to shape, with its own values and visions of the good, those of other communities.
It would be too simplistic, however, to say that this is only a matter of imposition of values from abroad. ChatGPT's public release spurred regional interest and investment in AI around the world. In Switzerland, for example, some venture capitalists said that they wanted to "replicate Silicon Valley in Switzerland" (Rai, 2024). Meanwhile, others sought to develop home-grown solutions in opposition to the global technology leaders. For example, the Swiss Federal Institute of Technology (ETH Zürich) announced that it would be part of a new initiative called "Swiss AI," in part because of concern about dependence upon foreign technologies and the values they embody and reproduce. On the occasion of the announcement of Swiss AI, Professor Christian Wolfrum, then-Vice President for Research at ETH, said, "Science must assume a pioneering role in such a forward-looking field [AI], rather than leaving it to a few multinational corporations. Only in this way can we guarantee independent research and Switzerland's digital sovereignty" ("Joint Initiative for Trustworthy AI," 2023). This example speaks to how people are vying to identify and create institutional pathways to pursue specific understandings of what constitutes a good life with AI. It further raises the question of where and how such matters should be settled: by venture capitalists, by universities, or more broadly in society.
The sense of justice
Now I turn to my last example of how AI makes a difference for ordinary ethics by looking at the institutional site of ordinary ethics. This site recognizes how in everyday life people act towards the good not only through individual actions but also in institutional contexts. We subordinate actions to institutions and help shape how these institutions (should) work to achieve a fair, just, or right social order. Thus, while there were notable institutional elements when I discussed identities and representations in the sections above (for example, the scientific research institutions and their role in shaping representations of the good life with AI, or the education and communication institutions that inform the relationship between author and text), in this section I want to draw particular attention to how institutions, alongside discourses, identities, and representations of the good, are a crucial site where we can observe reordering in the age of AI.
For this, I turn to an example of the use of algorithms in decision-making: situations in which algorithms -some that use ML models and others that are less "intelligent"- are incorporated into institutional decision-making contexts, such as in the areas of healthcare, criminal justice, social welfare, education, or employment. In one such instance, at the height of the Covid-19 pandemic in 2020, the UK resorted to an algorithm to create a "calculated grade" for students on the A-level exam (university entrance exam) in lieu of having students sit for an exam (Department of Education et al., 2020). The "calculated grade" was based on students' previous performance and other data about their schools. This case, like so many others concerning algorithms in decision-making, drew a great deal of attention and contestation from students, families, political leaders, and educators in the UK, as well as from observers around the world, for the way that the automation of grading reflected and perpetuated structural inequalities in UK society. Students, teachers, family members, and supporters came out into the streets to protest this practice, and world news outlets circulated images of protesters holding signs with slogans that read, "Your Algorithm Doesn't Know Me" (Quinn, 2020).
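To make concrete how a "calculated grade" of this kind can subordinate an individual to statistics about their school, consider the following toy sketch. It is emphatically not the actual procedure used in the UK in 2020 (the regulator's standardization model was far more elaborate); it only assumes, for illustration, that each student's grade is read off by mapping their teacher-assigned rank within the class onto the school's historical grade distribution. The function and variable names are hypothetical.

```python
# Toy illustration only (not the actual 2020 UK model): assign "calculated grades"
# by mapping each student's rank in their class onto the school's historical grade
# distribution. The point: the output depends on school-level statistics, not on
# anything the individual student did this year.

def calculated_grades(ranked_students, historical_distribution):
    """ranked_students: list of names, strongest student first.
    historical_distribution: {grade: share of past students}, in grade order, summing to 1."""
    n = len(ranked_students)
    grades = {}
    cutoff, position = 0.0, 0
    for grade, share in historical_distribution.items():
        cutoff += share
        # give this grade to students whose rank falls within the cumulative share
        while position < n and (position + 1) / n <= cutoff + 1e-9:
            grades[ranked_students[position]] = grade
            position += 1
    # any students left over (rounding effects) receive the last, lowest grade
    for name in ranked_students[position:]:
        grades[name] = grade
    return grades

if __name__ == "__main__":
    students = ["Amira", "Ben", "Chloe", "Dev"]   # teacher's rank order
    history = {"A": 0.25, "B": 0.25, "C": 0.50}   # school's past results
    print(calculated_grades(students, history))
    # {'Amira': 'A', 'Ben': 'B', 'Chloe': 'C', 'Dev': 'C'}
```

Even in this simplified form, the sketch makes visible what the protesters objected to: a student's result is a function of their rank and of their school's past, not of their own work in the present.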
This is one of many examples, discussed in scientific articles and in the press, of discriminatory outcomes that can result from reliance on inappropriately biased data or algorithmic models. The slogan "Your Algorithm Doesn't Know Me," however, also tells of another dynamic, beyond the concern about discriminatory outcomes. With this slogan, the students appear to be saying that the issue is not only "allocative" (about what one gets or doesn't get) but also "representational" (Barocas et al., 2017), concerning who can speak on behalf of whom. Beyond contesting the fairness of any specific grade decision, the students gathered behind the slogan are also contesting a widespread idea in the age of algorithmic decision-making that algorithmic representation is just.
Since the 1970s, with the rise of distributive conceptions of social justice paired with developments of computing in public life, there has been a subtle shift in what we can call the "sense of justice" -a shared understanding in society of what is right and wrong and of the correct way to get to right (Boenig-Liptsin, under review). Over this period, we see a tendency to equate justice with what is algorithmically optimal: the production of a sense of justice calibrated to digital capabilities to identify and measure. The use of algorithms in decision-making processes supports the dominance of this more instrumental sense of justice. And even as technologists and regulators seek to address allocative harms through better data or adjusted models, this predilection towards a sense of justice as algorithmically optimal distribution is strengthened.
Unlike discrimination, this shift in the sense of institutional justice is not something that typically gets registered as an "ethical issue" of data, algorithms, and AI, and yet it is profoundly important to how technologies are designed, implemented, and regulated. How can societies compensate for this imbalance in the sense of justice? How can they strengthen the institutional capacities of governments to "know" their citizens in ways that are expansive and that support political voice and self-determination?
Conclusion
Across these ordinary, pedestrian examples of how AI is integrated into everyday life, we see how it makes a subtle difference in how people speak about value, how individuals see themselves and relate to others, how they envision what is a good life, and how they reflect their visions in the institutions they build and sustain. In each case, looking at the making of ethical life in the age of AI at the sites of discourse, identity, representations, and institutions points to the overarching question of "what world do we want to live in?" (Dratwa, 2019). Ordinary ethics suggests that who comprises the "we" in this question is as much a political as an ethical issue, such that who is engaged in answering the question (possibly on behalf of whom) is constitutive of the answer and must be on the table when discussing AI ethics.
While I started this essay by distinguishing principle- and expert-led ethics from ordinary ethics, I conclude by showing how the view from ordinary ethics can relate back to and support the work of AI ethics focused on principles. It allows analysts to gain a deeper understanding of the principles -not just as abstract values to aspire to- but as lively and situated. For example, from looking at the way that the principles are picked up in the press and in public discourses about the risks and benefits of AI, we learn that what matters is not narrowly the ideal that a principle represents but how these ideals are used to chart the moral and epistemic boundaries among humans, technologies, and environments. We recognize that what "freedom and autonomy" means encompasses not just informed consent but also the more subtle forms in which AI mediates human relationships and the sense of self vis-à-vis others by changing the conditions of possibility and the modalities of communication. We can see that what the principle of "non-maleficence/beneficence" means in the age of AI has to do not only with good or bad actions or intentions but with the broader visions of the good life with technology that specific individuals and collectives hold. Or we recognize how the principle of "justice/fairness/equity" is not only about fair algorithmic outcomes but about the broader perceptions of what is just that are shaped over time with the promises and practices of computing in public life.
As we look at these principles through the lens of ordinary ethics, we see them less as firm guidelines for how to act than as an invitation to examine them together -with interdisciplinary scholarship, from the perspectives of different societal stakeholders, and from within democratic deliberation. This suggests that the promise of ethics in the age of AI requires taking back a bit of power into ordinary life, for ordinary people, such that questions of what we value as societies and how to creatively keep and grow these values in contexts of daily life with powerful new technologies become a key element of the collective response.