Paper presented at the World Policy Conference – Health, 2 December 2020 – by Daniel Andler

Abstract. Ethics is a major concern in healthcare, and biomedical ethics, over the last six decades, has shown how to systematically integrate ethics in the decision-making processes at all levels, from policy to individual care. The coming technological revolution, powered by AI, robotics, genetics and other areas of biology, raises new problems. One is that there is no well-established way of developing and deploying technology while holding on to an ethical line. Another problem, in the case of present-day massive technologies, is that they are part and parcel of globalization, so that effective governance must transcend national borders. Ethics is not just a matter of enforcing agreed-upon norms. It is also, and mainly, a common search for norms, a never-ending process. Ethics is produced on the fly, as new possibilities arise, new values emerge, new expectations crystallize in various social groups. Over the last three decades health technologies have produced a steady flux of revolutionary inventions, disrupting established practices and common understandings of some basic ethical and anthropological notions. Hence the need for guidelines, which provide a legible representation of the ethical and legal issues, allowing agents in the field to navigate the situations they encounter daily. But guidelines depend on the ethical reflection conducted by society at large; they are neither the beginning nor the end of the process of ethics. Many obstacles stand in the way: technological fatalism, the concentration of power and knowledge in the hands of a few, and above all the rush for dominance, whether by corporations, nations or individuals. What is needed above all is the time for “the new forms of the good to take a definite shape” (Joseph Raz): slowing down may be the most urgent need in our epoch of technological revolution.


Ethics is important. But why is it especially important in healthcare? Because those on the receiving end, singly and collectively, have a lot at stake, constitute a captive market, and are vulnerable; because, on the providing end, in both public and private arenas, the budgets are enormous, as are the opportunities for enrichment; and because research and the clinic are intermingled yet pursue different agendas, raising serious conflict-of-interest issues.

That much is fairly obvious, though somewhat vague at first. It has taken the emergence, in the 1960s and 70s, of the field of bioethics, together with medical and clinical ethics, and its subsequent considerable development, to give substance to our intuitions in this regard, to reveal unsuspected complexities, and to show that bringing ethics to the fore can be productive.

The question before us today is whether the advent of hugely powerful, disruptive technologies alters the problem situation and in what ways. Part of the problem is globalization, which both amplifies these technologies and is largely enabled by them. Their governance must accommodate interdependence between nations, on pain of remaining ineffectual, and intergovernmental ethics is no simple matter. But first we must ask a more basic question.

1. What is ethics?

We know an ethical issue when we see one. When we hear about handicapped children having been injected with cancerous cells to further research programs in oncology, our ‘ethical bell’ gives out a loud ring. When we find out that Boeing let the 737 Max fly after the first crash although they knew what caused it, our bell sounds again. These are cases of what we think of as clear violations of ethical norms. A different sort of case is exemplified by end-of-life decisions in intensive care units: ethics is involved, we clearly sense, but in the form of dilemmas rather than violations. Examples abound, and our daily and professional lives, however callous or forgetful some of us may be, some or all of the time, are strewn with ethical issues.

Being familiar with the phenomenon doesn’t entail being clear about it. Institutions with the term ‘ethics’ in their title or mission struggle with spelling out what it refers to; they tend to fall back on examples, as I’ve just done. The best definition I can suggest is that of philosopher Joseph Raz: Ethics is the endeavor to give substance to the abstract category of the good.

To ‘give substance’ can be understood in two ways. If we allow ourselves to look back in time, we can imagine a moment where oncological experimentation on handicapped children was seen as a dilemma, not a violation: physicians looking for a cure were laboring for the long-term benefit of humanity, and pondered whether this noble end justified the means. Going back just a little further, it perhaps did not occur to physicians that it might raise any ethical issue at all. It is precisely that sort of case which gave birth to the field of bioethics. And what these examples show is that ethics isn’t just about making sure that ethical norms are followed; it is also, in fact for the most part, about creating and discussing the norms to be established. These are two different ways of giving substance to the abstract category of the good.

Moral codes connect the two: they provide a temporary conclusion to the search for norms, and they make precise what it is to violate them. The Ten Commandments specify what it is to honor the good in a number of generic, familiar situations. It may be thought that such a code of conduct, suitably amended and completed, should suffice. It is important to recognize that it does not. First, because no code can come close to covering all the types of situations that people, organizations and societies run into. Second, because when new possibilities arise and new practices emerge, they often require a fresh ethical treatment. The existing ‘ethical blanket’, so to speak, cannot be stretched to cover the new territory.

2. The impact of technology on ethics

This is precisely what technology brings about: new possibilities and new practices. The more powerful the technology, the more areas it can penetrate, the more numerous the possibilities, the more outlandish and possibly transgressive the practices. The potential for disruption is even greater when cutting-edge innovations converge, creating synergies that defy extrapolation—this is what has been unfolding in the last couple of decades, as aptly described at the beginning of the millennium under the label ‘NBIC’ (nano-bio-info-cogno)1. Data science and AI boost genetics and drug research, nanomaterials boost robotics, AI and nanomaterials conspire to deliver brain-machine interfaces, smartphones boost the internet, which enables data collection, which feeds deep learning models, which empower AI, etc.

Examples in the health sector abound. We are about to hear about genetic engineering and the ethical ‘red line’ of germline modification, and in the next session about enhancement and the goals of transhumanism. The commodification of DNA sequencing raises a series of ethical conundrums bearing on privacy violations and incidental findings (unwanted revelations about exposure to incurable diseases or kinship relations). Patients’ consent to the therapeutic or palliative use of sensors, cameras, tracking devices and robots raises issues for non- or partially competent patients. E-health can lead to the accumulation of untoward amounts of personal information on some or all members of a population, with the attendant risks of surveillance and control, or unequal protection and coverage. Generalization of systems of e-health can cause increased inequalities, either because the underprivileged lack access or the minimum skills to navigate the system, or because only the more opulent sectors of the health system can afford the best, up-to-date information and apps; or again because personal, face-to-face care might increasingly become a privilege. Progress in intensive care technologies leads to insoluble end-of-life problems. Progress in neuroimaging leads to intractable problems with comatose patients. The health sector is particularly vulnerable to misinformation, and thus concerned with the ethics of free expression. Relatedly, the anti-vaccine movement raises the typical ethical question of individual liberty vs. the protection of society, which may not seem to arise from technology, yet has a global dimension brought about by the infosphere.
A whole other set of concerns arises from the enormous costs involved in the deployment of digital systems, medical equipment such as surgical robots, and the discovery of drugs for rare diseases or of vaccines against new viruses—conflicts of interest, political interference, and the sharing of costs and risks pose ethical challenges, as we are witnessing right now. This is just a sample of the ethical issues arising specifically from technological interventions in the health sector.

3. When does ethics come in?

It is often said that intractable ethical issues arise when technology has been given free rein to release new tools before due consideration is given to what situations their release might lead to. “Think first”, the age-old motto of practical wisdom, is offered as the key to avoiding a situation where it is too late to backtrack, and where the best one can hope for is to limit the ethical damage. Familiar examples are provided by artificial intelligence, which is now scrambling to turn into a force for the betterment of the human condition; by the internet, which is due for a ‘reset’ according to critics, including its founding father Tim Berners-Lee and our speaker in the next session, Carlos Moreira; and by digital social networks, whose destructive effects are well known—all three of which are mutual enablers.

“Think last” therefore seems a bad idea, but “think first” doesn’t work either. One reason is that before the technology is at least somewhat developed and deployed, debating about its potential risks remains abstract and general: on that level, experience shows, no consensus can be reached, no decisive argument can be made in favor of pursuing or dropping the idea. Another reason is that even when one can begin to discern the shape and likely effects of the proposed device or set-up, it is impossible to foresee how, once deployed, it will interact with other novel systems emerging at the same time. Yet more importantly, it is impossible to guess what scenarios will play out as society at large and communities take hold of the new technology. One example from the distant past and a distant country is that of the French telephone operator’s Minitel, an ancestor of today’s tablets: the device was distributed for free to all telephone subscribers in order to replace the costly and wasteful paper directories; but it soon came to be used for the ‘Minitel rose’, the ancestor of on-line dating and prostitution, with the attendant ethical and legal problems. An example from the future is the self-driving car, whose full deployment, if it ever happens, is sure to generate countless ethical puzzles, far beyond the notorious trolley problem: a self-driving car, a fleet of self-driving cars lend themselves to uses that we cannot imagine ahead of time, as they would satisfy longings and honor values that will come into being only once (and if) these vehicles populate our streets. Finally, an example from our time and age, in the health sector, is resuscitation technology: thinking first could not possibly have led to giving up on the idea, nor could it have helped avoid the distressing situation brought about by the discovery of forms of near-eternal, irreversible coma, coupled with the emergence of entirely novel religious and legal norms.

The right time for ethics is neither after nor before: it is now. Ethics is a permanent feature of human action; it is guided by action as much as it guides it. It is an ongoing task that proceeds in spurts, on the fly, as fresh challenges are brought about by new types of situations arising, new practices crystallizing, new expectations being expressed, new understandings emerging.

4. Where is ethics produced?

It may be thought that ethics is primarily produced by dedicated boards, councils, committees that establish guidelines, charters, codes of conduct, recommendations. These however are pragmatic tools that help agents on the ground, at all levels, to act in accordance with ethical principles that have been agreed upon, or tacitly endorsed as the case may be, without having to reflect on the principles and on how to apply them, a time-consuming and difficult task. What the committees achieve is to turn a complex web of ethical and closely related legal issues into a set of feasible guidelines, prima facie compatible with economic, social and practical constraints. These mid-level principles must be expressed in a pared-down vocabulary ensuring shared understanding and fostering clarity in communication. Agents can then effortlessly take them on board, memorize them, tune them to local circumstances and transmit them further down the line.

The task of these dedicated committees is by no means easy, and it does involve its members in ethical reflection. But it is limited in scope, for several reasons. Membership is limited to professionals and does not extend to the variety of stakeholders and end-users whose life is impacted by the technologies. The presence of industry representatives, though necessary, comes with the risk of less than full disclosure of interests and information. More importantly perhaps, the decision process is constrained by a predetermined set of terms and by rules that are thought to be necessary to achieve a consensus among the representatives of various legal, political and religious creeds. Not much room is left for questioning the basic assumptions driving the industry.

In the case of technology-enhanced healthcare, there exists an entire field, Health Technology Assessment (HTA), devoted to answering questions of the form: Do the ends—the presumed benefit in terms of health—justify the means—the cost of the proposed technology, together with the systemic changes and collateral effects it would bring? But although the field explicitly includes the ethical perspective in its official charter, by its own lights it has so far struggled to do so, probably because deliberation in HTA is even more constrained than in guideline-producing bodies.

Ethics is produced to a large extent outside of these bodies, in two kinds of settings. In the first kind, practitioners, philosophers, social scientists debate about what general shape the good assumes in the field at hand (in our case, technology-enhanced healthcare). They aim at identifying principles that should be upheld come what may and be systematically called upon when various feasible options are considered, in the light of their conception of the kind of future they want. These principles are not set in stone, however: the path that has led to them continues, as they are constantly reinterpreted, refined and occasionally updated. Such settings need not be restricted to formal structures. In fact, they should not be: however carefully balanced a committee might be, in the end it includes only a few people, generally picked among those who are most eager and prepared to intervene in that setting, and it leaves out most people, including those who may well have a deeper understanding and the willingness to take a step sideways. Moreover, no time limit can be set on the process, no protocol can be imposed on the flux of ideas. The conversation must be allowed to unfold in many venues, on different time scales, and assume many forms, including books and scholarly papers as well as debates of all kinds.

Bioethics, an area closely related to, and largely overlapping with today’s topic, provides an illuminating example. It settled some decades back on four major principles: autonomy, beneficence, non-maleficence, justice. These were not the result of any single committee’s work, but rather the temporary conclusion of an extended discussion in many venues, drawn by the two authors of a celebrated treatise, first published in 1979 and now in its 8th edition2. These principles are understood as useful conceptual guideposts, organizing a complex and evolving process of collective intelligence that not only pursues ways of applying these principles but can go as far as questioning their value.

The other kind of setting in which ethics is generated or pursued consists of local ethics committees and other non-formalized venues, where decisions are made regarding singular cases — how to treat this particular patient in these particular circumstances; whether to adopt this particular piece of software in this particular healthcare system to deal with this particular population of patients, etc. Many of these decisions can be made by way of routine application of existing guidelines and codes of best practices, and/or by reference to closely resembling cases. But not all decisions can be disposed of in this way, because different principles back incompatible recommendations, or because no principle seems to apply, or because the values of the people concerned clash with one another or with some principles normally honored in the time and place where the case has arisen. The ‘labor of ethics’, in Marta Spranzi’s (full disclosure: my wife) felicitous phrase3, deployed in such cases serves a dual purpose: deliver an acceptable decision, and further the understanding of the underlying ethical issues, often suggesting ways to expand and refine it.

5. How can ethics find its place in today’s technological surge?

How ethics is produced in the area of bioethics and clinical ethics is fairly well understood. The process consists in a collective effort in which principles and practices are considered in alternation and side by side. The quest for general principles or rules follows a cycle of iterations, starting from a preliminary understanding of the actual and possible practices, and proceeding to formulate some principles, rules and recommendations on the basis of a first assessment of what is right and what is wrong in the set of practices used as a starting point. These principles and rules then redraw the boundaries of the set of theoretically acceptable or preferred practices. Tested in the field, however, these reveal new issues, calling for a reconsideration of the principles and rules. Meanwhile, scientific, technological and clinical novelties occur, which also call for a revision of principles and rules. And so the cycle goes on. When it comes to making particular decisions, principles and practices are simultaneously enlisted, and in some cases clash, leading to a call for reform. Whatever the exact details, which vary as one moves from one issue to another or one area of applied ethics to another, the responsibilities are fairly clearly apportioned, no stakeholders are systematically excluded from the collective reflection, and the policy decisions can be revised in the light of experience within a reasonable timescale.

So much cannot be said when it comes to the new technologies, making an integration of the ethical dimension particularly problematic. As is being extensively discussed, the responsibility for developing new technologies rests on a minuscule group of people with exclusive access to knowledge, power and money, who answer to virtually no one. Deployment involves governments, and thus to some limited extent, via democratic representation, a larger set of people; in practice however, the decisions rest essentially on the technocratic structure; the social gap remains immense. Just as wide is the temporal gap: by the time a technology which has been selected for development and deployment hits the world, it has gone from emerging to entrenched, and previous ways of doing things or inhabiting one’s surroundings have been foreclosed. One further problem is that the technologies most transformative for health are generally (though not exclusively) global in nature, so that national policies are mutually dependent and must be coordinated in order to have any lasting effect.

These problems are well known, and humanity is not at a complete loss before them. In fact, in the last several years we have been witnessing a rich set of initiatives aiming at turning around the direction of compliance, from humanity complying with the demands of technology to the reverse. One of these in fact is this very conference, WPC Health, and we will hear this call voiced by Carlos Moreira in the next session. But calling and hoping do not amount to achieving. There are many obstacles standing in our way. Technological determinism, together with pessimism arising from historical evidence, may discourage too many people, leaving the rest too weak to change the status quo. Conflicting interests, mediated by politics, will continue to be an essential driver of technological evolution; in fact, the battle cry of putting humanity first founders on the issue of whom we take humanity to be: values, situations and priorities differ. Finally, we know, again from experience, that when push comes to shove ethics tends to be an afterthought.

In the face of these obstacles, we need to be imaginative and tenacious, but there is no reason to despair: we are witnessing a vigorous pushback against fatalism. I do have a worry, though. We also need to be patient. For as Joseph Raz puts it, “The new forms of the good take time, and require the density of repeated actions and interactions to crystallize and take a definite shape, one that is specific enough to allow people to intentionally realize it in their life or in or through their actions”4. Where Raz says “people”, we should read, for our purposes, “society”, but the point remains. What we are witnessing in AI, robotics, and above all biotechnology is the mere beginning of a revolution, or so we are told. The rush to dominance, by nations, corporations, scientists, is underway. In such a moment in history, how on earth can we be collectively persuaded to slow down so as to leave time for the new forms of the good to take shape? This is the question with which I leave you.


  1. M. Roco and W. Bainbridge, eds. Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science, NSF/DOC-sponsored report, Arlington, 2002. See the European Commission’s report Converging Technologies — Shaping the Future of European Societies, Alfred Nordmann rapporteur, 2004.
  2. Tom L. Beauchamp, James F. Childress, Principles of Biomedical Ethics, New York: Oxford University Press, Eighth Edition, 2019.
  3. Marta Spranzi, Le Travail de l’éthique, Bruxelles, Mardaga, 2018.
  4. Joseph Raz, The Practice of Value, New York: Oxford University Press, 2003, p. 58.