Artificial intelligence

About The Research Ethics Library (FBIB). This article is a part of The Research Ethics Library, offering 75 specialised articles on topics linked to research ethics, written by a large number of different experts and professionals. It also includes articles on relevant Norwegian laws and international guidelines. Taken as a whole, FBIB is intended to serve as an introduction to key topics in the area of research ethics. Each article contains additional links to further resources.

Its purpose is to help engender reflection and debate, rather than to create an encyclopaedia or provide universally applicable answers.

The perspectives and viewpoints presented in the FBIB articles do not necessarily reflect those of The Norwegian National Research Ethics Committees; all authors are responsible for their own perspectives.

About the author: Hallvard Fossheim is a professor of philosophy at the University of Bergen in Norway.

This article was first published in University World News.

Introduction

Recent and ongoing developments in artificial intelligence give rise to worries and uncertainty. At the moment, large language models (LLMs) receive the most attention, as they are widely distributed and able to do what is traditionally considered intellectual work.

This raises important questions about responsibilities, training and awareness, both when it comes to those developing the technology and when it comes to those utilising it – be they researchers, research institutions, research funders, national or multinational fora, or private individuals. In this situation, there is an obvious need for renewed research ethics reflection.

Research ethics broadly construed is about ensuring the ethical quality of research processes and products through constructive reflection and good practices. What is ethically relevant is not only what happens in a given project between the researchers, or between researchers and research participants, but also the possible or likely societal and environmental impacts.

New questions, established principles

Research ethics understood in this way is principle-based. That is to say, research ethics is not about applying some esoteric normative theory, nor is it limited to purely juridical issues; it is concerned with asking which values we deem central, and hopefully finding wise and practicable answers.

When it comes to the relations between researchers and the rest of the world, core values can be articulated in terms of three basic principles: respect for persons, good consequences, and justice. (‘The rest of the world’ includes anything from research participants to ecosystems, depending on the case at hand.) When it comes to relations among researchers, integrity is such a basic principle.

AI has not created a situation where we need new principles. The established principles remain the crucial ones. But we do need to think through the changing situations anew, in order to figure out how to respond well to a rapidly changing landscape.

In what follows, I will take up the four basic principles of research ethics just mentioned, briefly specifying in each case one or two ways in which they can help identify research ethics challenges in the framework of AI as it appears to us today. Not surprisingly, there will be more questions and challenges than there will be answers.

Respect for persons

Treating persons with respect includes treating them as persons, that is, as the sort of beings who should have a say in things that concern them. Voluntary informed consent remains a main way in which to ensure such respect.

The way AI works at present, however, represents a very real challenge when it comes to being informed. Deep neural networks can often reach better conclusions than humans. But finding out and understanding what lies behind such a conclusion, in the sense of explainability, is normally impossible at present.

The technology works so inscrutably and at such immense levels of complexity that the idea of informed consent becomes quite a challenge whenever what requires consent is based on an AI-generated conclusion.

See also the article on: Consent

Good consequences

Ensuring good consequences and avoiding bad ones in many cases requires one to think beyond an ongoing research project and to consider its wider impact.

Central to the discourse around the consequences of AI has been ‘the alignment problem’ (a concept developed to a great extent by Brian Christian and Stuart Russell), which has to do with shortcomings in both humans and AI, as neither is very good at understanding what humans in the end want or indeed should want.

Realising or securing our values in an AI context requires interpretation and communication between AI and humans (the alignment in question). Some have speculated that communicative failure could end in a global catastrophe; even barring such sci-fi-like scenarios, however, there are obviously innumerable ways in which a lack of alignment can lead to bad consequences and take us further away from the hope of good ones.  

Justice

Justice applies first and foremost to groups and inter-group comparisons. If one group (identifiable by, say, economic situation, nationality, ethnicity, or gender) bears the brunt of the potential disadvantages of your research, while another such group gets to reap the benefits, then you might have an issue concerning justice.

As mentioned, research ethics broadly construed also extends to the immediate and mediate products of research. AI technology that increases differences between groups, or treats them differently in an ethically problematic manner, thus becomes a matter of research ethics. Some instances of so-called predictive policing, the use of AI in deciding which areas to prioritise when it comes to police presence, have been persuasively shown to be racist.

While recent analyses have suggested there might not always be an ethically obvious answer to such questions, that is not to say that some implemented practices have not been much worse than others. Additionally, if an entire field of decision-making is fraught with real ethical uncertainty, that only makes it all the more crucial that responsibility for chosen practices is not dissolved through delegation.

Structurally similar issues of justice apply to a variety of cases. Although there are ways of tweaking the output in an AI learning process, there is still much truth to the ‘garbage in, garbage out’ adage in this context.

As has been very publicly demonstrated, an image recognition programme can seem very much like an objectively reasonable judge, until it suddenly characterises an image of humans in terms of another species, in a way that appears wildly racist. What data an AI gets to train on, and what sort of assistance it gets in the process, can both be matters of justice or fairness, as well as of objectivity in some perhaps more naïve sense.

Integrity

At present, many research institutions are producing and publishing guidelines on how to use AI. The focus is on both students and teaching, on the one hand, and researchers and their research activities, on the other.

It is best to keep in mind, however, that the two are not isolated from each other. Some of today’s students will be tomorrow’s researchers, and the practices and approaches they learn now will dictate the practices and approaches of future research. 

A serious breach of research (and student) integrity is plagiarism. It used to be the case that plagiarism could be identified by comparison with an available corpus of texts. At the time of writing, this is no longer the case: LLMs generate new texts through statistical combination, while the corresponding technology designed to identify machine-generated text is not sufficiently reliable.

Developing guidelines for how to use AI technology in student and researcher work is among the immediate needs across the board. With swiftly changing technologies and practices, living guidelines seem to be a constructive option.

That a set of guidelines is ‘living’ means that its recommendations are continually updated; this is part of the identity of the guidelines. Optimal guidelines are close enough to the technology to be useful, and at sufficient distance from it to last longer than a few weeks.

AI co-authorship?

One might think that a solution regarding the use of AI in research is to give it credit as a co-author. Being open about how one has used AI in one’s work is a tenet of integrity. Making it a co-author will not do, however. A reason one might think it does is that one thinks of authorship primarily in terms of credit or acknowledgement.

Acknowledgement is one important reason why researchers publish – being recognised for one’s hard work or brilliance, with the further prospect of improving one’s chances of employment, tenure, a pay rise, or funding. However, credit in this sense is not the only ethically relevant function of authorship.

Another defining feature of authorship is responsibility. Having one’s name on a publication means one takes responsibility for its content. This in turn is central to the trustworthiness of research, whether vis-à-vis other researchers, policy makers, or the broader public.

AI does not take responsibility. For all we know, this is something that might change at some point in the future. At present, however, we are far from a world where we can sensibly attribute ethical responsibility to AI technology.

Attributing responsibility for authorship to AI is wrong, then, partly for the same reasons attributing responsibility to AI is wrong when it comes to decision-making that affects people or the environment in other fields. For better or worse, responsibility remains with us.

Ethics and change

New developments might very well soon make the above reflections look seriously dated. This will not be because the principles of research ethics are dated, however, but because the technology, or our understanding of it, is.

The fact that things move fast, as AI-related technological developments are doing at present, is not an excuse for postponing reflection. On the contrary, swift change creates an added need for reflection. Correspondingly, we can even say that ongoing and upcoming AI developments are in a way a help to ethical thought, in the specific sense that they force us to think through, once again, the principles and values we depend on and cherish.