AI code of ethics

For the next two years, a pair of University of Copenhagen researchers will examine ethical dilemmas associated with artificial intelligence. Among other things, they will follow AI research at close quarters, implement ethics in the academic context and develop a code of ethics that describes responsible research practices in the field of AI.

Who is responsible for a driverless car crash? Is it acceptable for police to use algorithms to predict who will commit a crime? Or for a municipality to use social data to monitor specific groups? There are numerous dilemmas associated with new technology – as well as opportunities.

Alongside the technological AI work, two researchers will uncover ethical pitfalls in the use of artificial intelligence. The two are: Peter Sandøe, a professor of bioethics at both SCIENCE and SUND, and Sune Hannibal Holm, an associate professor of philosophy at HUM, who will move to SCIENCE on January 1.

“It is important that when we as a university involve ourselves in a new technology, like AI, this involvement manifests itself in a responsible way where our ultimate aims are thoroughly discussed,” say Sune Holm and Peter Sandøe, who over the next two years will examine AI’s ethical nooks and crannies.

Promoting nuanced debate

The two scientists cite previous decades of discussion about genetic modification, of plants and foods for example, where sceptics’ voices and fears about new technological possibilities were dismissed as resting on flimsy science. Their point with this comparison is that debate ought to be balanced, informed and nuanced, so that a community can benefit from new technologies in a way that makes people, as well as science and industry, feel heard.

“We don’t have the solution to the question of whether artificial intelligence and the use of algorithms are right or wrong, but we need to try and understand the considerations of sceptics and of those backing development,” say the researchers.

What is the potential consequence if we as a society don’t engage in a nuanced debate on the subject?

“Societal prosperity and welfare are at stake if we do not do it. If the products that our researchers are helping to develop, for the health sector for example, are to make a difference, they must be sold. This is only possible if we have had a discussion about how the products can be used, both to the benefit of citizens and with respect for them,” say the researchers.

In both research and instruction

Throughout the project, the two ethicists will follow researchers at the SCIENCE AI Centre (Department of Computer Science) and other relevant Danish and international research environments to gain first-hand knowledge about the most pressing issues related to ethics and standards for good practice in the field of AI. They will also organize debates and workshops, and contribute to the design of an AI code of ethics. Concurrently, they must implement the ethical dimension in instruction, so that students are equipped with a broader awareness as they head out into the ‘real world’.

“Much of the knowledge that students will have to work with will be deployed in a societal context. They will develop programs that have a major impact on people’s everyday lives, programs that quickly take on value-laden and ethical dimensions,” say the researchers, adding:

“If we are successful, once we have finished, students will not be able to avoid learning about ethics in relation to the use of artificial intelligence over the course of their studies. We also aim for our results to become an integral part of the research,” say the two ethicists.


(Release by University of Copenhagen)