Following an online consultation organised by the European Commission’s High-Level Expert Group on Artificial Intelligence (HLEG AI), we submitted our humanist contribution to the ongoing debate on the European ethical guidelines that should accompany the development of this technology (or rather these technologies), which carries tremendous potential but also poses very serious threats.
In brief, we welcomed the work done by the HLEG AI, which identified many areas of concern and put forward some interesting proposals to address them. However, we also expressed our worries about quite a few issues that the group does not seem to have tackled properly.
Explainability as a key element of user empowerment: the document considers that users should receive sufficient information about an AI application to give informed consent to the way it functions. We argue that the relationship between the user and the provider should be bidirectional and continuous, in order to empower the user and better inform the provider. In this domain, much remains to be done.
No evidence provided for certain claims: the document asserts without any evidence that the benefits of AI largely outweigh its potential threats. We argue that the understandable race to reap the economic benefits of this technology should not come at the cost of methodological rigour, especially when the task at hand is drafting ethical guidelines.
Like any technology, AI has to be subject to societal control: recognizing the threats, the document attempts to address them in advance, mostly at the design stage. We agree with this approach; however, given the nature and pervasiveness of AI technologies and their necessarily unknown future developments, we argue that it has to be complemented with strong control mechanisms, including quick and easy compensation when citizens are harmed.
No consensus on key threats: the document contains an entire chapter on critical threats, including identification without consent, covert AI systems, mass citizen scoring and lethal autonomous weapon systems. Yet it is precisely this chapter that seems to be controversial within the HLEG AI. We sincerely hope that the presence of many actors with strong economic interests in the domain of AI will not undermine the comprehensiveness of the ethical guidelines.
Avoid reproducing or reinforcing discrimination: we highlighted another threat: the risk that machine-learning algorithms reinforce discrimination. This can happen both intentionally and unintentionally, either because the data used to train the algorithm reflects discriminatory biases present in society, or because a moral dimension is missed in a decision that was identified as yielding better results from a purely quantitative perspective. The document identified the issue but seemed to assume that simply cleaning the data sets would address all problems.
Make sure third party interests do not prevail: in areas that are particularly sensitive, such as healthcare, we should be able to guarantee that decisions put forward by AI systems are based solely on the patient’s interests and not on the commercial interests of third party providers (e.g. health insurers). This becomes all the more important when it comes to the freedom to die in dignity. The patient’s perspective should always prevail, and it should be possible to set aside an AI decision or recommendation when it poses a threat to the patient’s health or dignity.
Much more stress on education: citizens must be aware of the functioning, the problems and the risks related to artificial intelligence. This implies that school curricula should raise pupils’ awareness of how algorithms actually work and promote genuine education in values, citizenship and critical thinking. Beyond school, public authorities must develop awareness programmes on these issues and foster public debate on artificial intelligence in general. This should be a priority of EU policies in the domain of AI.
Our main proposal: A European Observatory for Artificial Intelligence.
In order to properly address already identified and yet unknown concerns related to AI, we propose the creation of a European Observatory for Artificial Intelligence. This EU body would accompany user acceptance, boost public debate and implement societal control over the risks related to AI.
Such an EU Observatory would benefit society as a whole:
- citizens would become real actors of the development of AI and would be able to provide systematic societal feedback while being guaranteed efficient redress in case they are harmed,
- developers would benefit from society’s feedback and identify new threats or risks, find solutions that truly boost societal acceptance and improve their services as a whole,
- policy makers would be able to better identify when and where government intervention is needed and select the best policy responses.
In general, systematic and continuous societal oversight would ensure that the EU-level debate remains open and that the final version of the ethical guidelines is not treated as a definitive baseline for deeming a specific AI application ethical by European standards – “trustworthy AI, made in Europe”, as the document puts it.
In any case – whether through codes of conduct, standardization or regulation – responses to the issues exposed have to be carried out at European level. This would make it possible to leverage the weight of the Single Market and impose a set of high ethical standards at global scale.
Artificial Intelligence plays a growing role in our daily lives and socio-economic systems. The race for innovation on a global scale has already begun, and many countries and companies are investing heavily in this sector of huge economic and strategic potential. Sensing this trend, the European Commission published in April 2018 a Communication on Artificial Intelligence for Europe, in which it proposes:
- to boost Europe’s competitiveness and investment in this field;
- to prepare citizens and businesses for the socio-economic changes related to AI;
- to ensure the development of an appropriate ethical and legal framework.
To this end, the European Commission set up a High-Level Expert Group on Artificial Intelligence and tasked it with elaborating recommendations on future policy development and on ethical, legal and societal issues related to AI, including socio-economic challenges. Concretely, the group will publish two deliverables in the first half of 2019: ethical guidelines and policy recommendations.
Last week was the deadline to comment on the draft version of its ethics guidelines for “trustworthy” AI. You will find our contribution here.