Regulating Trustworthy AI: a Fundamental-rights Based Approach

ARTICLE

23 Apr 2019

There is no widely accepted definition of artificial intelligence (“AI”). Within the field, there is a growing trend to define AI by reference to its objectives and the manner in which AI-based systems seek to accomplish them.

Following suit, the European Commission (“EC”), the supranational institution currently spearheading discussions in the field at the European level, explains that AI refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.[1]

Recognising the ever-growing strategic importance of AI, the EC has stressed that the socio-economic, legal and ethical implications of this transformative technology must be carefully considered. In its 2018 communication, the EC defined a European approach to AI based on three pillars:

  1. Being ahead of technological developments and encouraging uptake by the public and private sectors;
  2. Preparing for socio-economic changes brought about by AI;
  3. Ensuring an appropriate ethical and legal framework.

In support of these pillars, the EC appointed the High-Level Expert Group on Artificial Intelligence (“AI HLEG”), whose work includes the elaboration of recommendations on future policy developments. As one of its first deliverables, the AI HLEG presented a first draft of AI ethics guidelines[2] to the EC on 18 December 2018. Following the closure of the consultation period, the official Ethics Guidelines for Trustworthy AI were published on 8 April 2019 (the “Guidelines”).

The Guidelines build on the concept of AI by introducing the notion of “trustworthy AI.” Underlying this idea is the belief that people will embrace AI more confidently if they are able to trust the technology. Accordingly, trustworthy AI is composed of three cumulative elements, whereby the technology is:

  1. Lawful, complying with all applicable laws and regulations;
  2. Ethical, ensuring adherence to ethical principles and values;
  3. Robust, from both a technical and social perspective, in view of the reality that AI systems can cause unintended harm. 

The Guidelines set out a framework for achieving trustworthy AI, including the adherence of AI systems to four ethical principles (referred to as the “Ethical Imperatives”). Recognising that new technologies should not mean new values and should not erode existing societal values, the Ethical Imperatives find their roots in the rights protected under the Charter of Fundamental Rights of the European Union. They are: (i) respect for human autonomy; (ii) the prevention of harm; (iii) fairness; and (iv) explicability.

Drawing upon these Ethical Imperatives, the Guidelines outline how trustworthy AI systems can be realised through adherence to seven requirements, which can be implemented through both technical and non-technical methods. These are:

  1. Human agency and oversight – Users should be able to make informed decisions about AI systems and to monitor the overall activity of the AI system, including being able to decide when and how to use the system in any particular situation.
  2. Technical robustness and safety – AI systems are to be developed using a preventative, risk-based approach and must behave as intended, minimising unwarranted and unexpected harm and preventing unacceptable damage.
  3. Privacy and data governance – Personal data handled by AI systems should be kept secure and processed in a manner that protects privacy.
  4. Transparency – AI systems should be capable of being traced and understood by humans and should be identifiable as technology-based systems.
  5. Diversity, non-discrimination and fairness – AI systems should enable inclusion and diversity while ensuring equal access and equal treatment to all users.
  6. Environmental and societal well-being – AI systems should encourage sustainability and ecological responsibility, and research should be fostered into AI solutions addressing areas of global concern, with the goal of benefiting all human beings, including future generations.
  7. Accountability – Mechanisms should be in place to ensure responsibility for AI systems and their outcomes, at all stages including their development, deployment and use.

The EC’s Guidelines provide a framework for trustworthy AI and represent a form of soft law which incorporates existing legal principles. Regulating AI in this manner strikes a balance between the need for governance and the avoidance of heavy-handed government intervention. Furthermore, by adopting a human rights lens, the Guidelines demonstrate that the classes of risks and harms associated with AI systems fall within the purview of a familiar framework, namely human rights. This type of lawmaking in the field of technology finds its basis in the principle of ‘technological neutrality’, which requires that the same general rules apply to legal relationships involving the use of emerging technologies, other technologies and no technologies at all.

AI raises profound questions across ethical, legal and regulatory domains, touching upon a myriad of fronts. On the other hand, while human rights law covers a broad class of risks and harms, it is not equipped to address all of the known and unknown concerns pertaining to AI. Gaps will indeed remain between rights, principles and the design, development, deployment and usage of AI systems. Moreover, one should keep in mind that implementing, and monitoring compliance with, the principles enshrined in the proposed ethical framework is easier said than done. In particular, the current state of the art in AI, based principally on deep learning techniques, is largely incapable of explaining its outcomes, yet explicability is required – and rightly so – under the proposed ethical framework. New approaches are needed if we are, as we should, to uphold the proposed ethical principles.
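
By way of illustration only, and not drawn from the Guidelines themselves (which prescribe no particular technique), post-hoc tools such as permutation feature importance can hint at which inputs influenced an opaque model’s predictions. The minimal Python sketch below, assuming the scikit-learn library and synthetic data, shows what such a probe offers and why it falls short of a genuine explanation of an individual decision.

```python
# Illustrative sketch only: the Guidelines do not prescribe any technique.
# Permutation feature importance is one post-hoc method that hints at which
# inputs drove a black-box model's predictions; it is not a full explanation.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

# Synthetic data standing in for a real decision-making context.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: reasonably accurate, but internally opaque.
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X_train, y_train)

# Post-hoc probing: shuffle each feature and measure the drop in accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance ~ {importance:.3f}")
```

Even where such tooling is available, it only approximates influence after the fact; it does not provide the reasoned account of a particular decision that the principle of explicability ultimately calls for.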

About the authors:

Olga Finkel is the co-managing partner of WH Partners. She is widely respected for her knowledge of gambling and technology law. In addition to a Doctor of Law degree, she holds an MSc in Computer Science. Her specialisation, backed by extensive experience and an in-depth background in both law and computer science, includes data-driven and technology-driven businesses (AI-based, blockchain-based, FinTech, RegTech), gaming, data privacy and data security. Olga is part of the Legal & Ethical Working Group of the Malta AI Task Force.

Tessa Schembri is a trainee lawyer at WH Partners. She is a Bachelor of Laws (Honours) graduate currently reading for a Masters in Advocacy at the University of Malta. Tessa has a profound interest in technology law and has completed an undergraduate dissertation entitled ‘The Legal Status of Cryptocurrencies in the EU.’

--

[1] Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions on Artificial Intelligence for Europe (25 April 2018). Available at https://ec.europa.eu/digital-single-market/en/news/communication-artificial-intelligence-europe.

[2] Available at https://ec.europa.eu/digital-single-market/en/news/have-your-say-european-expert-group-seeks-feedback-draft-ethics-guidelines-trustworthy.