Policymakers all over the globe are looking at how to tackle the risks associated with the development of artificial intelligence (AI). In April 2019, the EU published its guidelines on ethics in AI, thereby positioning itself as a frontrunner in AI policy. Ethical rules on AI, where they exist, are essentially self-regulatory, and there is thus growing demand for more government oversight. In the EU, there are strong calls to clarify the guidelines, foster the adoption of ethical standards and adopt legally binding instruments that set common rules on transparency and requirements for fundamental rights impact assessments, while providing an adequate legal framework for facial recognition technology.
Should AI be regulated?
The discussion around artificial intelligence (AI) technologies and their impact on society is increasingly focused on the question of regulation. Following the call from the European Parliament to update and complement the Union's existing legal framework with guiding ethical principles, the EU has carved out a 'human-centric' approach to AI that is respectful of European values and principles. As part of this approach, in April 2019 the EU published its guidelines on ethics in AI, and European Commission President-elect Ursula von der Leyen has announced that the Commission will soon put forward further legislative proposals for a coordinated European approach to the human and ethical implications of AI.
What is Artificial Intelligence? What are its risks?
Artificial intelligence (AI) commonly refers to a combination of the following:
- machine learning techniques used for searching and analysing large volumes of data;
- robotics dealing with the conception, design, manufacture and operation of programmable machines;
- algorithms and automated decision making systems (ADMS) able to predict both human and machine behaviour, and thus make autonomous decisions.
AI technologies can be extremely beneficial from an economic and social point of view and are already being used in areas such as healthcare (for instance, to find effective treatments for cancer) and transport (for instance, to predict traffic conditions and guide autonomous vehicles), and to efficiently manage energy and water consumption. AI increasingly affects our daily lives, and its potential range of applications is so broad that it is sometimes referred to as the fourth industrial revolution (https://www.weforum.org/platforms/shaping-the-future-of-technology-governance-artificial-intelligence-and-machine-learning).
However, while most studies concur that AI brings many benefits, they also highlight a number of ethical, legal, and economic concerns relating primarily to the risks AI poses for human rights and fundamental freedoms. While the potential for algorithmic bias in AI has been well-documented, especially regarding its application in criminal justice situations, AI also poses issues regarding the right to privacy and/or data protection. There are also some concerns about the impact of AI technologies and robotics on the labour market (e.g. jobs being lost to automation). Furthermore, there are calls to assess the impact of algorithms and automated decision making systems (ADMS) in the context of defective products (safety and liability), digital currency (blockchain), disinformation-spreading (fake news) and the potential military application of algorithms (autonomous weapons systems and cybersecurity). Finally, the question of how to develop ethical principles in algorithms and AI design has also been raised.
What are the current EU Guidelines?
The EU's Guidelines put forward the following seven key requirements that AI systems should meet in order to be deemed trustworthy, together with a specific assessment list that aims to help verify the application of each of the key requirements.
- Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.
- Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fall-back plan in case something goes wrong, as well as being accurate, reliable and reproducible. Only in this way can unintentional harm be minimised and prevented.
- Privacy and data governance: besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.
- Transparency: the data, system and AI business models should be transparent. Traceability mechanisms can help achieve this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the system's capabilities and limitations.
- Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life cycle.
- Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account other environmental factors, including other living beings, while their social and societal impact should be carefully considered.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes, plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.
To move forward, AI must earn trust.
It is important to build AI systems that are worthy of trust. Human beings will only be able to confidently reap the benefits of AI when the processes and people behind it are trustworthy.
Trustworthy AI should have three components:
- It should be lawful, ensuring compliance with all applicable laws and regulations.
- It should be ethical, ensuring adherence to ethical principles and values.
- It should be robust, both from a technical and social perspective to ensure that, even with good intentions, AI systems do not cause any unintentional harm.
Each component is necessary – but not sufficient by itself – to achieve Trustworthy AI.
Europe has a unique vantage point given its citizen-centric approach to legislation. This approach is written into the very DNA of the European Union through the Treaties upon which it is built. The current guidelines form part of a vision that promotes Trustworthy AI, which we believe should be the foundation upon which Europe can build leadership in innovative, cutting-edge AI systems.
This ambitious vision should encourage improvement in quality of life for European citizens, both individually and collectively.
This text is based upon two texts from the European Parliament.