Supervision of Artificial Intelligence in the EU and the Protection of Privacy, by J Chamberlain and J Reichel

After decades of technological advancements, artificial intelligence (AI) is now causing a flurry of movement in the legal domain. Lately, strong warnings from developers in the AI industry seem to have been accompanied by increasing societal concern about the potential negative effects of unregulated AI on humans. As part of its European Digital Strategy, the EU has put forward a number of initiatives addressing AI. The most important of these is the regulation referred to as the Artificial Intelligence Act (“the AI Act”), proposed by the Commission in April 2021 and still under negotiation. If passed, it will be the first comprehensive legal instrument regulating AI at the global level. Following the regulatory success of the GDPR, the EU could once again take the lead in developing regulatory regimes that create both fair competition and foreseeability for market actors and protection of the rights and interests of individuals, guaranteed by effective cross-border enforcement machinery. Will the AI Act become the next example of the Brussels effect, as Anu Bradford has labeled it, and would this be a good idea?

In an article in FIU Law Review, vol. 17, no. 2, “Supervision of Artificial Intelligence in the EU and the Protection of Privacy,” we examined the risk-based approach of the proposed AI Act in relation to the composite supervisory structure suggested for AI systems. The latter was analyzed in light of general EU developments in the constantly expanding area of administrative supervision. We also discussed the AI Act’s approach to privacy and data protection – two fundamental rights that, at least at first glance, are not easy to reconcile with the mass data handling required by AI and the monitoring effects of AI systems. As is always the case when describing a moving target, the proposal has been updated since the article was submitted in October 2022, most importantly with the Council Common Position of December 2022 and the European Parliament amendments of June 2023. However, the main observations remain valid.

As is now well known, the proposed AI Act divides AI systems into four risk levels, creating a “risk pyramid” whose focus is on so-called high-risk systems. Most of the articles in the AI Act target these systems, including the sections regarding supervision. According to the Explanatory Memorandum of the AI Act, high-risk systems “create a high risk to the health and safety or fundamental rights of natural persons.” Although the Explanatory Memorandum makes no direct reference to specific fundamental rights in connection with high-risk systems, it does not seem far-fetched to assume that the relevant rights include privacy and data protection – codified in Articles 7 and 8 of the Charter of Fundamental Rights of the European Union. In the European Parliament amendments, the fundamental rights aspect has generally been emphasized, and a number of references specifically to privacy and data protection have been added.

While the fundamental rights perspective is thus not as prominent in the Commission Proposal, both privacy and data protection are repeatedly described as areas at risk. Privacy is mentioned specifically in connection with employment (regarding AI monitoring of employees) and the health sector (access to health data for AI training should be designed in a “privacy-preserving” way). The term “data protection” appears more frequently than “privacy” in the proposed AI Act – primarily in the context of existing regulations on data protection. AI systems are trained on massive amounts of data, including personal data. A structural challenge in drafting the regulation on AI is thus to provide some clarity on how the legal frameworks concerning data protection and AI are to interact. According to the Explanatory Memorandum of the AI Act, the proposal “complements” the GDPR – but many questions remain as to how this will work in practice. One concrete example of data protection in the AI context concerns the top category of the risk pyramid, i.e., prohibited AI systems of “unacceptable risk”: the Explanatory Memorandum states that the manipulative or exploitative practices falling under this category may already be covered by existing data protection regulation.

Another parallel to the GDPR is the composite supervisory structure of the proposed AI Act, which contains many references to national data protection authorities (DPAs), the European Data Protection Board (EDPB), and the European Data Protection Supervisor (EDPS). The AI Act introduces a new sector of EU governance, including an EU agency – the European Artificial Intelligence Board (or Office, in the European Parliament amendments) – while the EDPS is to act as the competent supervisory authority for Union institutions, agencies, and bodies. At the national level, the AI Act introduces an elaborate administrative structure encompassing implementation, supervision, and mechanisms for standardization.

The supervisory structure of the proposal combines ex-ante and ex-post controls for high-risk AI, involving both private and public actors. The main ex-ante controls are performed by notified bodies, private or public, appointed by a national notifying authority. Their main task is to verify the conformity of high-risk AI systems with the assessment procedures laid down in the Act. Ex-post supervision is conducted within a three-level infrastructure, with requirements on human oversight – an internal control function within the service providers – as well as national and European authorities. In contrast to the GDPR, which introduced a single category of national competent bodies, the DPAs, the Commission Proposal identifies three forms of national competent authorities: the notifying authorities, one supervisory authority within each member state, and additional market surveillance authorities. In the Council Common Position, the supervisory authorities were discarded. The market surveillance authorities are appointed from amongst pre-existing sector-specific authorities, including those operating under the EU Market Surveillance Regulation and the Data Protection Directive for Police and Criminal Justice Authorities (Article 63(1) and (5)). Since each member state usually has different market surveillance authorities appointed for different sectors of the market, the number of AI market surveillance authorities will be high. These authorities may use the powers bestowed on them under the relevant sector-specific law to monitor high-risk AI systems, but the proposed AI Act also provides for specific procedures – for example, placing requirements on AI system providers, even where their AI systems comply with the rules of the Act, in order to safeguard the health and safety of persons, fundamental rights, and the protection of the public interest (Article 67).

The proposed AI Act requires high-risk AI service providers to present vast amounts of information throughout the lifecycle of each service, such as keeping technical documentation, records, or logs of the service operations, informing of risks, and reporting serious incidents and malfunctioning (Articles 11–12, 18–20, 22–24, 50, 62). This, among other information, may be used by the human oversight function or relevant authorities in their supervision (Articles 14, 31, 43, and 64). Some of the information will be collected in an EU database for stand-alone high-risk AI systems (Article 60). The Council has in its Common Position proposed that AI-specific cross-border investigations could be carried out with the assistance of the European Artificial Intelligence Board (Article 58). Article 70 stresses that all involved authorities must respect the confidentiality of information and data obtained, but at the same time declares that the confidentiality requirements should not affect the right of the involved public and private entities to exchange information and disseminate warnings. It can be foreseen that the information exchange will be extensive.

The competent authorities are to use all corrective and restrictive measures available under sector-specific market surveillance law to ensure compliance, and may further require a provider to put an end to non-compliance under the proposed AI Act (Articles 65 and 67). Just as under the GDPR, the supervisory authorities are mandated to impose penalties and fines (Article 71). These can amount to a maximum of 30,000,000 euros or, for companies, up to 6 percent of total worldwide annual turnover, whichever is higher. Lastly, the European Parliament added a more precise standardization regime to the AI Act (Article 40), giving market actors an important platform to influence the specifics of the regulatory framework.
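As a rough illustration of how the fine ceiling mentioned above operates – a minimal sketch in Python, where the function name is ours and the “whichever is higher” rule follows Article 71(3) of the Commission Proposal:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Illustrative upper bound of an administrative fine under Article 71(3)
    of the proposed AI Act: EUR 30 million or 6% of total worldwide annual
    turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)

# For a provider with EUR 1 billion in annual turnover, 6% is EUR 60 million,
# so the turnover-based ceiling applies rather than the fixed amount.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```

For large providers, the turnover-based ceiling will thus normally exceed the fixed amount, which is what gives the penalty regime its deterrent weight against big tech.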

With the proposed AI Act, the EU takes a big and (for some actors) controversial step towards controlling AI systems. Some will argue that the EU is trying to “win the race” to set a global norm for regulating AI. All else aside, the fact is that the EU is taking responsibility for a development that has been left to the technology sector for long enough. This must be seen as positive, considering the significant impact of AI on human beings and the lack of regulation thus far.

Even with this starting point, there is reason to follow the evolving regulation with a critical eye. The central narrative of controlling “AI risks” can be questioned, as these risks – along with their effects, should they materialize – are still largely unknown. This means that we may be looking at “AI uncertainties” rather than risks. Uncertainties, however, are difficult to approach from a legal angle, which is a probable explanation for the EU legislator’s chosen terminology. A question that might be posed is the following: what happens to the rights paradigm that has long dominated EU regulation when the risk narrative takes over? Is there a reason why the Commission Proposal focuses on risks instead of starting from the fundamental rights at risk in AI development? These rights are mentioned as a motivation to legislate, but it is not very clear which rights are at stake and how they are threatened.

The fuzziness regarding protected interests in the AI Act also becomes problematic in the context of supervision. The suggested supervisory structure is more comprehensive than in earlier EU regulations, involving an unprecedented number of European and national authorities with vast competences under a combination of sector-specific and common regulatory frameworks. The composite administration for AI must thus be described as opaque, and the black box analogy often used for AI itself seems disturbingly fitting. In this respect, it is worrying that the proposed AI Act repeatedly underlines the importance of the persons involved in the supervisory regime being sufficiently knowledgeable. Who can ensure that the persons involved in the human oversight mechanism “fully understand the capacities and limitations of the high-risk AI system”? How can the national competent authorities ensure that they have “a sufficient number of personnel permanently available whose competences and expertise shall include an in-depth understanding of artificial intelligence technologies, data and data computing, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements”? Where are these super-officials to be found? And why have they not already been hired by big tech?

AI is a policy area under intense development, with technological advancements going well beyond most people’s understanding and with some of the world’s biggest companies as targets of the regulation. It is therefore an intrinsically difficult area to regulate. Given the uncertainty concerning what risks are to be expected, and what effects they may have on fundamental rights, this black box AI administration can be expected to face major challenges. The question may be raised whether too much trust is placed in a composite supervisory structure consisting of private actors and European and national authorities cooperating under the vaguest of constitutional frameworks, with diverse administrative cultures and very few common judicial or democratic accountability mechanisms. Still, as things stand, this seems to be the most promising regulatory framework for AI in the world today.

Posted by Johanna Chamberlain (Postdoctoral researcher in Commercial Law, Uppsala University) & Jane Reichel (Professor of Administrative Law, Stockholm University)

Suggested citation: J Chamberlain and J Reichel, “Supervision of Artificial Intelligence in the EU and the Protection of Privacy”, REALaw.blog, https://wp.me/pcQ0x2-Jc

Editor’s note: This piece has been revised on 27th September to provide the references to the forthcoming article and on 12th December to provide the link to the published article.