We live in the age of information. Information is accessible, fast and empowering. By automatically processing and analysing large amounts of information, the governance of modern society has become more efficient and reliable. The extent to which novel forms of information processing are relied on is changing the very nature of the act of decision-making. This contribution defines such decision-making as ‘semi-automated’ because of the decisional value of automatically generated information sources. This article (see the full version here) starts with a brief background to the European initiatives introducing automation into information processing, before outlining the impact of such initiatives on the decision-making conduct and the take-aways for the Union legal order at large.
Information at the heart of EU multilevel decision-making and the determination to automatise
The existing European legal scholarship details the complexities of EU multilevel decision-making arising from its increasingly integrated (composite) nature. The integrated nature of decision-making entails interdependence between the actions required from the authorities of different jurisdictions when adopting a decision under EU law. For example, when one Member State issues an entry ban concerning a third-country national, the other EU Member States must recognise and enforce it on their own territories. Existing entry bans in the Schengen area are communicated as alerts through large-scale information systems, such as the Schengen Information System. The authority that decides on an application for a residence permit often acts based on a hit against a stored alert. The hit is generated through automated processing of the information stored in the system. As such, the automatically generated output in the form of a hit instructs the retrieving authority on how to decide concerning the applicant.
The speed with which the EU employs automation to enhance such forms of decision-making raises important concerns among the wider European community (see e.g. here and here). Concretely, the unprecedented expansion and transformation of informational cooperation in the Area of Freedom, Security and Justice (AFSJ) has been among the most alarming examples of technological progress outpacing the law. However, such efforts to exploit technological benefits are manifest across broader policy areas (see the EU artificial intelligence plans). Generally, policy-makers pledge to temper technological developments with human-centred safeguards guaranteeing, inter alia, respect for the fundamental values of a society founded on the Rule of Law. Among the values most at risk are not only data protection and privacy, but also, more widely, the constitutional promises of effective judicial protection of fundamental rights and of good administration. With respect to those values, the ambition to enhance and speed up multilevel decision-making takes a toll on the very act of deciding, rendering it an ever more ‘automatised’ behaviour.
Increasingly, novel forms of automated searches are being employed in large-scale informational cooperation—including, for instance, algorithmic matching of sensitive biometric data. The more advanced such forms of automatic generation of the informational basis for subsequent decisions become, the more difficult it is for the decision-maker to verify their correctness. Furthermore, the underlying informational cooperation in the EU operates on the basis of mutual trust and mutual recognition. The increasing employment of automation in European informational cooperation built on these principles further ‘automatises’ the corresponding decision-making. However, the ‘medium-level’ automation that so far underlies most European informational cooperation, in so far as it employs pre-programmed algorithms, means that a human agent remains responsible for any action taken based on the automatically generated informational output. As a result, such decision-making could best be characterised as ‘semi-automated’ conduct. This label avoids the misleading insinuation that such decision-making is based solely on automated processing, which would engage the prohibition under Article 22(1) GDPR. In the latter context, the minimum safeguards arising from Article 22 GDPR for automated processing of personal data, especially the guarantee of ‘at least the right to a human intervention’, are nonetheless also a subject of much concern. Indeed, asserting that human intervention guarantees a meaningful exercise of decision-making discretion could be misleading.
Effects of automation on the decision-making conduct
Evidently, not all automation is equally transformative. Hence, automation is not a ‘unitary concept’. Instead, automation comprises a spectrum of technological applications ranging from weaker forms of computation to those approaching human levels of intelligence. Currently, informational cooperation in the AFSJ mostly entails what could be called ‘medium-level’ automation that relies on pre-programmed algorithms (meaning ‘rules followed by a computer, as programmed by humans, which translate input data into outputs’), rather than on self-learning capabilities of the system. For instance, the Interoperability Framework for the AFSJ relies on previously existing means of automation to connect the existing information systems, albeit on an unprecedented scale and with important risks to individual data protection. At the same time, the plans to update the functional capacities of the individual systems include introducing more advanced forms of automation, especially for searches based on sensitive biometric data. Yet even this type of automation, thus far, differs from applications embodying self-learning capacities, known as artificial intelligence (AI) (see also here). The aspirations concerning the systems’ capacities in this respect, however, go far beyond the technological progress achieved thus far towards AI applications. Despite the differences in the degree of automation employed, behind all automated decision-making ‘support systems’ there is an array of actors involved in their design and implementation, including private parties. As Mittelstadt and others explain, the resulting cooperation thus embodies an accountability ‘gap between the designer’s control and algorithm’s behaviour (…) wherein blame can potentially be assigned to several moral agents simultaneously’.
It is hence of utmost importance for modern society to better reflect on and acknowledge the interactions between human decision-makers and the technology that supports the very act of public decision-making.
Automation has several effects on the value of information and on the decision-making conduct. Concretely, automation alters the value of information from a ‘means of assistance’ to a powerful ‘decisional’ asset. In this respect, at least two interrelated decisional effects of automation are particularly worth highlighting: output obsession (or automation bias) and algorithmic opacity. The two effects are transformative as they undermine (or even prevent) a proper exercise of human decision-making agency. First, output obsession, also known as the phenomenon of automation bias, refers to the authorities’ tendency to trust a computationally obtained output as correct, objective and reliable. Because of this tendency, a human agent endowed with decision-making responsibility is less likely to question the acquired output, and thus less likely to effectively verify the correctness of the automatically retrieved or generated informational source for the decision. Other reasons for not effectively questioning automatically acquired information include the need for timely action from the responsible agent and, as mentioned above, reliance on mutual trust and mutual recognition. Indeed, the key benefit of automation is enhancing efficiency in complex decision-making by minimising the bureaucratic workload. Second, closely related to the tendency to trust computational output is the fact that the automated ‘advice’ is largely beyond the understanding capacities of the human decision-maker. This is a consequence, inter alia, of algorithmic opacity, i.e., the limited explainability of the automated output. Algorithmic opacity not only undermines transparency but also effectively removes the ability of the decision-maker to exercise their decision-making discretion.
The two decisional effects of automation on the informational output consequently also alter the nature of the decision-making. Output biases and algorithmic opacity make it more difficult for the responsible agent to understand, consult, or otherwise verify the output, i.e., to meaningfully exercise their decision-making discretion. Such ‘rubber-stamping’ of ‘advice’ is not uncommon in other types of complex public decision-making that rely on technical or scientific ‘advice’, such as risk regulation or banking supervision. There, the decision-making authority holding the final discretion may be unable or unwilling to verify, technically or scientifically, and potentially depart from the solutions proposed by an expert agency. However, the concern over meaningful human intervention where the ‘advice’ consists of computerised or automatised information differs from reliance on ‘expert or scientific advice’. The latter is subject to concrete procedural requirements, ranging from the occupational qualifications of the experts to the duty of care and the related reasoning obligations, compliance with which is, albeit to a narrow extent, subject to effective review. In semi-automated decision-making, by contrast, the effects of automation bias, coupled with algorithmic opacity, render the agent almost unable to contradict the algorithmic output. The ‘human in the loop’ safeguard should thus be ensured through a set of similar procedural guarantees, ones that reflect the ‘decisional’ value of information in light of the effects of automation on the decision-making agency.
The justiciability of public conduct depends on the definition of a reviewable act. The notion essentially distinguishes ‘acts’ producing legal effects (i.e. binding acts) from non-binding acts. Concretely, the distinction is between legal acts that are capable of changing the legal position of an individual and physical or purely ‘factual acts’, which merely produce ‘some change in the physical world’ without directly or indirectly altering the original ‘legal relation’ consisting of rights and/or duties (see here). In composite decision-making that relies on automated processing of personal information, the seemingly consecutive ‘acts’ of information processing and decision-taking sit uneasily within the legal/factual dichotomy. A quasi-autonomous output, generated through automated processing of information, embodies a distinct form of ‘factual conduct’ which, on the face of it, produces only secondary legal effects (in determining the action to be taken concerning an individual), but which increasingly alters the legal position of the individual. Independently of the decision-making that follows the processing of information, the European Data Protection Framework also recognises the primary legal effects arising from the data processing conduct. This is evident from the fact that individuals whose personal data are processed for the purposes of decision-making enjoy substantive rights as data subjects (Chapter III of the GDPR). These rights can be directly enforced before the competent supervisory authorities, including courts. What remains troubling, from the perspective of the justiciability of semi-automated decision-making that relies on information of ‘decisional’ importance, is acknowledging, in the terminology of Türk and Xanthoulis, the primary indirect legal effects of the processing on the decision-taking that follows the retrieval of an automated informational output.
To that end, the ‘automated factual conduct’ which underpins semi-automated decision-making requires a distinct appreciation of its potential legal, i.e., ‘decisional’, effects.
Take-aways for the Union legal order at large
Failing to acknowledge the effects of automation on the value of information for the decision-making conduct puts individuals in a particularly vulnerable legal position. Such acknowledgement is especially needed with respect to the legal effects of the preliminary conduct in multilevel European decision-making that takes the form of automated information processing. Currently, the European standard in this respect holds that only the final decision based on such preliminary conduct produces legal effects vis-à-vis the concerned individual. Jurisprudence concerning the existing safeguards in the context of automated processing of personal information is still in the early phases of development. As plaintiffs, their legal counsel, the competent public authorities, and the supervisory authorities rapidly come to experience the novel challenges arising from automated processing of information, it is necessary to revisit the systemic legal barriers that persist within the EU.
Posted by Simona Demková
Simona Demková is a postdoctoral researcher at the University of Luxembourg within the framework of the DILLAN project (Digitalisation, Law and Innovation). In October 2021, Simona completed her PhD Thesis entitled ‘Effective Review in the Age of Information: The Case-study of Semi-automated Decision-making based on Schengen Information System’ at the University of Luxembourg under the supervision of Prof. Herwig Hofmann.
Simona holds a master’s degree in International Relations from the Central European University in Hungary and an Advanced LL.M. in European and international human rights law from Leiden University in the Netherlands.
Suggested citation: S Demková, “The Decisional Value of Information in (Semi-)automated Decision-making”, REALaw.blog, available at https://realaw.blog/2021/11/01/682/