Reuben Binns and Michael Veale, Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR, 11 Int'l Data Privacy L. 319 (2021).

In their brief and astute article Is That Your Final Decision? Multi-stage Profiling, Selective Effects, and Article 22 of the GDPR, Reuben Binns and Michael Veale discuss the thorny issues raised by the EU GDPR’s prohibition of impactful automated decisions. The seemingly Delphic article 22.1 of the GDPR provides data subjects with a right not to be subject to solely automated decisions with legal effect or similarly significant effect. As the authors indicate, similar default prohibitions (of algorithmic decision-making) can be found in many other jurisdictions, raising similar concerns. The article’s relevance for data protection law lies mainly in its incisive discussion of how multi-level decision-making fares under such prohibitions and which ambiguities undermine the law’s effectiveness. The authors convincingly argue that there is a disconnect between the potential impact of ‘upstream’ automation on fundamental rights and freedoms and the scope of article 22. While doing so, they lay the groundwork for a more future-proof legal framework regarding automated decision-making and decision-support.

The European Data Protection Board (EDPB), which advises on the interpretation of the GDPR, has determined that the ‘right not to be subject to’ impactful automated decisions must be understood as a default prohibition that does not depend on data subjects invoking their right. Data controllers (those who determine the purpose and means of the processing of personal data) must abide by the prohibition unless one of three exceptions applies. These concern (1) the necessity to engage such decision-making for ‘entering into, or performance of, a contract between the data subject and a data controller’, (2) authorization by ‘Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests’ or (3) ‘explicit consent’ for the relevant automated decision-making.

Binns and Veale remind us that irrespective of whether automated decisions fall within the scope of article 22, insofar as they entail the processing of personal data, the GDPR’s data protection principles, transparency obligations and the requirement of a legal basis will apply. However, automated decisions are often made based on patterns or profiles that do not constitute personal data, precisely because they are meant to apply to a number of individuals who share certain (often behavioral) characteristics. Article 22 seeks to address the gap between data protection and the application of non-personal profiles, both where such profiles have been mined from other people’s personal data and where they are applied to individuals singled out because they ‘fit’ a statistical pattern that in itself is not personal data.

Once a decision is qualified as an article 22 decision, a series of dedicated safeguards is put in place, demanding human intervention, some form of explanation, and an even more stringent prohibition on decisions based on article 9 “sensitive” data (‘revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation’).

The authors are interested in the salient question of how different layers of automation create a disconnect between, on the one hand, the impact on the fundamental rights and freedoms of those targeted and, on the other hand, the protection offered by article 22. For instance, algorithmically inferred dynamic pricing (or ‘willingness to pay’) may be used to inform human decisions on insurance, housing, credit and recruitment. However, it escapes the GDPR’s protection against automated decisions because humans make the final decision. Considering ‘automation bias’, the presorting that takes place in the largely invisible backend systems may deprive those targeted of the kind of human judgement and effective contestability that article 22 calls for. (See recently Margot E. Kaminski & Jennifer M. Urban, The Right to Contest AI.) The ensuing gap in legal protection is key to the Schufa case now pending before the Court of Justice of the European Union, which raises the question of whether a credit risk score generated by the scoring algorithm of a credit information agency and used by an insurance company in itself qualifies as an automated decision (Case C-634/21).

The authors distinguish five ‘distinct (although in practice, likely interrelated)’ challenges and complications for the scope of article 22. The first (1) is that adding human input at the level of all data subjects, which affects whether article 22 applies, can still leave a subset of data subjects unprotected by that human input. The second (2) is the GDPR’s lack of clarity on ‘where to locate the decision itself.’ The third challenge (3) is whether the prohibition concerns potential or only ‘realised’ impact. The fourth (4) is the likelihood that largely invisible automated backend systems have a major impact irrespective of the human input that is available on the frontend. And the fifth (5) and perhaps most significant challenge is the GDPR’s focus on only the final decision in a chain of relevant decisions, which ignores the impact of prior automated decisions on the choice architecture of those making the final decision. This is the “multi-stage” profiling the authors reference in their title.

The abstruse wording of article 22, probably due to compromises made during the legislative process, may inadvertently reduce or obliterate what the European Court of Human Rights would call the ‘practical and effective’ protection that article 22 nevertheless aims to provide. The merit of the points made by Binns and Veale is their resolute escape from the usual distractions that turn discussions of article 22 into a rabbit hole of fruitless speculation: for instance, whether there is a right to explanation, what this could mean in the case of opaque algorithmic decision-making, and whether explanations are due before decisions are made or only after. As they explain, all this will depend on the circumstances and should be decided in light of the kind of protection the GDPR aims to provide (notably enhancing both control over one’s personal data and accountability of data controllers).

Binns and Veale’s precise and incisive assessment of the complexities of upstream automation and its potential impact on those targeted should be taken into account by the upcoming legislative frameworks for AI and by courts and regulators deciding relevant cases. In the US we can think of the Federal Trade Commission’s mandate and the National Artificial Intelligence Initiative Act of 2020. Binns and Veale remind us of the gaps in practical and effective legal protection that will arise if AI legislation restricts itself to the behavior of data-driven systems instead of also incorporating decisions of deterministic decision-support systems, which will be the case if AI is defined such that the latter systems fall outside the scope of AI legislation. Both Veale and Binns are prolific writers; anyone interested in the underlying rationale of EU data protection law and the relevant technical background should keep a keen eye on their output.

Cite as: Mireille Hildebrandt, The Disconnect Between ‘Upstream’ Automation and Legal Protection Against Automated Decision Making, JOTWELL (April 7, 2022) (reviewing Reuben Binns and Michael Veale, Is that your final decision? Multi-stage profiling, selective effects, and Article 22 of the GDPR, 11 Int'l Data Privacy L. 319 (2021)), https://cyber.jotwell.com/the-disconnect-between-upstream-automation-and-legal-protection-against-automated-decision-making/.