All publications should have at least one open access link. Get in touch if you have any problems accessing any of the materials.

2021

Little attention has been paid to the GDPR's Article 22 in light of decision-making processes with multiple stages, potentially both manual and automated. We diagrammatically identify and analyse five main complications: the potential for selective automation on subsets of data subjects despite generally adequate human input; the ambiguity around where to locate the decision itself; whether ‘significance’ should be interpreted in terms of any potential effects or only selectively in terms of realised effects; the potential for upstream automation processes to foreclose downstream outcomes despite human input; and that a focus on the final step may distract from the status and importance of upstream processes.

We present an overview of the proposed EU AI Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades. We find that some provisions of the draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals; overarching aspects, including the enforcement regime and the effect of maximum harmonisation on the space for AI policy more generally, also raise concerns.
Covered in POLITICO, the New York Times and Engineering and Technology; cited by the European Parliament and in consultation responses from Amnesty International, EDRi, Verbraucherzentrale Bundesverband, the European Center for Not-for-Profit Law, EPIC and Access Now.

Digital vaccination certification involves making many promises, few of which can realistically be kept. In this paper, we demonstrate how this phenomenon constitutes various forms of theatre – immunity theatre, border theatre, behavioural theatre and equality theatre – drawing on perspectives from technology regulation, migration studies and critical geopolitics. A shorter version of this paper is available as a LexAtlas blog.

Policy challenges of machine learning and sustainability share significant structural similarities, including difficult-to-observe credence properties, such as data collection characteristics or carbon emissions from model training, and value chain concerns, including core-periphery inequalities, networks of labour, and fragmented and modular value creation. We apply research on certification systems in sustainability, particularly of commodities, to generate lessons across both areas, informing emerging proposals such as the EU’s AI Act.

This paper analyses the extent to which practices of real-time bidding (RTB) are compatible with the requirements regarding (i) a legal basis for processing, (ii) transparency, and (iii) security in European data protection law. We conclude that, in concept and in practice, RTB is structurally difficult to reconcile with European data protection law.
Covered in the Guardian and by the Norwegian Consumer Council.

There is growing evidence that SARS-CoV-2 can be transmitted beyond close proximity contacts, in particular in closed and crowded environments with insufficient ventilation. To help mitigation efforts, contact tracers need a way to notify those who were present in such environments at the same time as infected individuals. Neither traditional human-based contact tracing powered by handwritten or electronic lists, nor Bluetooth-enabled proximity tracing can handle this problem efficiently. In this paper, we propose CrowdNotifier, a protocol that can complement manual contact tracing by efficiently notifying visitors of venues and events with SARS-CoV-2-positive attendees. We prove that CrowdNotifier provides strong privacy and abuse-resistance, and show that it can scale to handle notification at a national scale. This protocol has since been adapted and deployed by national responses, including Germany's Corona-Warn-App. A toy sketch of the local-matching idea is included below.
Covered in Netzpolitik, Die Zeit, Le Soir.
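For readers curious about the general shape of presence notification, here is a toy sketch in Python of the local-matching idea: phones keep per-visit venue identifiers locally and compare them against identifiers published for affected venue/time slots. This is an illustration under simplified assumptions, not the CrowdNotifier protocol itself, which uses considerably stronger cryptography to resist tracking and abuse; all names and values here are hypothetical.

```python
# Toy sketch of decentralised presence notification: visitors store per-slot venue
# identifiers locally; health authorities later publish identifiers for at-risk
# venue/time slots, and each phone matches against its own visit log.
# Simplified illustration only, NOT the CrowdNotifier protocol.
import hashlib
import secrets
from dataclasses import dataclass

@dataclass(frozen=True)
class VisitRecord:
    venue_tag: bytes   # hash of the venue's per-slot QR payload
    time_slot: int     # e.g. hour bucket of the visit

def venue_tag(qr_payload: bytes, time_slot: int) -> bytes:
    """Derive a per-slot tag from the QR code displayed at the venue."""
    return hashlib.sha256(qr_payload + time_slot.to_bytes(8, "big")).digest()

def record_visit(local_log: list[VisitRecord], qr_payload: bytes, time_slot: int) -> None:
    """The visitor's phone stores only local data; nothing is uploaded."""
    local_log.append(VisitRecord(venue_tag(qr_payload, time_slot), time_slot))

def check_exposure(local_log: list[VisitRecord], published: set[VisitRecord]) -> bool:
    """Match locally against identifiers published for affected venue/slots."""
    return any(visit in published for visit in local_log)

# Example: one venue, one infectious visit overlapping with ours.
qr = secrets.token_bytes(32)           # payload printed in the venue's QR code
my_log: list[VisitRecord] = []
record_visit(my_log, qr, time_slot=453_001)

# Contact tracers, with the venue's cooperation, publish the affected slots.
published_slots = {VisitRecord(venue_tag(qr, 453_001), 453_001)}
print(check_exposure(my_log, published_slots))  # True -> notify the user locally
```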

The Lex-Atlas: Covid-19 (LAC19) project provides a scholarly report and analysis of national legal responses to Covid-19 around the world. Nearly 200 jurists participate in the LAC19 network and have contributed to writing national country reports. The Oxford Compendium of National Legal Responses to Covid-19 launched on 21 April 2021 with 19 Country and Territory Reports, and a further 41 will be added on a rolling basis across the Spring and Summer of 2021. More information is available on the LexAtlas website.

This chapter elaborates on challenges and emerging best practices for state regulation of electoral disinformation throughout the electoral cycle. It is based on research for three studies during 2018–20: one into election cybersecurity for the Commonwealth; one on the use of Artificial Intelligence (AI) to regulate disinformation for the European Parliament; and one for UNESCO, the United Nations body responsible for education.

2020

An introduction to the possibilities, limitations and considerations of using data protection transparency provisions as a research method.

A short chapter on the interaction of platforms, technology and contact tracing systems.
Read more: op-ed in the Guardian.

The DP-3T Bluetooth proximity tracing protocol white paper, supporting contact tracing efforts during COVID-19. See more on GitHub.
See the Wikipedia page for more information and impact.

A guide to the concept of cybersecurity over time, from multiple disciplinary angles.

A book resulting from a study of cybersecurity in an electoral context undertaken for the Commonwealth. Contains recommendations and best practices.

A critical view of the EU HLEG-AI's recent guidelines, highlighting, among other issues, the lack of focus on power and infrastructure, the pervasive technosolutionism, the problematic representativeness of the group, and the reluctance to discuss the funding of regulators.

A guide to data rights, recent case law, challenges and trajectories to feed into the European Data Protection Board's drafting process for their data rights guidance.

We show, using a scrape of the top 10,000 UK websites, that only 11.8% of sites using the five big providers of consent pop-up libraries have configured them in ways minimally compliant with the GDPR and ePrivacy law. A small, illustrative sketch of detecting such providers on a page appears below.
Coverage in the BBC, TechCrunch, DR (the Danish public broadcaster), Fast Company and Les Echos; cited by the Irish Data Protection Commissioner, the American Bar Association, Mayer Brown, Orange, Stiftung Neue Verantwortung and Facebook; credited with changing regulatory guidance around cookies and consent in Denmark.
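As a rough illustration of one step such a measurement might involve, the sketch below guesses which consent management platform (CMP) a page embeds by matching its HTML against illustrative script-URL patterns. The pattern list and the approach are assumptions for the purpose of the example, not the paper's actual crawling or compliance-checking pipeline.

```python
# Minimal sketch: guess which consent management platform (CMP) a page embeds by
# matching its HTML against illustrative script-URL patterns. The pattern list is
# an assumption for illustration; the study's actual crawl and compliance checks
# (e.g. pre-ticked boxes, reject-button prominence) were far more involved.
import re
import urllib.request

CMP_PATTERNS = {              # hypothetical, illustrative patterns
    "QuantCast": r"quantcast|consensu\.org",
    "OneTrust": r"onetrust|cookielaw\.org",
    "TrustArc": r"trustarc|truste\.com",
    "Cookiebot": r"cookiebot\.com",
    "Crownpeak": r"crownpeak|evidon",
}

def detect_cmp(url: str) -> list[str]:
    """Fetch a page and return the names of any CMPs whose patterns appear."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace").lower()
    return [name for name, pattern in CMP_PATTERNS.items() if re.search(pattern, html)]

if __name__ == "__main__":
    print(detect_cmp("https://example.com"))  # likely [] for this test domain
```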

2019

This PhD thesis unpacks the provisions and framework of European data protection law in relation to social concerns and machine learning’s technical characteristics, to identify tensions between the legal regime and machine learning practice; and draws on empirical data on machine learning in use in public sector institutions around the world, to identify tensions between scholarship on fair and transparent machine learning and the social routines attempting to deploy it responsibly on the ground.

This report examines the landscape of criminal justice in England and Wales and the increasing and varied use of algorithmic systems within it, proposing an array of policy recommendations.

This chapter asks and attempts to answer aspects of three main questions: What are the drivers and logics behind the use of machine learning in the public sector, and how should we understand it in the contexts of administrations and their tasks? Is the use of machine learning in the public sector a smooth continuation of ‘e-Government’, or does it pose fundamentally different challenges to the practice of public administration? How are public management decisions and practices at different levels enacted when machine learning solutions are implemented in the public sector?

This chapter focuses on the extent to which sophisticated profiling techniques may end up undermining, rather than enhancing, our capacity for ethical agency - and how, if at all, personalisation and recommendation systems may be responsibly designed in light of this.

2018

Recent 'model inversion' attacks from the information security literature indicate that machine learning models might be personal data, as they might leak data used to train them. We analyse these attacks and discuss their legal implications.
Coverage and citation by the Information Commissioner's Office in their AI Auditing Framework, the Council of Europe, the European Parliament, the Royal Society, Chatham House, and the Future of Privacy Forum.

Where 'debiasing' approaches are appropriate, they assume modellers have access to often highly sensitive protected characteristics. We show how, using secure multi-party computation, a regulator and a modeller can build and verify a 'fair' model without ever seeing these characteristics, and can verify that decisions were taken using a given 'fair' model. A toy illustration of computing group statistics over secret-shared attributes follows below.
Coverage in the Financial Times.
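As a toy illustration of the underlying privacy idea, the sketch below computes a group-level statistic over additively secret-shared bits, so that neither party ever reconstructs an individual's protected characteristic. It is a simplified stand-in under assumptions of its own (two parties, simple additive sharing of counts), not the paper's actual secure multi-party computation protocol.

```python
# Toy illustration: compute group-level statistics over additively secret-shared
# protected attributes, so neither the modeller nor the regulator sees an
# individual's protected characteristic in the clear. Simplified stand-in only,
# NOT the paper's actual secure multi-party computation protocol.
import secrets

P = 2**61 - 1  # a large prime modulus for additive sharing

def share(value: int) -> tuple[int, int]:
    """Split a value into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (value - r) % P

def reconstruct(share_a: int, share_b: int) -> int:
    return (share_a + share_b) % P

# Each user shares two bits: "I am in the protected group" and
# "I am in the protected group AND received a positive decision".
users = [(1, 1), (1, 0), (0, 1), (0, 1), (1, 1)]  # (group_bit, positive_bit)

modeller_sums = [0, 0]   # running sums of one share each
regulator_sums = [0, 0]  # running sums of the other share each
for group_bit, positive_bit in users:
    for idx, bit in enumerate((group_bit, group_bit & positive_bit)):
        s_mod, s_reg = share(bit)
        modeller_sums[idx] = (modeller_sums[idx] + s_mod) % P
        regulator_sums[idx] = (regulator_sums[idx] + s_reg) % P

# Only aggregates are reconstructed; individual attributes stay hidden.
group_size = reconstruct(modeller_sums[0], regulator_sums[0])
group_positives = reconstruct(modeller_sums[1], regulator_sums[1])
print(f"positive rate in protected group: {group_positives}/{group_size}")  # 2/3
```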

We interviewed 27 public sector machine learning practitioners about how they cope with challenges of fairness and accountability. Their challenges often differ from those addressed in FAT/ML research so far, and include internal gaming, changing data distributions, inter-departmental communication, how to augment model outputs, and how to transmit hard-won social practices.
Coverage and use by the Australian Human Rights Commission, the Royal United Services Institute for Defence and Security Studies (RUSI), the European Parliament, the European Commission, the Boston Consulting Group, the United Nations Economic and Social Commission for Asia and the Pacific, the Alan Turing Institute/UK Office for AI, UNESCO, the German Government's Expert Council for Consumer Affairs, the Council of Europe, and the Centre for Data Ethics and Innovation.

We presented participants in the lab and online with adverse algorithmic decisions and different explanations of them. We found strong dislike of case-based explanations, where the participant was compared to a similar individual, even though these are arguably highly faithful to the way machine learning systems work.

In this workshop paper, we argue that sense-making is important not just for experts but for laypeople, and that expertise from the HCI sense-making community would be well-suited for many contemporary privacy and algorithmic responsibility challenges.

The General Data Protection Regulation has significant implications for machine learning modellers. We outline what human-computer interaction research can bring to strengthening the law and enabling better trade-offs.

Data protection law gives individuals rights, such as to access or erase data. Yet when data controllers slightly de-identify data, they remove the ability to grant these rights, without removing real re-identification risk. We look at this in legal and technological context, and suggest provisions to help navigate this trade-off between confidentiality and control.

In-store tracking, using passive and active sensors, is common. We look at this in technical context, as well as in the European legal context of the GDPR and the forthcoming ePrivacy Regulation. We consider two case studies: Amazon Go and rotating MAC addresses.
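One technical detail relevant to the rotating-MAC case study: randomised MAC addresses, as used by modern phones in Wi-Fi probe requests, set the 'locally administered' bit of their first octet, which is one simple way an observer can distinguish them from a device's fixed hardware address. The snippet below illustrates that check; it is a general illustration, not code or methodology from the paper.

```python
# Small sketch: a randomised (rotating) MAC address sets the 'locally administered'
# bit of its first octet, unlike a device's fixed, globally unique hardware address.
# General illustration only; example addresses are made up.
def is_locally_administered(mac: str) -> bool:
    """Return True if the MAC's first octet has the locally-administered bit set."""
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0b0000_0010)

print(is_locally_administered("da:a1:19:26:4f:01"))  # True  -> likely randomised
print(is_locally_administered("f0:18:98:26:4f:01"))  # False -> likely a fixed address
```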

We outline the European 'right to an explanation' debate and consider French law and the Council of Europe Convention 108. We argue there is an unmet need to empower third-party bodies with investigative powers, and elaborate on how this might be done.

We critically examine the Article 29 Working Party guidance that relates most to machine learning and algorithmic decisions, finding it has interesting consequences for automation and discrimination in European law.

2017

FAT/ML techniques for 'debiasing' machine-learned models all assume the modeller can access the sensitive data. This is unrealistic, particularly in light of stricter privacy law. We consider three ways some level of understanding of discrimination might be possible, even without collecting data such as ethnicity or sexuality.

We consider the so-called 'right to an explanation' in the GDPR in technical context, arguing that even if the right attached to Article 22 were enforced on the basis of the non-binding recital in European law, it would not be triggered by group-based harms or in the important cases of decision-support. We argue instead for the use of instruments such as data protection impact assessments and data protection by design, as well as investigating the right to erasure and right to portability of trained models, as potential avenues to explore.
Coverage and use by the Article 29 Working Party (the official group of European regulators) in their guidance on the regulation the paper itself analysed; the Information Commissioner's Office described it as "important to the development of the [regulator's] thinking" in this area. Also used by the Council of Europe [1, 2, 3, 4], the European Commission (DG JRC, DG JUST, DG COMP), the European Parliament [1, 2, 3], the UN Special Rapporteur on Extreme Poverty and Human Rights Philip Alston, the Privacy Commissioner of Hong Kong, the German Government's Expert Council for Consumer Affairs, Amnesty International, RUSI, Access Now, Privacy International, US Federal Trade Commissioner Noah Phillips, the Centre for Data Ethics and Innovation, ARTICLE 19, the Nuffield Foundation, the New Economics Foundation, and the Government of New Zealand; discussed in the House of Lords (13/11/2017 vol 785 col 1862–4 & 13/12/2017 vol 787 col 1575–7), with amendments based upon it; profiled in the Journal of Things We Like (Lots); and awarded a Privacy Papers for Policymakers prize by the Future of Privacy Forum at the US Senate in 2019.

I authored the case studies for the Royal Society and British Academy report which led to the UK Government's new Centre for Data Ethics and Innovation. I also acted as drafting author on the main report.

This is a preliminary version of the 'Fairness and accountability design needs' CHI'18 paper above.

We consider the detection of offensive and hateful speech, looking at a dataset of 1 million annotated comments. Taking gender as an illustrative split (without making any generalisable claims), we illustrate how the labellers' conception of toxicity matters in the trained models downstream, and how bias in these systems will likely be very tricky to understand.

2016

A conference paper on public sector values in machine learning, and public sector procurement in practice.

2015

Using a case study from the sugar-cane sector, this paper argues that performance-based sustainability standards have significant benefits over technology-based standards, and suggests directions in which this can be explored.