All publications should have at least one open access link. Get in touch if you have any problems accessing any of the materials.
2024
- Terzis P., Veale M., Gaumann N. (2024) Law and the Emerging Political Economy of Algorithmic Audits Proceedings of the 2024 ACM Conference on Fairness, Accountability and Transparency (FAccT '24)
- Gorwa R., Veale M. (2024) Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries 16(2) Law, Innovation and Technology doi:10/k5kf
We examine case studies of how Hugging Face, GitHub and Civitai moderate uploaded AI models, show significant tensions in successfully moderating that space, and suggest more sustainable ways forward.
- Micklitz H.-W., Helberger N., Kas B., Namysłowska M., Naudts L., Rott P., Sax M., Veale M. (2024) Towards Digital Fairness 13 Journal of European Consumer and Market Law 24
- Helberger N., Kas B., Micklitz H.-W., Namysłowska M., Naudts L., Rott P., Sax M., Veale M. Digital Fairness for Consumers (BEUC, The European Consumer Organisation, 2024)
2023
- Veale M. (2023, preprint) Privacy, Informational Infrastructures and Covid-19: Comparative Legal Responses forthcoming in Jeff King and Octavio Ferraz (eds.) Comparing Covid Laws: A Critical Global Survey (OUP 2024).
- Veale M. (2023) Confidentiality Washing in Online Advertising in Corinne Cath-Speth (ed.) Eaten by the Internet (Meatspace Press) doi:10.31235/osf.io/53ays
In this chapter, I consider the rise of confidentiality-preserving techniques and infrastructures in online advertising, and how they might reconfigure power and maintain some of the hazards of the contemporary tracking ecosystem.
- Gaumann N., Veale M. (2023) AI Providers as Criminal Essay Mills? Large Language Models meet Contract Cheating Law (UCL Faculty of Laws) doi:10.31235/osf.io/cpbfd
Many jurisdictions have passed very broadly drafted laws to tackle academic integrity issues, criminalising the provision or advertising of contract cheating or essay mills, such as the Skills and Post-16 Education Act 2022 in England and Wales. Recently, AI models such as ChatGPT have amplified academic concerns. Here, we look at the intersection between these phenomena. We review academic cheating laws, showing that several may apply even to general purpose AI services like ChatGPT, even without knowledge or intent. We identify a range of illegal adverts for AI-enhanced essay mills, and illustrate how difficult it is to draw the line between writing an essay and supporting the writing of one, such as by generating bona fide references. We also outline the consequences for intermediaries hosting these ads or providing these services, which may be significantly affected by these primarily symbolic laws. We conclude with a series of recommendations for policymakers, legislators, and education providers.
- Veale M. (2023) Verification Theatre at Borders and in Pockets Forthcoming in Colleen M. Flood, Y.Y. Brandon Chen, Raywat Deonandan, Sam Halabi, and Sophie Thériault (eds.) Pandemics, Public Health, and the Regulation of Borders: Lessons from COVID-19 (Routledge, forthcoming)
The COVID-19 pandemic saw the creation of a wide array of digital infrastructures, underpinning both digital and paper systems, for proving attributes such as vaccination, test results or recovery. These systems were hotly debated. Yet this debate often failed to connect their social, technical and legal aspects, focussing on one area to the exclusion of the others. In this paper, I seek to bring them together. I argue that fraud-free “vaccination certificate” systems were a technical and social pipe-dream, but one that was primarily advantageous to organisations wishing to establish and own infrastructure for future ambitions as verification platforms. Furthermore, attempts to include features ostensibly reducing fraud had, and risk causing further, broad knock-on effects on local digital infrastructures around the world, particularly in countries with low IT capacity, easily captured by large firms and de facto excluded from and by global standardisation processes. The paper further reflects on the role of privacy in these debates, and how privacy, and more specifically confidentiality, was misconstrued as a main design aim of these systems, when the main social problems could manifest even in a system built with state-of-the-art privacy-enhancing technologies. The COVID-19 pandemic should sharpen our senses towards the importance of infrastructures and, more broadly, how to use technologies in societies in crisis.
- Veale M. (2023) Some Commonly-Held but Shaky Assumptions about Data, Privacy and Power Forthcoming in Maria Ioannidou, and Despoina Mantzari (eds.) Research Handbook on Competition Law and Data Privacy (Edward Elgar, forthcoming)
Data has been seen as central to understanding privacy, informational power, and increasingly, digital-era competition law. Data is not unimportant, but it is misunderstood. I highlight several assumptions in need of challenge. Firstly, that data protection is distinct from privacy, and has a broader role correcting digitally-exacerbated power asymmetries. Secondly, contrary to economic received wisdom, data is not fully non-rivalrous due to the infrastructural implications of its integration. Thirdly, data can be less important than capacity for experimentation and intervention, which is not simple to ‘open up’. Lastly, data is increasingly unimportant due to large firms’ investments in confidential computing technologies, facilitating distributed analysis, learning, and even microtargeting. In the right conditions, data can be economically substituted for the ability to orchestrate a protocol — an infrastructural capacity unrecognised sufficiently in competition or other fields. This substitutability also requires the ability to force users to adhere to a protocol, bringing further privacy concerns. In sum, privacy, data protection and power need to be considered more closely entwined than at present, and all fields need to consider the infrastructural dimensions of large platforms, more than focussing on the data they accumulate.
- Veale M. (2023) Rights for Those Who Unwillingly, Unknowingly and Unidentifiably Compute! Forthcoming in Hans-Wolfgang Micklitz and Giuseppe Vettori (eds.), The Person and the Future of Private Law (Hart)
Profiling of individuals has long been a concern to scholars and civil society, and a lucrative way for platforms to shape markets and extract value. However, people and their environments are not just computed: they are increasingly also expected to become agents in large-scale computations. The strengthening of privacy and data protection law has been used as a reason to move more and more advanced computation concerning individuals, groups and environments onto people’s devices, in a shift called ‘local processing’. This sees individuals’ devices and software work together to undertake collective computations, which often claim to be confidential with regard to the data of each person involved. For example, using technologies such as secure multi-party computation, phones may work together to create models or analyses of spoken language, without revealing anything any user said to any other person. Such privacy-enhancing technologies equate privacy with confidentiality, and have interesting potential, but seeing individuals as participants in a computation raises new challenges. What autonomy do people have to shape such participation, given their limited technical and practical control over the devices in their pockets? Does their contribution to ethically questionable computation bring some responsibility, and should they be facilitated to refuse to participate in it? How does the individual relate to the overarching forces that orchestrate their ‘personal’ computers? Here, I present a guide and an agenda to navigate these issues, and analyse emerging regimes, such as the ex ante provisions in the Digital Markets Act and the Data Act, as well as interpretations of the General Data Protection Regulation, to understand how, if at all, they support those that unwillingly, unknowingly and perhaps even unidentifiably facilitate, rather than become the subject of, controversial computation.
- Veale M. (2023) Denied by Design: Data Access Rights in Encrypted Infrastructures Forthcoming in Jef Ausloos and Siddharth P de Souza (eds) Research Access to Digital Infrastructures. doi:10.31235/osf.io/94y6r
There are increasing demands and hopes for data access provisions to open systems up to accountability. In parallel, major technology platforms and stacks are encrypting their business models and moving increasingly to privacy-enhancing technology approaches such as 'confidential computing'. These go beyond encrypting the content of communication to encrypting the very approaches that underpin their constitutive algorithmic systems, such as those used in recommendation and allocation. The motivations for these range from concerns around privacy to desires to avoid liability by seeing, hearing — and then presumably doing — no evil. Approaches to studying and improving these systems are nascent even for their developers, who can struggle with a lack of telemetry and feedback data when trying to tweak, adjust and increase the functionality of the systems they deploy. Given that platforms themselves lack data on these systems, the task is doubly hard for external researchers. This paper characterises those challenges and suggests pathways and approaches legal regimes could take to ensure they remain accountable.
- Veale M., Matus K., Gorwa R. (2023) AI and Global Governance: Modalities, Rationales, Tensions 19 Annual Review of Law and Social Science.
Artificial intelligence (AI) is a salient but polarizing issue of recent times. Actors around the world are engaged in building a governance regime around it. What exactly the “it” is that is being governed, how, by whom, and why—these are all less clear. In this review, we attempt to shine some light on those questions, considering literature on AI, the governance of computing, and regulation and governance more broadly. We take critical stock of the different modalities of the global governance of AI that have been emerging, such as ethical councils, industry governance, contracts and licensing, standards, international agreements, and domestic legislation with extraterritorial impact. Considering these, we examine selected rationales and tensions that underpin them, drawing attention to the interests and ideas driving these different modalities. As these regimes become clearer and more stable, we urge those engaging with or studying the global governance of AI to constantly ask the important question of all global governance regimes: Who benefits?
- Cobbe J., Veale M, Singh J. (2023) Understanding accountability in algorithmic supply chains FAccT'23: Proceedings of the ACM Conference on Fairness, Accountability, and Transparency doi:10.1145/3593013.3594073.
Academic and policy proposals on algorithmic accountability often seek to understand algorithmic systems in their socio-technical context, recognising that they are produced by ‘many hands’. Increasingly, however, algorithmic systems are also produced, deployed, and used within a supply chain comprising multiple actors tied together by flows of data between them. In such cases, it is the working together of an algorithmic supply chain of different actors who contribute to the production, deployment, use, and functionality that drives systems and produces particular outcomes. We argue that algorithmic accountability discussions must consider supply chains and the difficult implications they raise for the governance and accountability of algorithmic systems. In doing so, we explore algorithmic supply chains, locating them in their broader technical and political economic context and identifying some key features that should be understood in future work on algorithmic governance and accountability (particularly regarding general purpose AI services). To highlight ways forward and areas warranting attention, we further discuss some implications raised by supply chains: challenges for allocating accountability stemming from distributed responsibility for systems between actors, limited visibility due to the accountability horizon, service models of use and liability, and cross-border supply chains and regulatory arbitrage.
- Veale M., Silberman M. Six, Binns R. (2023) Fortifying the algorithmic management provisions in the proposed Platform Work Directive European Labour Law Journal doi:10.1177/20319525231167983
The European Commission proposed a Directive on Platform Work at the end of 2021. While much attention has been placed on its effort to address misclassification of the employed as self-employed, it also contains ambitious provisions for the regulation of the algorithmic management prevalent on these platforms. Overall, these provisions are well-drafted, yet they require extra scrutiny in light of the fierce lobbying and resistance they will likely encounter in the legislative process, in implementation and in enforcement. In this article, we place the proposal in its sociotechnical context, drawing upon wide cross-disciplinary scholarship to identify a range of tensions, potential misinterpretations and perversions that should be pre-empted and guarded against at the earliest possible stage. These include improvements to ex ante and ex post algorithmic transparency; identifying and strengthening the standard against which human reviewers of algorithmic decisions review; anticipating challenges of representation and organising in complex platform contexts; creating realistic ambitions for digital worker communication channels; and accountably monitoring and evaluating impacts on workers while limiting data collection. We encourage legislators and regulators at both European and national level to act to fortify these provisions in the negotiation of the Directive, its potential transposition, and in its enforcement.
2022
- Veale M. (2022). Schools must resist big EdTech – but it won’t be easy. In S. Livingstone & K. Pothong (Eds.), Education Data Futures: Critical, Regulatory and Practical Reflections (pp. 67–78). 5Rights: London. HTML mirror
This chapter outlines some of the challenges that arise when schools are exposed to the business models of technology platforms. It describes their most salient impacts, and critically evaluates approaches schools can take to ensure that their pedagogical autonomy and student privacy are safeguarded.
- Troncoso, C., Bogdanov, D., Bugnion, E., Chatel, S., Cremers, C., Gürses, S., Hubaux, J.-P., Jackson, D., Larus, J. R., Lueks, W., Oliveira, R., Payer, M., Preneel, B., Pyrgelis, A., Salathé, M., Stadler, T., & Veale, M. (2022). Deploying decentralized, privacy-preserving proximity tracing. Communications of the ACM, 65(9), 48–57. doi:10.1145/3524107
This paper outlines some of the practical challenges of deploying Bluetooth contact tracing technologies, drawing on the team's experience deploying DP-3T.
- Matus K.J.M. and Veale M. (2022) Certification Systems for Machine Learning: Lessons from Sustainability, Regulation & Governance 16(1) 177-196 doi:10.1111/rego.12417
Policy challenges of machine learning and sustainability share significant structural similarities, including difficult-to-observe credence properties, such as data collection characteristics or carbon emissions from model training, and value chain concerns, including core-periphery inequalities, networks of labor, and fragmented and modular value creation. We apply research on certification systems in sustainability, particularly of commodities, to generate lessons across both areas, informing emerging proposals such as the EU’s AI Act.
- Veale M., Nouwens M., Santos C. (2022) Impossible Asks: Can the Transparency and Consent Framework Ever Authorise Real-Time Bidding After the Belgian DPA Decision? 2022 Technology and Regulation 12 doi:10.26116/techreg.2022.002
This paper summarises and analyses a decision of the Belgian Data Protection Authority concerning IAB Europe and its Transparency and Consent Framework (TCF). We argue that by characterising IAB Europe as a joint controller with RTB actors, this important decision gives DPAs an agreed-upon blueprint to deal with a structurally difficult enforcement challenge. Furthermore, beneath the DPA’s simple-looking remedial orders lie deep technical and organisational tensions. We analyse these “impossible asks”, concluding that absent a fundamental change to RTB, IAB Europe will be unable to adapt the TCF to bring RTB into compliance with the decision.
- Veale M., Zuiderveen Borgesius F. (2022) Adtech and Real-Time Bidding under European Data Protection Law 23(2) German Law Journal doi:10.31235/osf.io/wg8fq (published online as pre-print April 2021)
This paper analyses the extent to which practices of real-time bidding are compatible with the requirements regarding (i) a legal basis for processing, (ii) transparency, and (iii) security in European data protection law. We conclude that, in concept and in practice, RTB is structurally difficult to reconcile with European data protection law.
Covered in the Guardian and by the Norwegian Consumer Council. Cited extensively and extracted upon in the decision of the Belgian Data Protection Authority concerning IAB Europe, itself analysed in Veale, Nouwens and Santos (2022) above.
- Yaghini M., Kulynych B., Cherubin G., Veale M., Troncoso C. (2022) Disparate Vulnerability to Membership Inference Attacks 2022(1) Proceedings on Privacy Enhancing Technologies 460-480 doi:10.2478/popets-2022-0023 mirror
- Pavel V., Kind C., Strait A., Reeve O., Peppin A., Szymielewicz K., Veale M., MacDonald R., Lynskey O., Coyle D., Nemitz P. Rethinking data and digital power (Ada Lovelace Institute, 2022)
2021
- Binns R., Veale M. (2021) Is That Your Final Decision? Multi-Stage Profiling, Selective Effects, and Article 22 of the GDPR International Data Privacy Law 11(4) 319–332 doi:10.1093/idpl/ipab020 mirror
Little attention has been paid to the GDPR's Article 22 in light of decision-making processes with multiple stages, potentially both manual and automated. We diagrammatically identify and analyse five main complications: the potential for selective automation on subsets of data subjects despite generally adequate human input; the ambiguity around where to locate the decision itself; whether ‘significance’ should be interpreted in terms of any potential effects or only selectively in terms of realised effects; the potential for upstream automation processes to foreclose downstream outcomes despite human input; and that a focus on the final step may distract from the status and importance of upstream processes.
- Veale M., Zuiderveen Borgesius F. (2021) Demystifying the Draft EU Artificial Intelligence Act 22(4) Computer Law Review International 97-112 [open access version] doi:10.9785/cri-2021-220402
We present an overview of the proposed EU AI Act and analyse its implications, drawing on scholarship ranging from the study of contemporary AI practices to the structure of EU product safety regimes over the last four decades. We find that some provisions of the draft AI Act have surprising legal implications, whilst others may be largely ineffective at achieving their stated goals, including the enforcement regime and the effect of maximum harmonisation on the space for AI policy more generally.
Covered in POLITICO, the New York Times, Engineering and Technology, by the European Parliament, and consultation responses including Amnesty International, EDRi, Verbraucherzentrale Bundesverband, European Center for Not-for-Profit Law, EPIC and Access Now.
- Taylor L., Milan S., Veale M. and Gürses S. (2021) Promises made to be broken: Performance and performativity in digital vaccine and immunity certification European Journal of Risk Regulation doi:10.1017/err.2021.26
Digital vaccination certification involves making many promises, few of which can realistically be kept. In this paper, we demonstrate how this phenomenon constitutes various forms of theatre – immunity theatre, border theatre, behavioural theatre and equality theatre – doing so by drawing on perspectives from technology regulation, migration studies and critical geopolitics. A shorter version of this paper is available as a LexAtlas blog.
- Lueks W., Gürses S., Veale M., Bugnion E., Salathé M., Paterson K.G., Troncoso C. (2021) CrowdNotifier: Decentralized privacy-preserving presence tracing 2021(4) Proceedings on Privacy Enhancing Technologies 350-368 doi:10.2478/popets-2021-0074
There is growing evidence that SARS-CoV-2 can be transmitted beyond close proximity contacts, in particular in closed and crowded environments with insufficient ventilation. To help mitigation efforts, contact tracers need a way to notify those who were present in such environments at the same time as infected individuals. Neither traditional human-based contact tracing powered by handwritten or electronic lists, nor Bluetooth-enabled proximity tracing can handle this problem efficiently. In this paper, we propose CrowdNotifier, a protocol that can complement manual contact tracing by efficiently notifying visitors of venues and events with SARS-CoV-2-positive attendees. We prove that CrowdNotifier provides strong privacy and abuse-resistance, and show that it can scale to handle notification at a national scale. This protocol has since been adapted and deployed by national responses including Germany's CoronaWarnApp.
Covered in Netzpolitik, Die Zeit, Le Soir.
- King J. et al (eds.) Oxford Compendium of National Legal Responses to Covid-19 (Oxford University Press 2021)
The Lex-Atlas: Covid-19 (LAC19) project provides a scholarly report and analysis of national legal responses to Covid-19 around the world. Nearly 200 jurists participating in the LAC19 network have contributed to writing national country reports. The Oxford Compendium of National Legal Responses to Covid-19 launched on 21 April 2021 with 19 Country and Territory Reports; a further 41 will be added on a rolling basis across the Spring and Summer of 2021. More information is available on the LexAtlas website.
- Marsden C., Brown I., Veale M., Responding to Disinformation: Ten Recommendations for Regulatory Action and Forbearance in Martin Moore and Damian Tambini (eds), Regulating Big Tech (Oxford University Press 2021) 195-220. doi:10.1093/oso/9780197616093.003.0012 OA mirror
This chapter elaborates on challenges and emerging best practices for state regulation of electoral disinformation throughout the electoral cycle. It is based on research for three studies during 2018-20: into election cybersecurity for the Commonwealth; on the use of Artificial Intelligence (AI) to regulate disinformation for the European Parliament; and for UNESCO, the United Nations body responsible for education.
2020
- Ausloos J., Veale M. (2020) Researching with Data Rights Technology and Regulation 2020 doi:10.26116/techreg.2020.010.
An introduction to the possibilities, limitations and considerations involved in using data protection transparency provisions as a research method.
- Veale M. (2020) Sovereignty, Privacy, and Contact Tracing Protocols in Taylor, L., Sharma, G., Martin, A.K., Jameson, S.M. (Eds.), Data Justice and COVID-19: Global Perspectives. London: Meatspace Press.
A short chapter on the interaction of platforms, technology and contact tracing systems.
Read more: op-ed in the Guardian.
- Troncoso C. et al. (2020) Decentralised Privacy-Preserving Proximity Tracing 43 IEEE Data Eng Bull 36.
The DP-3T Bluetooth proximity tracing protocol white paper, for supporting contact tracing efforts during COVID-19. See more on the GitHub.
See the Wikipedia page for more information and impact.
- Veale M., Brown I. (2020) Cybersecurity Internet Policy Review 9(4) doi:10.14763/2020.4.1533. pdf
A guide to the concept of cybersecurity over time, from multiple disciplinary angles.
- Brown I., Marsden C., Lee J., Veale M. (2020) Cybersecurity for Elections: A Commonwealth Guide on Best Practice London: Commonwealth Secretariat doi:10.31228/osf.io/tsdfb mirror.
A book resulting from a study of cybersecurity in an electoral context undertaken for the Commonwealth. Contains recommendations and best practices.
- Veale M. (2020) A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence European Journal of Risk Regulation doi:10/djhf.
A critical view of the EU HLEG-AI's policy recommendations, highlighting the lack of focus on power and infrastructure, the pervasive technosolutionism, the problematic representativeness of the group, and the reluctance to talk about the funding of regulators, among other issues.
- Ausloos J., Mahieu R. & Veale M. (2020) Getting Data Subject Rights Right Journal of Intellectual Property, Information Technology and Electronic Commerce Law (JIPITEC) doi:10/djhg.
A guide to data rights, recent case law, challenges and trajectories to feed into the European Data Protection Board's drafting process for their data rights guidance.
- Nouwens M., Liccardi I., Veale M., Karger D., Kagal L. (2020) Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20), April 25–30, 2020, Honolulu, HI, USA (ACM 2020). arXiv:2001.02479
Using a scrape of the top 10,000 UK websites, we show that only 11.8% of the sites using the five big providers of consent pop-up libraries have configured them in ways minimally compliant with the GDPR and ePrivacy law. A much-simplified sketch of one step of such a crawl appears after this entry.
Coverage in the BBC, TechCrunch, DR (Danish public broadcaster), Fast Company, Les Echos, cited by the Irish Data Protection Commissioner, the American Bar Association, Mayer Brown, Orange, Stiftung Neue Verantwortung, and Facebook, credited with changing regulatory guidance around cookies and consent in Denmark.
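The study itself used an instrumented, browser-based crawl. Purely as an illustration of the kind of tooling involved, the sketch below shows one much-simplified step: working out which consent management platform (CMP) a site loads by looking for provider scripts in its HTML. The provider names and script-domain fingerprints here are assumptions for demonstration, not the detection method used in the paper.

```python
# Illustrative sketch only: detect which (if any) known consent pop-up
# providers' scripts appear in a page's raw HTML. The fingerprints below are
# assumed for demonstration; the study itself rendered pages in a browser and
# examined how each pop-up was configured.
import requests

CMP_FINGERPRINTS = {
    "QuantCast": ["quantcast.mgr.consensu.org", "cmp.quantcast.com"],
    "OneTrust": ["cdn.cookielaw.org"],
    "TrustArc": ["consent.trustarc.com", "truste.com"],
    "Cookiebot": ["consent.cookiebot.com"],
    "Crownpeak": ["evidon.com"],
}

def detect_cmps(url: str) -> list[str]:
    """Return the fingerprinted providers whose scripts appear in the page."""
    html = requests.get(url, timeout=10).text.lower()
    return [name for name, hints in CMP_FINGERPRINTS.items()
            if any(hint in html for hint in hints)]

if __name__ == "__main__":
    for site in ["https://example.com"]:  # stand-in for a list of sites to crawl
        print(site, detect_cmps(site))
```

Sites identified in this way were then examined in the paper for how their pop-ups were configured, for example whether rejecting tracking was as easy as accepting it.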
2019
- Veale M. (2019) Governing Machine Learning that Matters (PhD, University College London). https://discovery.ucl.ac.uk/id/eprint/10078626/
This PhD thesis unpacks the provisions and framework of European data protection law in relation to social concerns and machine learning’s technical characteristics to identify tensions between the legal regime and machine learning practice; and draws on empirical data of machine learning in use in public sector institutions around the world to identify tensions between scholarship on fair and transparent machine learning and the social routines attempting to deploy it responsibly on the ground.
- The Law Society of England and Wales (Lead author: M Veale) Algorithms in the Criminal Justice System (The Law Society of England and Wales, 2019). direct mirror link
This report examines the England and Wales landscape around criminal justice and the increasing and varied use of algorithmic systems within it, proposing an array of policy recommendations.
- Veale M. & Brass I. (2019) Administration by Algorithm? Public Management meets Public Sector Machine Learning. In: Algorithmic Regulation (K Yeung & M Lodge eds., Oxford University Press) doi:10/gfzvz8.
This chapter asks and attempts to answer aspects of three main questions: What are the drivers and logics behind the use of machine learning in the public sector, and how should we understand it in the contexts of administrations and their tasks? Is the use of machine learning in the public sector a smooth continuation of ‘e-Government’, or does it pose fundamentally different challenges to the practice of public administration? How are public management decisions and practices at different levels enacted when machine learning solutions are implemented in the public sector?
- Delacroix S. & Veale M. (2019) Smart Technologies and Our Sense of Self: Going Beyond Epistemic Counter-Profiling. In: Life and the Law in the Era of Data-Driven Agency (K O'Hara & M Hildebrandt eds., Edward Elgar) doi:10/gfzvz9.
This chapter focuses on the extent to which sophisticated profiling techniques may end up undermining, rather than enhancing, our capacity for ethical agency - and how, if at all, personalisation and recommendation systems may be responsibly designed in light of this.
2018
- Veale M., Binns R., & Edwards L. (2018). Algorithms That Remember: Model Inversion Attacks and Data Protection Law Philosophical Transactions of the Royal Society A, doi:10.1098/rsta.2018.0083 [mirror]
Recent 'model inversion' attacks from the information security literature indicate that machine learning models might be personal data, as they might leak data used to train them. We analyse these attacks and discuss their legal implications. A toy numerical illustration of the underlying intuition follows this entry.
Coverage and citation by the Information Commissioner's Office in their AI Auditing Framework, the Council of Europe, the European Parliament, the Royal Society, Chatham House, the Future of Privacy Forum.
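As a purely illustrative aside, the intuition that a trained model can reveal information about its training data can be shown numerically. The toy sketch below (all details assumed for demonstration, and not one of the attacks analysed in the paper) climbs a classifier's confidence for one class by gradient ascent on the input, recovering a point that sits among that class's training examples.

```python
# Toy sketch only: 'inverting' a trained classifier by climbing its confidence
# for one class can yield an input resembling the data used to train it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Tiny synthetic 'training set': two well-separated classes in two dimensions.
X0 = rng.normal(loc=[-2.0, -2.0], scale=0.3, size=(20, 2))
X1 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(20, 2))
X, y = np.vstack([X0, X1]), np.array([0] * 20 + [1] * 20)

model = LogisticRegression().fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

# Gradient ascent on log P(class = 1 | x), keeping the candidate input within
# plausible feature bounds (here, the observed range of the data).
lo, hi = X.min(axis=0), X.max(axis=0)
x = rng.uniform(lo, hi)                            # random starting guess
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))         # model's P(class = 1 | x)
    x = np.clip(x + 0.1 * (1.0 - p) * w, lo, hi)   # since d/dx log P = (1 - p) * w

print("Reconstructed class-1 exemplar:    ", np.round(x, 2))
print("Mean of real class-1 training data:", np.round(X1.mean(axis=0), 2))
```

For a model trained on many diverse examples per class, the recovered point is only a blurry average; the legal question explored in the paper is when such leakage makes the model itself personal data.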
- Kilbertus N., Gascón A., Kusner M., Veale M., Gummadi K.P., Weller A. (2018) Blind Justice: Fairness with Encrypted Sensitive Attributes Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden, PMLR 80. [mirror]
Where 'debiasing' approaches are appropriate, they assume modellers have access to often highly sensitive protected characteristics. We show how, using secure multi-party computation, a regulator and a modeller can build and verify a 'fair' model without ever seeing these characteristics, and can verify that decisions were taken using a given 'fair' model. A toy sketch of the secret-sharing primitive such protocols build on follows this entry.
Coverage in the Financial Times.
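As a purely illustrative aside, the cryptographic building block such protocols rest on can be shown in a few lines: additive secret sharing splits each sensitive attribute into random shares so that neither party sees individual values, yet aggregates needed for fairness constraints can still be computed jointly. The sketch below shows only this primitive, with assumed names and values, not the protocol in the paper.

```python
# Toy sketch of additive secret sharing (the primitive only, not the paper's
# protocol): each person's sensitive attribute is split into two random shares,
# one per party, which individually reveal nothing but jointly allow sums.
import secrets

P = 2**61 - 1  # a large prime modulus for the arithmetic shares

def share(value: int) -> tuple[int, int]:
    """Split a value into two additive shares modulo P."""
    r = secrets.randbelow(P)
    return r, (value - r) % P

# Hypothetical protected attributes (1 = member of the protected group).
attributes = [1, 0, 1, 1, 0]

modeller_shares, regulator_shares = zip(*(share(a) for a in attributes))

# Each share is uniformly random on its own; only combined do the two parties
# learn the aggregate, e.g. how many people are in the protected group.
count = (sum(modeller_shares) + sum(regulator_shares)) % P
print("Protected-group count recovered from shares:", count)  # prints 3
```

Real deployments would compute fairness statistics, for example group-wise error rates, over such shares within a multi-party computation framework rather than revealing even raw aggregates unnecessarily.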
- Veale M., Van Kleek M., & Binns R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), doi:10.1145/3173574.3174014 [mirror]
We interviewed 27 public sector machine learning practitioners about how they cope with challenges of fairness and accountability. Their problems often differ from those in FAT/ML research so far, and include internal gaming, changing data distributions, inter-departmental communication, how to augment model outputs, and how to transmit hard-won social practices.
Coverage and use by the Australian Human Rights Commission, the Royal United Services Institute for Defence and Security Studies (RUSI), the European Parliament, the European Commission, the Boston Consulting Group, the United Nations Economic and Social Commission for Asia and the Pacific, the Alan Turing Institute/UK Office for AI, UNESCO, the German Government's Expert Council for Consumer Affairs, the Council of Europe, and the Centre for Data Ethics and Innovation.
- Binns R., Van Kleek M., Veale M., Lyngs U., Zhao J., & Shadbolt N. (2018). ‘It’s Reducing a Human Being to a Percentage’: Perceptions of Justice in Algorithmic Decisions Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), doi:10.1145/3173574.3173951 [mirror]
We presented participants in the lab and online with adverse algorithmic decisions and different explanations of them. We found strong dislike of case-based explanations, in which the participant was compared to a similar individual, even though these are arguably highly faithful to the way machine learning systems work.
- Van Kleek M., Seymour W., Veale M., Binns R. & Shadbolt N. (2018) The Need for Sensemaking in Networked Privacy and Algorithmic Responsibility Sensemaking in a Senseless World: Workshop at ACM CHI’18, 22 April 2018, Montréal, Canada. [mirror]
In this workshop paper, we argue that sense-making is important not just for experts but for laypeople, and that expertise from the HCI sense-making community would be well-suited for many contemporary privacy and algorithmic responsibility challenges.
- Veale M., Binns R. & Van Kleek M. (2018) Some HCI Priorities for GDPR-Compliant Machine Learning The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI’18, 22 April 2018, Montréal, Canada. [mirror]
The General Data Protection Regulation has significant effects for machine learning modellers. We outline what human-computer interaction research can bring to strengthening the law, and enabling better trade-offs.
- Veale M., Binns R., & Ausloos J. (2018). When Data Protection by Design and Data Subject Rights Clash International Data Privacy Law 8(2), 105-123, doi:10.1093/idpl/ipy002 [mirror]
Data protection law gives individuals rights, such as to access or erase data. Yet when data controllers slightly de-identify data, they remove the ability to grant these rights, without removing real re-identification risk. We look at this in legal and technological context, and suggest provisions to help navigate this trade-off between confidentiality and control.
- Mavroudis V. & Veale M. (2018). Eavesdropping whilst you’re shopping: Balancing Personalisation and Privacy in Connected Retail Spaces. Proceedings of Living in the Internet of Things 2018, doi:10.1049/cp.2018.0018.
In-store tracking, using passive and active sensors, is common. We look at this in technical context, as well as the European legal context of the GDPR and forthcoming ePrivacy Regulation. We consider two case studies: Amazon Go, and rotating MAC addresses.
- Edwards L., & Veale M. (2018). Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”? IEEE Security & Privacy 16(3), 46-54, doi:10.1109/MSP.2018.2701152 [mirror]
We outline the European 'right to an explanation' debate, consider French law and the Council of Europe Convention 108. We argue there is an unmet need to empower third party bodies with investigative powers, and elaborate on how this might be done.
- Veale M., & Edwards L. (2018). Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling Computer Law & Security Review 34(2), 398-404, doi:10.1016/j.clsr.2017.12.002. [mirror]
We critically examine the Article 29 Working Party guidance that relates most to machine learning and algorithmic decisions, finding it has interesting consequences for automation and discrimination in European law.
2017
- Veale M., & Binns R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data Big Data & Society 4(2), doi:10.1177/2053951717743530. [mirror]
FAT/ML techniques for 'debiasing' machine learned models all assume the modeller can access the sensitive data. This is unrealistic, particularly in light of stricter privacy law. We consider three ways some level of understanding of discrimination might be possible, even without collecting such data as ethnicity or sexuality.
- Edwards L., & Veale M. (2017). Slave to the Algorithm? Why a 'Right to an Explanation' is Probably Not the Remedy You Are Looking For Duke Law & Technology Review, 16(1), 18–84, doi:10.2139/ssrn.2972855. [mirror]
We consider the so-called 'right to an explanation' in the GDPR and in technical context, arguing that even if this right (tied to Article 22, but resting on a non-binding recital) were enforced, it would not trigger for group-based harms or in the important cases of decision-support. We argue instead for the use of instruments such as data protection impact assessments and data protection by design, as well as investigating the right to erasure and right to portability of trained models, as potential avenues to explore.
Coverage and use by the Article 29 Working Party (the official group of European regulators) in their guidance on the regulation that the paper itself analysed; the Information Commissioner's Office described it as "important to the development of the [regulator's] thinking" in this area; by the Council of Europe [1, 2, 3, 4], the European Commission (DG JRC, DG JUST, DG COMP), the European Parliament [1, 2, 3], the UN Special Rapporteur on Extreme Poverty and Human Rights Philip Alston, the Privacy Commissioner of Hong Kong, the German Government's Expert Council for Consumer Affairs, Amnesty International, RUSI, Access Now, Privacy International, the US Federal Trade Commissioner Noah Phillips, the Centre for Data Ethics and Innovation, ARTICLE 19, the Nuffield Foundation, the New Economics Foundation, and the Government of New Zealand; the House of Lords (13/11 2017 vol 785 col 1862–4 & 13/12 2017 vol 787 col 1575–7) with amendments based upon it; profiled in the Journal of Things We Like (Lots), awarded a Privacy Papers for Policymakers prize by the Future of Privacy Forum at the US Senate in 2019.
- Veale M. (2017). Data management and use: Case studies of technologies and governance London: The Royal Society; the British Academy. [mirror]
I authored the case studies for the Royal Society and British Academy report which led to the UK Government's new Centre for Data Ethics and Innovation. I also acted as drafting author on the main report.
- Veale M. (2017). Logics and practices of transparency and opacity in real-world applications of public sector machine learning FAT/ML'17 [mirror]
This is a preliminary version of the 'Fairness and accountability design needs' CHI'18 paper above.
- Binns R., Veale M., Van Kleek M., & Shadbolt N. (2017). Like trainer, like bot? Inheritance of bias in algorithmic content moderation 9th International Conference on Social Informatics (SocInfo 2017), 405–415, doi:10.1007/978-3-319-67256-4_32. Springer Lecture Notes in Computer Science. [mirror]
We considered the detection of offensive and hateful speech, looking at a dataset of 1 million annotated comments. Taking gender as an illustrative split (without making any generalisable claims), we illustrate how the labellers' conception of toxicity matters in the trained models downstream, and how bias in these systems will likely be very tricky to understand.
2016
- Veale M. (2016). Connecting diverse public sector values with the procurement of machine learning systems In: Data for Policy 2016 — Frontiers of Data Science for Government: Ideas, practices and projections. Cambridge, United Kingdom, 15–16 September 2016, doi:10.5281/zenodo.571786. [mirror]
A conference paper on public sector values in machine learning, and public sector procurement in practice.
2015
- Veale M., & Seixas R. (2015). Moving to metrics: Opportunities and challenges of performance-based sustainability standards S.A.P.I.EN.S, 5(1). [mirror]
This paper argues that performance-based sustainability standards, using a case study from the sugar-cane sector, have significant benefits over technology-based standards, and suggests directions in which this can be explored.