All publications should have at least one open access link. Get in touch if you have any problems accessing any of the materials.
- Veale M., Zuiderveen Borgesius F. (2021) Demystifying the Draft EU Artificial Intelligence Act 22(4) Computer Law Review International
- Taylor L., Milan S., Veale M. and Gürses S. (2021) Promises made to be broken: Performance and performativity in digital vaccine and immunity certification European Journal of Risk Regulation
- Matus K.J.M. and Veale M. (2021) Certification Systems for Machine Learning: Lessons from Sustainability Regulation and Governance
- Veale M., Zuiderveen Borgesius F. (2021) Adtech and Real-Time Bidding under European Data Protection Law German Law Journal
- Lueks W., Gürses S., Veale M., Bugnion E., Salathé M., Paterson K.G., Troncoso C. (2021) CrowdNotifier: Decentralized privacy-preserving presence tracing Proceedings on Privacy Enhancing Technologies
- King J. et al. (eds.) Oxford Compendium of National Legal Responses to Covid-19 (Oxford University Press 2021)
- Marsden C.T., Brown I., Veale M. (2021) Disinformation and digital dominance: Regulation through the lens of the election lifecycle in Martin Moore & Damian Tambini (eds.) Dealing with Digital Dominance (Oxford University Press 2021)
- Ausloos J., Veale M. (2020) Researching with Data Rights Technology and Regulation 2020 doi:10.26116/techreg.2020.010.
An introduction to the possibilities, limitations and considerations of using data protection transparency provisions as a research method.
- Veale M. (2020) Sovereignty, Privacy, and Contact Tracing Protocols in Taylor, L., Sharma, G., Martin, A.K., Jameson, S.M. (Eds.), Data Justice and COVID-19: Global Perspectives. London: Meatspace Press.
A short chapter on the interaction of platforms, technology and contact tracing systems.
- Troncoso C. et al. (2020) Decentralised Privacy-Preserving Proximity Tracing 43 IEEE Data Eng Bull 36.
The DP-3T Bluetooth proximity tracing protocol white paper, supporting contact tracing efforts during COVID-19. See more on GitHub.
A conceptual guide to the concept of cybersecurity over time, from multiple disciplinary angles.
- Brown I., Marsden C., Lee J., Veale M. (2020) Cybersecurity for Elections: A Commonwealth Guide on Best Practice London: Commonwealth Secretariat doi:10.31228/osf.io/tsdfb [mirror]
A book resulting from a study of cybersecurity in an electoral context undertaken for the Commonwealth. Contains recommendations and best practices.
- Veale M. (2020) A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence European Journal of Risk Regulation doi:10/djhf.
A critical view on the EU HLEG-AI's recent guidelines, highlighting their lack of focus on power and infrastructure, their pervasive technosolutionism, the problematic representativeness of the group, and the reluctance to discuss the funding of regulators, among other issues.
- Ausloos J., Mahieu R. & Veale M. (2020) Getting Data Subject Rights Right Journal of Intellectual Property, Information Technology and Electronic Commerce Law (JIPITEC) doi:10/djhg.
A guide to data rights, recent case law, challenges and trajectories to feed into the European Data Protection Board's drafting process for their data rights guidance.
- Nouwens M., Liccardi I., Veale M., Karger D., Kagal L. (2020) Dark Patterns after the GDPR: Scraping Consent Pop-ups and Demonstrating their Influence Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '20), April 25–30, 2020, Honolulu, HI, USA (ACM 2020). arXiv:2001.02479
We show, using a scrape of the top 10,000 UK websites, that only 11.8% of websites using the five big providers of consent pop-up libraries have configured them in ways minimally compliant with the GDPR and ePrivacy law.
- The Law Society of England and Wales (lead author: M Veale) (2019) Algorithms in the Criminal Justice System (The Law Society of England and Wales, 2019).
- Veale M. & Brass I. (2019) Administration by Algorithm? Public Management meets Public Sector Machine Learning. In: Algorithmic Regulation (K Yeung & M Lodge eds., Oxford University Press) doi:10/gfzvz8.
- Delacroix S. & Veale M. (2019) Smart Technologies and Our Sense of Self: Going Beyond Epistemic Counter-Profiling. In: Life and the Law in the Era of Data-Driven Agency (K O'Hara & M Hildebrandt eds., Edward Elgar) doi:10/gfzvz9.
- Veale M., Binns R., & Edwards L. (2018). Algorithms That Remember: Model Inversion Attacks and Data Protection Law Philosophical Transactions of the Royal Society A, doi:10.1098/rsta.2018.0083 [mirror]
Recent 'model inversion' attacks from the information security literature indicate that machine learning models might be personal data, as they might leak data used to train them. We analyse these attacks and discuss their legal implications.
- Kilbertus N., Gascón A., Kusner M., Veale M., Gummadi K.P., Weller A. (2018) Blind Justice: Fairness with Encrypted Sensitive Attributes Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden, PMLR 80. [mirror]
Where 'debiasing' approaches are appropriate, they assume modellers have access to often highly sensitive protected characteristics. We show how, using secure multi-party computation, a regulator and a modeller can build and verify a 'fair' model without ever seeing these characteristics, and can verify decisions were taken using a given 'fair' model.
- Veale M., Van Kleek M., & Binns R. (2018). Fairness and accountability design needs for algorithmic support in high-stakes public sector decision-making Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), doi:10.1145/3173574.3174014 [mirror]
We interviewed 27 public sector machine learning practitioners about how they cope with challenges of fairness and accountability. Their problems often differ from those addressed in FAT/ML research so far, and include internal gaming, changing data distributions, inter-departmental communication, how to augment model outputs, and how to transmit hard-won social practices.
- Binns R., Van Kleek M., Veale M., Lyngs U., Zhao J., & Shadbolt N. (2018). 'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI'18), doi:10.1145/3173574.3173951 [mirror]
We presented participants in the lab and online with adverse algorithmic decisions and different explanations of them. We found participants strongly disliked case-based explanations, in which they were compared to a similar individual, even though these are arguably highly faithful to the way machine learning systems work.
- Van Kleek M., Seymour W., Veale M., Binns R. & Shadbolt N. (2018) The Need for Sensemaking in Networked Privacy and Algorithmic Responsibility Sensemaking in a Senseless World: Workshop at ACM CHI’18, 22 April 2018, Montréal, Canada. [mirror]
In this workshop paper, we argue that sense-making is important not just for experts but for laypeople, and that expertise from the HCI sense-making community would be well-suited for many contemporary privacy and algorithmic responsibility challenges.
- Veale M., Binns R. & Van Kleek M. (2018) Some HCI Priorities for GDPR-Compliant Machine Learning The General Data Protection Regulation: An Opportunity for the CHI Community? (CHI-GDPR 2018), Workshop at ACM CHI’18, 22 April 2018, Montréal, Canada. [mirror]
The General Data Protection Regulation has significant effects for machine learning modellers. We outline what human-computer interaction research can bring to strengthening the law, and enabling better trade-offs.
- Veale M., Binns R., & Ausloos J. (2018). When Data Protection by Design and Data Subject Rights Clash International Data Privacy Law 8(2), 105-123, doi:10.1093/idpl/ipy002 [mirror]
Data protection law gives individuals rights, such as to access or erase data. Yet when data controllers slightly de-identify data, they remove the ability to grant these rights, without removing real re-identification risk. We look at this in legal and technological context, and suggest provisions to help navigate this trade-off between confidentiality and control.
- Mavroudis V. & Veale M. (2018). Eavesdropping whilst you’re shopping: Balancing Personalisation and Privacy in Connected Retail Spaces. Proceedings of Living in the Internet of Things 2018, doi:10.1049/cp.2018.0018.
In-store tracking, using passive and active sensors, is common. We look at this in technical context, as well as the European legal context of the GDPR and forthcoming ePrivacy Regulation. We consider two case studies: Amazon Go, and rotating MAC addresses.
- Edwards L., & Veale M. (2018). Enslaving the Algorithm: From a “Right to an Explanation” to a “Right to Better Decisions”? IEEE Security & Privacy 16(3), 46-54, doi:10.1109/MSP.2018.2701152 [mirror]
We outline the European 'right to an explanation' debate, considering French law and Council of Europe Convention 108. We argue there is an unmet need to empower third-party bodies with investigative powers, and elaborate on how this might be done.
- Veale M., & Edwards L. (2018). Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling Computer Law & Security Review 34(2), 398-404, doi:10.1016/j.clsr.2017.12.002. [mirror]
We critically examine the Article 29 Working Party guidance that relates most to machine learning and algorithmic decisions, finding it has interesting consequences for automation and discrimination in European law.
- Veale M., & Binns R. (2017). Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data Big Data & Society 4(2), doi:10.1177/2053951717743530. [mirror]
FAT/ML techniques for 'debiasing' machine learned models all assume the modeller can access the sensitive data. This is unrealistic, particularly in light of stricter privacy law. We consider three ways some level of understanding of discrimination might be possible, even without collecting such data as ethnicity or sexuality.
- Edwards L., & Veale M. (2017). Slave to the Algorithm? Why a 'Right to an Explanation' is Probably Not the Remedy You Are Looking For Duke Law & Technology Review, 16(1), 18–84, doi:10.2139/ssrn.2972855. [mirror]
We consider the so-called 'right to an explanation' in the GDPR in its legal and technical context, arguing that even if it (as manifested in Article 22) were enforced on the basis of the non-binding recital in European law, it would not trigger for group-based harms or in the important cases of decision support. We argue instead for instruments such as data protection impact assessments and data protection by design, as well as for investigating the right to erasure and right to portability of trained models, as potential avenues to explore.
- Veale M. (2017). Data management and use: Case studies of technologies and governance London: The Royal Society; the British Academy. [mirror]
I authored the case studies for the Royal Society and British Academy report which led to the UK Government's new Centre for Data Ethics and Innovation. I also acted as drafting author on the main report.
- Veale M. (2017). Logics and practices of transparency and opacity in real-world applications of public sector machine learning FAT/ML'17 [mirror]
This is a preliminary version of the 'Fairness and accountability design needs' CHI'18 paper above.
- Binns R., Veale M., Van Kleek M., & Shadbolt N. (2017). Like trainer, like bot? Inheritance of bias in algorithmic content moderation 9th International Conference on Social Informatics (SocInfo 2017), 405–415, doi:10.1007/978-3-319-67256-4_32. Springer Lecture Notes in Computer Science. [mirror]
We considered the detection of offensive and hateful speech, looking at a dataset of 1 million annotated comments. Taking gender as an illustrative split (without making any generalisable claims), we illustrate how the labellers' conception of toxicity matters in the trained models downstream, and how bias in these systems will likely be very tricky to understand.
- Veale M. (2016). Connecting diverse public sector values with the procurement of machine learning systems In: Data for Policy 2016 — Frontiers of Data Science for Government: Ideas, practices and projections. Cambridge, United Kingdom, 15–16 September 2016, doi:10.5281/zenodo.571786. [mirror]
A conference paper on public sector values in machine learning, and public sector procurement in practice.
- Veale M., & Seixas R. (2015). Moving to metrics: Opportunities and challenges of performance-based sustainability standards S.A.P.I.EN.S, 5(1). [mirror]
This paper argues that performance-based sustainability standards, using a case study from the sugar-cane sector, have significant benefits over technology-based standards, and suggests directions in which this can be explored.