Algorithms, Law, and Policy

Algorithms are deployed “in the wild” in numerous sectors of public and private industry, including medicine, law, education, and employment. Many of these sectors are grappling with how laws and policies are shaped by, respond to, and leverage this pervasive use of algorithms. The EAAMO-Bridges Working Group on Algorithms, Law, and Policy focuses on this complex relationship between algorithms and mechanisms on the one hand and law and policy on the other. Topics the group works on include, but are not limited to, free speech, content moderation, antitrust, the use of “black box” machine learning models, data-driven algorithms, and decision-support tools. We study the real-world impacts of these methods through the lens of law and public policy. We also aim to support law and policy practitioners, for instance by providing digital forensic expert assistance. The group began meeting in Spring 2021, growing out of the earlier Bias, Discrimination, and Fairness working group.

We meet every other week for a presentation from an invited speaker or group member, followed by a group discussion.

Spring 2024 #

Topics of interest: #

Our focus this term is exploring global perspectives on AI policy and regulation. In each session, we will discuss one concrete policy issue. We hope to identify the available policy and technical options, understand the rationales behind different policy choices, and map areas of global consensus and divergence. We are currently putting together a schedule of biweekly meetings between January and May 2024. Sessions will be a mix of invited speakers, presentations by group members, and guided readings and discussions.

We warmly welcome group member presentations on the following concrete AI governance issues (among others), with the aim of understanding policy and technical options, rationales, and areas of global consensus and divergence:

  1. Can AI-generated content have copyright?
  • A Beijing court finds an AI-generated image is copyrightable, link.
  • Opinion from Prof. Angela Huyue Zhang on the Beijing ruling: Chinese authorities have become fixated on ensuring that the country can surpass the US to become the global leader in artificial intelligence, which she considers short-sighted.
  • US: a work entirely created by an AI cannot be protected under copyright. See here and here.
  • A discussion of the Chinese and US approaches.
  2. Can AI be trained on copyrighted data?
  • In OpenAI’s written evidence to the UK Parliament: “Because copyright today covers virtually every sort of human expression, it would be impossible to train today’s leading AI models without using copyrighted materials.”
  • One of the key disputes is whether training generative AI models on copyrighted work is a “fair use”.
  • The use of copyrighted data for AI training is supported in Japan and Israel. Japan’s view may be changing.
  • According to the December 2023 text of the EU AI Act: any use of copyright-protected content requires the authorization of the rightholder unless relevant copyright exceptions apply. Any provider placing a general-purpose AI model on the EU market should comply with this obligation, regardless of where the training of the model takes place; no provider should be able to gain a competitive advantage by applying a lower copyright standard than the one provided in the EU.
  • Major court cases have been filed in the US, including New York Times v. OpenAI and visual artists v. Stability AI.
  • A good review of this issue covering Japan, the EU, and the US: link.
  • OpenAI’s response to the New York Times.
  3. The compute-threshold approach to regulating foundation models
  • The US White House’s executive order takes a threshold approach to regulating foundation models: many requirements, including reporting red-teaming results and the ownership of model weights, apply to foundation models trained above a given threshold of compute (a back-of-the-envelope sketch of this threshold appears after this list).
  • Andrew Ng’s criticism of the compute-threshold approach.
  4. Should foundation models be regulated?
  • Many are against regulating foundation models, including Andrew Ng, Yann LeCun, and LAION.
  • The criticism mainly concerns the negative impact on science, research, and the advancement of AI: openness and accessibility should not be sacrificed.
  • The alternative the critics support is regulation at the application level.
  • Many others support regulating foundation models because they are powerful, general-purpose, and open to dual use.
  5. Whether and how to regulate open-source datasets and models
  • Child sexual abuse material was found in the LAION dataset, link.
  • Some argue against regulating open-source data and models, e.g., Yann LeCun.
  6. What values to align AI to
  • China’s interim guidelines for generative AI services require alignment with socialist values, link.
  • AI in Europe needs to respect fundamental rights and democracy, link.
  • Taiwan builds its own LLM to counter China’s influence, link.
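To make the compute-threshold numbers concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the 1e26-operations reporting threshold named in the October 2023 US executive order and the common 6 × parameters × tokens heuristic for estimating dense-transformer training compute; the model sizes below are illustrative examples, not actual training runs.

```python
# Back-of-the-envelope check against the executive order's compute threshold.
# Assumption: reporting obligations attach above 1e26 operations, per the
# October 2023 US executive order on AI.
EO_THRESHOLD_OPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute with the common 6 * N * D heuristic
    for dense transformers (N = parameters, D = training tokens)."""
    return 6 * n_params * n_tokens

# Illustrative (hypothetical) training runs, not real models.
runs = {
    "7B params, 2T tokens": training_flops(7e9, 2e12),         # ~8.4e22
    "70B params, 15T tokens": training_flops(70e9, 15e12),     # ~6.3e24
    "1.8T params, 10T tokens": training_flops(1.8e12, 10e12),  # ~1.1e26
}

for name, flops in runs.items():
    side = "above" if flops > EO_THRESHOLD_OPS else "below"
    print(f"{name}: {flops:.1e} ops -> {side} the 1e26 threshold")
```

Note that under this heuristic the trigger depends only on the scale of the training run, not on any measured capability of the resulting model, which is one source of the criticism discussed above.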

Invited speakers: #

  • Jonas Geiping (ELLIS Institute & MPI-IS Tübingen)
  • Prof. Angela Huyue Zhang (The University of Hong Kong)
  • Prof. Simon Chesterman (National University of Singapore)
  • Prof. Dr. Michèle Finck, LL.M. (University of Tübingen)
  • Sayash Kapoor (Princeton University)

Working Group Organizers #

Joachim Baumann, PhD Student, University of Zurich
Xudong Shen, PhD Student, National University of Singapore

With a lot of help from Ana-Andreea Stoica, Thomas Gilbert, and Ayse Yasar.

Meeting the ALP WG Speakers #

  • Gili Vidan: Counterfeit Deterrence (26 May 2022)
  • Thomas Gilbert: Accountability Infrastructure for Social Media Companies (19 May 2022)
  • Jake Goldenfein: Law’s Consumers and Platform Users (21 Apr 2022)
  • Bertan Turhan: Implementation of Affirmative Action Policy in India (07 Apr 2022)
  • Ayse Yasar: Perspectives on the EU Digital Legislative Package (24 Mar 2022)
  • Alex Sotropo: Computational Antitrust (10 Mar 2022)
  • Lauren Chambers: Campaigns to Regulate Facial Recognition at the Municipal and State Level (24 Feb 2022)
  • Shlomi Hod: Pedagogical Interventions in Algorithms, Law, and Policy (07 Dec 2021)
  • Mandy Lau: Online Speech Moderation as Language Policy (23 Nov 2021)
  • Logan Stapleton: Risk Assessment Algorithms in U.S. Child Welfare (09 Nov 2021)
  • Maksim Karliuk: The Role of Proportionality in AI Ethical Frameworks (24 Oct 2021)
  • Andreas Haupt: Recommender System Design and Mechanism Design (24 Oct 2021)
  • Burcu Baykurt: Algorithmic Accountability in the City: A Review of Local Policies in the U.S. (10 Jun 2021)
  • Seth Lazar: Legitimacy, Authority and the Political Value of Explanations (27 May 2021)
  • Fernando Delgado: Common Task Framework, meet Participatory Design (13 May 2021)
  • Yoan Hermstrüwer: Governing with Humans and Machines (29 Apr 2021)
  • Nana Nwachuku: The political influence of social media algorithms (15 Apr 2021)
  • Kandrea Wade: How we’ve taught algorithms to see identity: constructing race and gender in image databases for facial analysis (link) (01 Apr 2021)
  • Ayse Yasar: Antitrust and Big Tech (18 Mar 2021)
  • Thomas Gilbert: Mapping the Political Economy of Reinforcement Learning Systems: The Case of Autonomous Vehicles (link) (04 Mar 2021)