Algorithms, Law, and Policy
Algorithms are deployed “in the wild” in numerous sectors of public and private industry, including medicine, law, education, and employment. Many of these sectors are grappling with how laws and policies are shaped by, responding to, and leveraging this pervasive use of algorithms. The EAAMO-Bridges Working Group on Algorithms, Law, and Policy focuses on this complex relationship between algorithms and mechanisms on the one hand and law and policy on the other. Topics the group works on include, but are not limited to, free speech, content moderation, antitrust, the use of “black box” machine learning models, data-driven algorithms, and decision-support tools. We study the real-world impacts of these methods through the lens of law and public policy. We also aim to support law and policy practitioners by, for instance, providing digital forensic expert assistance. The group began meeting in Spring 2021, growing out of the earlier Bias, Discrimination, and Fairness working group.
We meet every other week for a presentation from an invited speaker or group member, followed by a group discussion.
Spring 2024 #
Topics of interest: #
Our focus this term is on exploring global perspectives on AI policy and regulation. In each session, we will discuss one concrete policy issue, with the aim of identifying available policy and technical options, understanding the rationales behind different policy choices, and mapping areas of global consensus and divergence. We are currently putting together a schedule of biweekly meetings between January and May 2024. Sessions will be a mix of invited speakers, presentations by group members, and guided readings and discussions.
We warmly welcome presentations by group members on the following (non-exhaustive) list of concrete AI governance issues:
- Can AI-generated content be copyrighted?
  - A Beijing court finds an AI-generated image copyrightable, link.
  - Opinion from Prof. Angela Huyue Zhang on the Beijing ruling: Chinese authorities have become fixated on ensuring that the country can surpass the US to become the global leader in artificial intelligence, and the ruling is short-sighted.
  - In the US, a work created entirely by an AI cannot be protected under copyright. See here and here.
  - A discussion of the Chinese and US approaches.
- Can AI be trained on copyrighted data?
  - From OpenAI’s written evidence to the UK Parliament: “Because copyright today covers virtually every sort of human expression, it would be impossible to train today’s leading AI models without using copyrighted materials.”
  - One of the key disputes is whether training generative AI models on copyrighted work is a “fair use”.
  - The use of copyrighted data for AI training is supported in Japan and Israel, though Japan’s view may be changing.
  - According to the December 2023 text of the EU AI Act: any use of copyright-protected content requires the authorization of the rightholder unless relevant copyright exceptions apply. Any provider placing a general-purpose AI model on the EU market should comply with this obligation, regardless of where the training of the model takes place; no provider should be able to gain a competitive advantage by applying a lower copyright standard than the one provided in the EU.
  - Major court cases have been filed in the US, including New York Times v. OpenAI and visual artists v. Stability AI.
  - A good review of this issue covering Japan, the EU, and the US: link.
  - OpenAI’s response to the New York Times lawsuit.
- The compute threshold approach to regulating foundation models
  - The US White House’s executive order takes a threshold approach to regulating foundation models: many requirements, including reporting red-teaming results and the ownership of model weights, apply only to foundation models above a given threshold of compute (see the sketch after this list).
  - Andrew Ng’s criticism of the compute threshold approach.
- Should foundation models be regulated?
  - Many are against regulating foundation models, including Andrew Ng, Yann LeCun, and LAION.
    - The criticism mainly concerns the negative impact on science, research, and the advancement of AI: openness and accessibility should not be sacrificed.
    - The alternative the critics support is regulation at the application level.
  - Many support regulating foundation models because such models are powerful, general-purpose, and open to dual use.
- Should open-source datasets and models be regulated, and how?
  - Child sexual abuse material found in the LAION dataset, link.
  - Some, e.g., Yann LeCun, argue against regulating open-source data and models.
- What values should AI be aligned with?
  - China’s interim guidelines for generative AI services require alignment with socialist values, link.
  - AI in Europe needs to respect fundamental rights and democracy, link.
  - Taiwan builds its own LLM to counter China’s influence, link.
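To make the compute threshold approach above concrete, here is a minimal sketch of how such a threshold can be operationalized. It assumes the common scaling-laws heuristic that training compute is roughly 6 × parameters × training tokens; the executive order itself sets a reporting threshold of 10^26 integer or floating-point operations. The function names and example figures below are ours, for illustration only.

```python
# A minimal sketch (not the executive order's own method) of the compute
# threshold approach: estimate a training run's total compute and check
# it against a reporting threshold.

REPORTING_THRESHOLD_OPS = 1e26  # threshold named in the US executive order


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the common 6 * N * D heuristic."""
    return 6.0 * n_parameters * n_tokens


def must_report(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimated training compute crosses the reporting threshold."""
    return estimated_training_flops(n_parameters, n_tokens) >= REPORTING_THRESHOLD_OPS


# Example: a hypothetical 70B-parameter model trained on 2T tokens.
flops = estimated_training_flops(70e9, 2e12)
print(f"estimated compute: {flops:.2e} ops; reportable: {must_report(70e9, 2e12)}")
# -> estimated compute: 8.40e+23 ops; reportable: False
```

Note that even this hypothetical 70B-parameter run lands around 8.4 × 10^23 operations, well below the 10^26 threshold, so the executive order targets only frontier-scale training runs; critics such as Andrew Ng question whether model-level compute thresholds are the right regulatory handle at all.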
Invited speakers: #
- Jonas Geiping (ELLIS Institute & MPI-IS Tübingen)
- Prof. Angela Huyue Zhang (The University of Hong Kong)
- Prof. Simon Chesterman (National University of Singapore)
- Prof. Dr. Michèle Finck, LL.M. (University of Tübingen)
- Sayash Kapoor (Princeton University)
Working Group Organizers #
Joachim Baumann | PhD Student | University of Zurich |
Xudong Shen | PhD Student | National University of Singapore |
With a lot of help from Ana-Andreea Stoica, Thomas Gilbert, and Ayse Yasar.
Meeting the ALP WG Speakers #
Gili Vidan | Counterfeit Deterrence | 26 May 2022 |
Thomas Gilbert | Accountability Infrastructure for Social Media Companies | 19 May 2022 |
Jake Goldenfein | Law’s Consumers and Platform Users | 21 Apr 2022 |
Bertan Turhan | Implementation of Affirmative Action Policy in India | 07 Apr 2022 |
Ayse Yasar | Perspectives on the EU Digital Legislative Package | 24 Mar 2022 |
Alex Sotropo | Computational Antitrust | 10 Mar 2022 |
Lauren Chambers | Campaigns to Regulate Facial Recognition at the Municipal and State Level | 24 Feb 2022 |
Shlomi Hod | Pedagogical Interventions in Algorithms, Law, and Policy | 07 Dec 2021 |
Mandy Lau | Online Speech Moderation as Language Policy | 23 Nov 2021 |
Logan Stapleton | Risk Assessment Algorithms in U.S. Child Welfare | 09 Nov 2021 |
Maksim Karliuk | The Role of Proportionality in AI Ethical Frameworks | 24 Oct 2021 |
Andreas Haupt | Recommender System Design and Mechanism Design | 24 Oct 2021 |
Burcu Baykurt | Algorithmic Accountability in the City: A Review of Local Policies in the U.S. | 10 Jun 2021 |
Seth Lazar | Legitimacy, Authority and the Political Value of Explanations | 27 May 2021 |
Fernando Delgado | Common Task Framework, meet Participatory Design | 13 May 2021 |
Yoan Hermstrüwer | Governing with Humans and Machines | 29 Apr 2021 |
Nana Nwachuku | The political influence of social media algorithms | 15 Apr 2021 |
Kandrea Wade | How we’ve taught algorithms to see identity: constructing race and gender in image databases for facial analysis (link) | 01 Apr 2021 |
Ayse Yasar | Antitrust and Big Tech | 18 Mar 2021 |
Thomas Gilbert | Mapping the Political Economy of Reinforcement Learning Systems: The Case of Autonomous Vehicles (link) | 04 Mar 2021 |