A new publication by former Stone Center postdoctoral scholar Tina Law and Stone Center Associate Director Leslie McCall discusses why and how social scientists should help shape policy debates about artificial intelligence to center equity and public engagement.
Like it or not, AI is a part of our everyday lives. We engage with it through digital assistants on our phones, predictive searches on Google, spam filters, personalized social media feeds, smart home devices, and security systems. It corrects our mistakes in emails and suggests our answers to online forms. It analyzes our personal information when we apply for jobs, housing, or social service benefits, and captures our images when we enter public and private spaces.
As AI has attracted more attention, sociologists have increasingly engaged with it, both as a subject of research and as a tool for analyzing data. Inequality scholars tend to focus on AI's effects on society, while computational sociologists have developed ways to use AI techniques (such as machine learning and large language models) to power their own studies. A new paper by former Stone Center postdoctoral scholar Tina Law, now an assistant professor of sociology at the University of California, Davis, and Leslie McCall, associate director of the Stone Center and Presidential Professor of Sociology and Political Science at the CUNY Graduate Center, discusses these current forms of engagement and proposes a new way forward for sociologists and other social scientists: engaging and investing in policy-oriented research that can support regulation of AI in ways that promote the public good. The paper is published in Socius: Sociological Research for a Dynamic World, as part of a forthcoming special issue on AI.
In their paper, Law and McCall conduct a policy review and identify two leading approaches to AI governance in the U.S.: a safety-based approach and an equity-based approach. (See the table below for an overview.) The key document of the safety-based approach is the 2023 AI Risk Management Framework developed by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), while that of the equity-based approach is the 2022 Blueprint for an AI Bill of Rights, developed by the White House Office of Science and Technology Policy (OSTP). The first is aimed at risk management, both at the national level (e.g., national security) and at the individual level (e.g., the protection of personal data). This approach, which focuses on the creation of "trustworthy" AI systems, has received voluntary commitments of support from many of the largest tech firms, including Google, Meta, Microsoft, and OpenAI. The second focuses on the protection of civil rights and civil liberties (including privacy) and the need for consistent, fair, and impartial treatment of individuals, taking into account "the status of individuals who belong to underserved communities that have been denied such treatment."
Overview of Emerging Approaches to AI Governance
| Dimension | Safety-Based AI Governance | Equity-Based AI Governance |
| --- | --- | --- |
| Core goal | Create safe AI systems through efficient management of risks | Create equitable AI systems through principled protection of rights |
| Primary targets | Organizations developing AI | Organizations across sectors |
| Key terms | **AI system:** "an engineered or machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, or decisions influencing real or virtual environments" (NIST 2023:1)<br>**AI actors:** "those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AI" (NIST 2023:2)<br>**Safety:** AI systems are not safe if they "under defined conditions, lead to a state in which human life and health, property, or the environment is endangered" (NIST 2023:14)<br>**Risk:** "the composite measure of an event's probability of occurring and the magnitude or degree of the consequences of the corresponding event" (NIST 2023:4) | **Automated system:** "any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities" (White House OSTP 2022:10)<br>**Equity:** "the consistent and systematic fair, just, and impartial treatment of all individuals" that "must take into account the status of individuals who belong to underserved communities that have been denied such treatment" (White House OSTP 2022:10)<br>**Underserved communities:** "communities that have been systematically denied a full opportunity to participate in aspects of economic, social, and civic life" (White House OSTP 2022:11)<br>**Rights:** the set of "civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts" (White House OSTP 2022:11) |
| Main policy developments | • NIST Artificial Intelligence Risk Management Framework (2023)<br>• Biden Administration's Company Commitments on AI (2023)<br>• Executive Order No. 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023) | • White House OSTP's Blueprint for an AI Bill of Rights (2022)<br>• Executive Order No. 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (2023) |

Note: AI = artificial intelligence.
The safety-based approach, which has emerged as the dominant framework in AI policymaking, is inadequate, Law and McCall argue. “Although human safety and national security are important concerns, the safety-based approach focuses on risks while underappreciating the implications of AI for equity in American society, which, as the Blueprint recognizes, are quite profound,” they write. “Moreover, the market-based orientation of this approach defers to tech firms and does little to check their growing power.” It is rooted, in other words, in the notion that AI firms can and should self-regulate.
The authors draw on research on AI and society, as well as work on democratic innovation, to propose an alternative approach that centers equity and public engagement and that reframes AI as "an issue of equity that concerns the public at large (rather than merely an economic and technocratic issue of private interest)." This effort is needed, they argue, for two key reasons. First, powerful tech corporations pursuing their own interests currently hold too much sway over AI policymaking. Second, current AI policy debates are dominated by computer scientists, legal scholars, and economists, who do not represent the full range of experts engaged in AI research, though sociologists did play a significant role in steering certain policies, such as the Blueprint for an AI Bill of Rights, toward equity.
This approach involves two main steps. The first is to examine, theoretically and empirically, how tech companies and their private interests shape AI policy debates. The second is to organize existing and new AI research around a coherent reframing of AI as a matter of public interest and equity; Law and McCall identify four key questions to jumpstart this effort. The questions focus on how AI interacts with social structure, how different groups define and advocate for inclusion and equity in AI, the differences between public information and proprietary data, and how AI governance can affirm and expand democratic engagement and processes. "We intend for these questions to motivate new areas of sociological research on AI as well as to advance core areas of research in the discipline on inequality, politics, and knowledge production," the authors write.
AI has evolved to a point where sociologists should not only analyze its pervasiveness and impacts on society but also help guide the way toward equitable policymaking, the authors conclude. "The ultimate aim of this [paper] is to shift the current AI policy landscape from one that is focused on mitigating a narrow set of risks and highly deferential to the private interests of tech firms, to one that is centered on advancing equity and in which every American has a vested stake," they write. "Sociologists…need to go beyond descriptive and explanatory studies of AI's impact on society and beyond development of novel methodological applications of AI, as important and necessary as this work is, to engage in policy-oriented discussions of a future in which AI both avoids harmful impacts on society and serves the public in an expansive way."
Read the Paper:
Artificial Intelligence Policymaking: An Agenda for Sociological Research