Abstract

Public sector agencies are rapidly deploying AI systems to augment or automate critical decisions in real-world contexts like child welfare, criminal justice, and public health. A growing body of work documents how these AI systems often fail to improve services in practice. These failures can often be traced to decisions made during the early stages of AI ideation and design, such as problem formulation. However, today, we lack systematic processes to support effective, early-stage decision-making about whether and under what conditions to move forward with a proposed AI project. To understand how to scaffold such processes in real-world settings, we worked with public sector agency leaders, AI developers, frontline workers, and community advocates across four public sector agencies and three community advocacy groups in the United States. Through an iterative co-design process, we created the Situate AI Guidebook: a structured process centered around a set of deliberation questions to scaffold conversations around (1) goals and intended use of a proposed AI system, (2) societal and legal considerations, (3) data and modeling constraints, and (4) organizational governance factors. We discuss how the guidebook's design is informed by participants' challenges, needs, and desires for improved deliberation processes. We further elaborate on implications for designing responsible AI toolkits in collaboration with public sector agency stakeholders and opportunities for future work to expand upon the guidebook. This design approach can be more broadly adopted to support the co-creation of responsible AI toolkits that scaffold key decision-making processes surrounding the use of AI in the public sector and beyond.

Figure: Overview of stages in the Situate AI Guidebook for iterative deliberations in public agencies.

Overview

  • The paper introduces the 'Situate AI Guidebook', a collaborative toolkit for early AI project evaluation in public sector agencies, emphasizing multi-stakeholder deliberation.

  • Developed through iterative co-design sessions, the guidebook contains 132 deliberation questions organized into four categories: goals and intended use, societal and legal considerations, data and modeling constraints, and organizational governance.

  • The guidebook is built around two design principles, reflexive deliberation and practicality, aiming to deepen discussions while integrating into diverse organizational processes.

  • It highlights the need to address organizational power dynamics to enable inclusive participation and anticipates broader application across sectors, advocating for ethical AI development.

Co-Designing a Deliberation Guidebook for Early AI Project Evaluation in Public Sector Agencies

Engaging Diverse Stakeholders in Structuring AI Deliberations

Public sector agencies increasingly deploy AI systems to augment critical societal functions, from child welfare to public health. However, the rapid adoption of these technologies often overlooks rigorous deliberation on their societal impact, leading to failures in real-world applications. Recognizing the gap in systematic processes for early-stage AI project evaluation, this paper introduces the Situate AI Guidebook, a novel toolkit co-designed with stakeholders across public sector agencies and community advocacy groups to support multi-stakeholder deliberation regarding whether and under what conditions to proceed with AI project proposals.

Framework and Methodology

The Situate AI Guidebook emerged from iterative co-design sessions with 32 stakeholders, including agency leaders, AI developers, frontline workers, and community advocates across the United States. This collaborative process yielded 132 deliberation questions organized into four categories: goals and intended use, societal and legal considerations, data and modeling constraints, and organizational governance factors. These questions serve as a scaffold for discussions around the appropriateness of AI tools in public sector contexts, emphasizing the importance of early and inclusive deliberation.
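
The four category names below come from the paper; the question text, class names, and overall structure are illustrative assumptions rather than material from the guidebook itself. A minimal Python sketch of how an agency team might encode the guidebook's question categories in a lightweight tool for recording deliberation notes:

```python
from dataclasses import dataclass, field


@dataclass
class DeliberationQuestion:
    """A single open-ended question, with space to record the group's discussion."""
    text: str
    notes: str = ""


@dataclass
class GuidebookSection:
    """One of the guidebook's four question categories."""
    name: str
    questions: list[DeliberationQuestion] = field(default_factory=list)


# Hypothetical placeholder questions; the guidebook's actual 132 questions
# are not reproduced here.
guidebook = [
    GuidebookSection("Goals and intended use", [
        DeliberationQuestion("What problem is the proposed AI system intended to address?"),
    ]),
    GuidebookSection("Societal and legal considerations", [
        DeliberationQuestion("Who could be harmed if the system makes errors, and how?"),
    ]),
    GuidebookSection("Data and modeling constraints", [
        DeliberationQuestion("Is the available data adequate and appropriate for this use?"),
    ]),
    GuidebookSection("Organizational governance factors", [
        DeliberationQuestion("Who is accountable for monitoring the system after deployment?"),
    ]),
]

# Walk through each section during a deliberation session.
for section in guidebook:
    print(f"== {section.name} ==")
    for q in section.questions:
        print(f"- {q.text}")
```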

Key Insights and Guidebook Design Principles

The guidebook is shaped by two guiding design principles: promoting reflexive deliberation and ensuring the practicality of the process. It aims to facilitate deep discussions among stakeholders, challenging pre-existing assumptions and enabling a shared understanding of AI projects' implications. This approach not only respects the diverse expertise of all participants but also aligns with the complex decision-making environment of public sector agencies. Moreover, the guidebook's flexible structure allows for integration into existing organizational processes, acknowledging the varied practices across agencies.

Addressing Organizational and Social Dynamics

An important consideration in the guidebook's application is the need to account for organizational power dynamics and foster inclusive participation. Participants expressed diverse preferences for engagement, reflecting a broader challenge in ensuring meaningful dialogue among stakeholders with varying degrees of power and expertise. Future iterations of the guidebook must explore mechanisms to empower all participants, potentially through role-based deliberation sessions and policy interventions that incentivize responsible toolkit use.

Broadening Participation and Future Directions

While initially developed for public sector agencies, the Situate AI Guidebook holds potential for adaptation across other high-stakes AI deployment contexts. Its emphasis on reflexivity, inclusivity, and adaptability to organizational contexts makes it a valuable resource beyond the public sector. Moving forward, efforts should focus on expanding the guidebook’s framework to engage community members directly and exploring its applicability in private sector settings with similarly high stakes in AI deployment.

Conclusion

The Situate AI Guidebook represents a significant step toward embedding ethical considerations and stakeholder perspectives in the early stages of AI project planning within public sector agencies. By fostering structured, inclusive deliberations, it addresses a critical gap in responsible AI development practices. Further research and field testing will be crucial in refining the guidebook's effectiveness and exploring its potential beyond the public sector, ultimately contributing to more socially aware and equitable AI systems.
