Abstract

2023 was the year the world woke up to generative AI, and 2024 is the year policymakers are responding more firmly. Importantly, this policy momentum is building alongside the real-world creation and distribution of synthetic media. Social media platforms, news organizations, dating apps, image generation companies, and more are already navigating a world of AI-generated visuals and sounds that are changing hearts and minds, even as policymakers try to catch up. How, then, can AI governance capture the complexity of the synthetic media landscape? How can it attend to synthetic media's myriad uses, ranging from storytelling and privacy preservation to deception, fraud, and defamation, while taking into account the many stakeholders involved in its development, creation, and distribution? And what might it mean to govern synthetic media in a manner that upholds the truth while bolstering freedom of expression? What follows is the first known collection of diverse examples of synthetic media governance in practice, responding to these questions through Partnership on AI's (PAI) Responsible Practices for Synthetic Media - a voluntary, normative Framework for creating, distributing, and building technology for synthetic media responsibly, launched in February 2023. In this paper, we present a case bank of real-world examples that help operationalize the Framework - highlighting areas where synthetic media governance can be applied, augmented, expanded, and refined in practice. Read together, the cases emphasize distinct elements of AI policymaking and surface seven emergent best practices supporting transparency, safety, expression, and digital dignity online, organized around three themes: consent, disclosure, and differentiation between harmful and creative use cases.

Overview

  • The paper reviews the application of PAI's 'Responsible Practices for Synthetic Media' framework to eleven diverse use cases, highlighting the complexities in synthetic media governance.

  • Key themes of the paper include categorizing stakeholders, emphasizing moderation responsibility, transparency via disclosure, and the necessity of obtaining consent, illustrated through real-world examples.

  • The findings stress the importance of collaboration, standardization, documentation, public education, and adaptive regulatory policies to effectively manage synthetic media technologies.

From Principles to Practices: Lessons Learned from Applying Partnership on AI’s (PAI) Synthetic Media Framework to 11 Use Cases

The paper, authored by Claire R. Leibowicz and Christian H. Cardona of Partnership on AI (PAI), provides a comprehensive review of applying PAI's "Responsible Practices for Synthetic Media" framework to eleven diverse use cases. This effort underscores the complexities and multifaceted challenges of synthetic media governance across various organizations and sectors.

Overview

The emergence of generative AI technologies for synthetic media has significant societal implications, and regulatory frameworks are now beginning to address both the opportunities and the potential harms of these technologies. The paper presents real-world examples of implementing the PAI Framework, launched in February 2023, which seeks to guide the responsible creation and distribution of synthetic media, as well as the development of the technology behind it.

Key Themes and Best Practices

The paper categorizes stakeholders into Builders, Creators, and Distributors, each playing a crucial role in the synthetic media lifecycle. It extracts three central themes from the case studies: Creative vs. Malicious Content, Transparency via Disclosure, and Consent.

1. Creative vs. Malicious Content

  • Moderation Responsibility: Builders and Creators, not just Distributors, should moderate content to mitigate harmful downstream effects. Cases from Synthesia, Respeecher, and Bumble illustrate how early-stage moderation can prevent harmful content from spreading (a toy illustration follows this list).
  • Balancing Expression and Safety: Institutions need to document decision-making processes on gray-area cases to balance creative expression with safety. Specific content examples provided by TikTok and WITNESS offer insights into navigating these tensions.
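To make the idea of early-stage moderation concrete, here is a minimal sketch of the kind of pre-generation gate a Builder might place in front of a generation tool. The policy categories, function names, and verdicts are hypothetical assumptions for illustration, not any company's actual implementation:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    REVIEW = "human review"  # gray-area cases get documented and escalated
    BLOCK = "block"

# Hypothetical policy categories a Builder might screen requests against
BLOCKED = {"non-consensual likeness", "fraud", "defamation"}
GRAY_AREA = {"satire of public figure", "historical reenactment"}

def gate(request_categories: set[str]) -> Verdict:
    """Toy pre-generation check: block clear harms, escalate gray areas."""
    if request_categories & BLOCKED:
        return Verdict.BLOCK
    if request_categories & GRAY_AREA:
        return Verdict.REVIEW  # the decision and its rationale should be logged
    return Verdict.ALLOW

print(gate({"satire of public figure"}))  # -> Verdict.REVIEW
```

The design point the cases make is structural rather than algorithmic: the check runs at creation time, before content ever reaches a Distributor, and gray-area outcomes produce a documented human-review trail rather than a silent allow or block.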

2. Transparency via Disclosure

  • Indirect Disclosures: Consistent, standardized indirect disclosures, or provenance signals, can help Distributors better adjudicate content. Adobe's work on Content Credentials exemplifies this practice (see the sketch after this list).
  • Public Education: For artifact-level interventions like labels to be effective, broader public education about synthetic media is essential. OpenAI and Adobe emphasize the importance of educating the public about AI limitations to ensure accurate interpretation of labels.
  • Creative Content Labeling: Even creative uses of synthetic media should include labels to prevent potential harm, with methods carefully designed to preserve artistic expression. Respeecher and the BBC cases provide practical examples.
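As a rough illustration of how a provenance signal supports adjudication, here is a minimal sketch of a disclosure record and the kind of labeling decision a Distributor might derive from it. This is not the Content Credentials (C2PA) specification; every field and function name here is a simplifying assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProvenanceManifest:
    """Hypothetical indirect-disclosure record attached to a media asset."""
    generator: str                           # tool that produced or edited the asset
    is_synthetic: bool                       # whether AI generation/editing was involved
    consent_obtained: Optional[bool] = None  # likeness consent, if applicable
    signature: Optional[str] = None          # cryptographic signature over the record

def adjudicate(manifest: Optional[ProvenanceManifest]) -> str:
    """Toy policy a Distributor might apply when deciding how to label content."""
    if manifest is None:
        return "no provenance: treat as unverified"
    if manifest.signature is None:
        return "unsigned: provenance claim cannot be verified"
    if manifest.is_synthetic:
        return "label as AI-generated for viewers"
    return "no synthetic-media label required"

# Example: a signed manifest from a hypothetical generation tool
m = ProvenanceManifest(generator="example-image-tool", is_synthetic=True,
                       consent_obtained=True, signature="dummy-signature")
print(adjudicate(m))  # -> label as AI-generated for viewers
```

The value of standardization is visible even in this toy: if every Builder emits the same record shape, a Distributor can apply one adjudication policy across all inbound content instead of per-source heuristics.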

3. Consent

  • Seeking Consent: When a real person's likeness is involved, active consent should be sought, even posthumously or in otherwise difficult scenarios. The D-ID and WITNESS cases highlight processes for obtaining consent from relatives or next of kin.
  • Satirical Content: When creating satirical synthetic content, power dynamics, public vs. private figure status, and the potential for harm should guide consent strategies. Insights from the PAI case study regarding political figures and election interference underline the complexities involved.

Implications for AI Governance

The paper’s findings underscore the importance of a nuanced approach to synthetic media governance, considering both practical and theoretical implications. The PAI framework's application reveals that effective synthetic media governance necessitates collaboration across the entire media creation and distribution pipeline, encompassing both humanistic and technical perspectives.

Future Developments in AI

Looking ahead, several areas warrant further exploration and refinement:

  • Standardization and Interoperability: Development of standardized, interoperable solutions for synthetic media disclosures can enhance detection and transparency.
  • Documentation and Transparency: Comprehensive documentation of decision-making processes, especially in gray areas, can foster greater transparency and support accountability.
  • Public Engagement: Engaging with broader civic institutions to strengthen public education about AI will be pivotal in mitigating misconceptions and improving the efficacy of governance measures.
  • Regulatory Adaptation: Continuous adaptation of policies to keep pace with rapid AI advancements is necessary. Regulatory policies must incorporate insights derived from real-world applications and feedback from diverse stakeholders.

In conclusion, this paper provides a detailed examination of the practical challenges and emergent best practices in synthetic media governance, offering valuable lessons for policymakers, researchers, and practitioners. Continued collaboration, transparency, and flexibility will be critical to advancing responsible AI governance frameworks.
