Organic or Diffused: Can We Distinguish Human Art from AI-generated Images? (2402.03214v3)
Abstract: The advent of generative AI images has disrupted the art world. Distinguishing AI-generated images from human art is a challenging problem whose impact grows over time. A failure to address this problem allows bad actors to defraud individuals who pay a premium for human art, and companies whose stated policies forbid AI imagery. It is also critical for content owners seeking to establish copyright, and for model trainers who curate training data to avoid potential model collapse. There are several approaches to distinguishing human art from AI images, including classifiers trained by supervised learning, research tools targeting diffusion models, and identification by professional artists using their knowledge of artistic techniques. In this paper, we seek to understand how well these approaches perform against today's modern generative models in both benign and adversarial settings. We curate real human art across 7 styles, generate matching images from 5 generative models, and apply 8 detectors (5 automated detectors and 3 human groups: 180 crowdworkers, 4000+ professional artists, and 13 expert artists experienced at detecting AI). Both Hive and expert artists do very well, but make mistakes in different ways (Hive is weaker against adversarial perturbations, while expert artists produce more false positives). We believe these weaknesses will remain as models continue to evolve, and use our data to demonstrate why a combined team of human and automated detectors provides the best combination of accuracy and robustness.
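The abstract's closing claim (a combined team of human and automated detectors) can be illustrated with a minimal decision rule. This is an illustrative sketch, not the paper's actual method: the function name, thresholds, and escalation logic are all hypothetical assumptions chosen to show how the two signals' complementary weaknesses might be traded off.

```python
# Hypothetical sketch of combining an automated detector's score with
# human-rater votes. Names and thresholds are assumptions for illustration.

def combined_verdict(detector_score, human_votes, threshold=0.5):
    """Classify an image as "ai", "human", or "uncertain".

    detector_score: float in [0, 1], the automated classifier's AI probability
                    (brittle under adversarial perturbations).
    human_votes:    list of booleans, True = rater judged it AI-generated
                    (robust to perturbations, but more false positives).
    """
    human_fraction = sum(human_votes) / len(human_votes) if human_votes else 0.0
    # When both signals agree, accept the shared verdict with high confidence.
    if detector_score >= threshold and human_fraction >= threshold:
        return "ai"
    if detector_score < threshold and human_fraction < threshold:
        return "human"
    # Disagreement exposes one side's weakness; escalate instead of guessing.
    return "uncertain"

print(combined_verdict(0.9, [True, True, False]))  # both confident -> "ai"
print(combined_verdict(0.2, [True, True, True]))   # conflict -> "uncertain"
```

Requiring agreement before committing to a verdict is one simple way a human–machine team can hedge the automated detector's adversarial brittleness against the human raters' higher false-positive rate.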
- Adobe. 2023. Adobe Firefly. https://www.adobe.com/products/firefly.html.
- Magnific AI. 2023. Magnific. https://magnific.ai.
- Sina Alemohammad et al. 2023. Self-Consuming Generative Models Go MAD. arXiv preprint arXiv:2307.01850 (2023).
- ANDY BAIO. 2022. Invasive Diffusion: How one unwilling illustrator found herself turned into an AI model.
- Quentin Bammey. 2024. Synthbuster: Towards Detection of Diffusion Model Generated Images. IEEE Open Journal of Signal Processing 5 (2024), 1–9.
- James Betker et al. 2023. Improving Image Generation with Better Captions. OpenAI (2023).
- Detecting Generated Images by Real Images Only. arXiv preprint arXiv:2311.00962 (2023).
- Jordan J. Bird and Ahmad Lotfi. 2023. CIFAKE: Image Classification and Explainable Identification of AI-Generated Synthetic Images. arXiv preprint arXiv:2303.14126 (2023).
- Isabelle Bousquette. 2023. Companies Increasingly Fear Backlash Over Their AI Work. WSJ.
- G. W. Braudaway. 1997. Protecting publicly-available images with an invisible image watermark. In Proc. of ICIP. IEEE.
- Sergi D. Bray, Shane D. Johnson, and Bennett Kleinberg. 2023. Testing human ability to detect ‘deepfake’ images of human faces. Journal of Cybersecurity (2023), 1–18.
- Martin Briesch et al. 2023. Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop. arXiv preprint arXiv:2311.16822 (2023).
- Cara. [n. d.]. https://cara.app/.
- Evidence-Based Survey Design: The Use of a Midpoint on the Likert Scale. Performance Improvement (2017), 15–23.
- Civitai. 2022. What the heck is Civitai? https://civitai.com/content/guides/what-is-civitai.
- Linda Codega. 2023a. Dungeons & Dragons Updates Bigby to Replace AI-Enhanced Images. Gizmodo.
- Linda Codega. 2023b. New Dungeons & Dragons Sourcebook Features AI Generated Art. Gizmodo.
- Riccardo Corvi et al. 2023. On the Detection of Synthetic Images Generated by Diffusion Models. In Proc. of ICASSP.
- CourtListener. 2024. Andersen v. Stability AI Ltd. (3:23-cv-00201). https://www.courtlistener.com/docket/66732129/andersen-v-stability-ai-ltd/.
- Davide Cozzolino et al. 2023. Raising the Bar of AI-generated Image Detection with CLIP. arXiv preprint arXiv:2312.00195 (2023).
- Ambra Demontis et al. 2019. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. In Proc. of USENIX Security. 321–338.
- Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion Models Beat GANs on Image Synthesis. Proc. of NeurIPS.
- Maggie H. Dupre. 2023. Sports Illustrated Publisher Fires CEO After AI Scandal. Futurism.
- Salako Emmanuel. 2023. AI Tools for Combating Deepfakes. https://ijnet.org/en/story/ai-tools-combating-deepfakes.
- Carl Franzen. 2023. Midjourney V6 is here with in-image text and completely overhauled prompting. https://venturebeat.com/ai/midjourney-v6-is-here-with-in-image-text-and-completely-overhauled-prompting/.
- Ethan Gach. 2023. Amazon’s First Official Fallout TV Show Artwork Is an AI-Looking Eyesore. Kotaku.com.
- R-LPIPS: An adversarially robust perceptual similarity metric. arXiv preprint arXiv:2307.15157 (2023).
- Paul Glynn. 2023. Sony World Photography Award 2023: Winner Refuses Award After Revealing AI Creation. https://www.bbc.com/news/entertainment-arts-65296763.
- Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).
- Effects of JPEG Compression on Vision Transformer Image Classification for Encryption-then-Compression Images. Sensors ([n. d.]).
- Hive. 2023. AI-Generated Content Classification. https://thehive.ai/apis/ai-generated-content-classification.
- Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising Diffusion Probabilistic Models. Proc. of NeurIPS.
- Karen Ho. 2024. Database of 16,000 Artists Used to Train Midjourney AI, Including 6-Year-Old Child, Garners Criticism. https://www.artnews.com/art-news/news/midjourney-ai-artists-database-1234691955/.
- D4: Detection of Adversarial Diffusion Deepfakes Using Disjoint Ensembles. In Proc. of WACV. IEEE.
- Illuminarty. 2023. Is an AI Behind Your Image? https://illuminarty.ai/.
- Heon Jae Jeong and Wui Chiang Lee. 2016. The level of collapse we are allowed: comparison of different response scales in safety attitudes questionnaire. Biometrics Biostatistics International Journal (2016), 128–134.
- Joseph Saveri Law Firm LLP. 2023. Class Action Filed Against Stability AI, Midjourney, and DeviantArt for DMCA Violations, Right of Publicity Violations, Unlawful Competition, Breach of TOS. https://cybernews.com/news/artists-unite-in-legal-battle-against-ai/.
- Tero Karras et al. 2020. Analyzing and Improving the Image Quality of StyleGAN. arXiv preprint arXiv:1912.04958 (2020).
- Alex Krizhevsky and Geoffrey Hinton. 2009. Learning multiple layers of features from tiny images. Technical Report.
- SAND Lab. 2023. Web Glaze. https://glaze.cs.uchicago.edu/webglaze.html.
- Effects of JPEG compression on accuracy of image classification. Proc. of ACRS (1999).
- Effects of JPEG compression on image classification. Proc. of IJRS (2003).
- Junnan Li et al. 2023. BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597 (2023).
- Seeing is not always believing: Benchmarking Human and Model Perception of AI-Generated Images. arXiv preprint arXiv:2304.13023 (2023).
- Aleksander Madry et al. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017).
- Emanuel Maiberg. 2023. AI Images Detectors Are Being Used to Discredit the Real Horrors of War. 404Media.
- MaxonVFX. 2024. We extend our apologies to the community. https://twitter.com/MaxonVFX/status/1748826148858208286.
- Midjourney. 2023. Midjourney. https://www.midjourney.com/.
- Travis Northup. 2023. Wizards of the Coast Repeats Anti-AI Art Stance After Player’s Handbook Controversy. IGN.com.
- Jie Yee Ong. 2023. Scooby-Doo: Daphne Voice Actor Fell Victim To $1,000 AI Art Scam. The Chainsaw.
- OpenAI. 2022. DALL·E 2. https://openai.com/dall-e-2.
- OpenAI. 2023. DALL·E 3. https://openai.com/dall-e-3.
- Optic. 2023. AI or Not. https://www.aiornot.com.
- Kyle Orland. 2024. Magic: The Gathering Maker Admits it Used AI-generated Art Despite Standing Ban. https://arstechnica.com/ai/2024/01/magic-the-gathering-maker-admits-it-used-ai-generated-art-despite-standing-ban/.
- Susannah Page-Katz. 2023. Introducing Our AI Policy. Kickstarter.com.
- Dustin Podell et al. 2023. SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis. arXiv preprint arXiv:2307.01952 (2023).
- Alec Radford et al. 2021. Learning transferable visual models from natural language supervision. In Proc. of ICML.
- Aditya Ramesh et al. 2022. Hierarchical text-conditional image generation with clip latents. arXiv preprint arXiv:2204.06125 (2022).
- Jonas Ricker et al. 2023. Towards the Detection of Diffusion Model Deepfakes. arXiv preprint arXiv:2210.14571 (2023).
- Robin Rombach et al. 2022. High-resolution image synthesis with latent diffusion models. In Proc. of CVPR. 10684–10695.
- Kevin Roose. 2022. An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy. https://www.nytimes.com/2022/09/02/technology/ai-artificial-intelligence-artists.html.
- Mia Sato. 2023. How AI art killed an indie book cover contest. The Verge.
- Christoph Schuhmann et al. 2022. LAION-5B: An open large-scale dataset for training next generation image-text models. arXiv preprint arXiv:2210.08402 (2022).
- Zeyang Sha et al. 2023. DE-FAKE: Detection and Attribution of Fake Images Generated by Text-to-Image Generation Models. arXiv preprint arXiv:2210.06998 (2023).
- Sarah Shaffi. 2023. Bloomsbury admits using AI-generated artwork for Sarah J Maas novel. The Guardian.
- Shawn Shan et al. 2023. Glaze: Protecting artists from style mimicry by text-to-image models. In Proc. of USENIX Security.
- ”Eons Show”. 2024. Eons Show apologizes for violating its own AI policy. Twitter. https://x.com/EonsShow/status/1751327424556544451.
- Ilia Shumailov et al. 2023. The Curse of Recursion: Training on Generated Data Makes Models Forget. arXiv preprint arXiv:2305.17493 (2023).
- Zachary Small. 2023. As Fight Over A.I. Artwork Unfolds, Judge Rejects Copyright Claim. NY Times.
- Robustness and Generalizability of Deepfake Detection: A Study with Diffusion Models. arXiv preprint arXiv:2309.02218 (2023).
- JPEG Image Compression. National High Magnetic Field Laboratory.
- Stability AI. 2022. Stable Diffusion Public Release. https://stability.ai/blog/stable-diffusion-public-release.
- StabilityAI. 2022a. Stable Diffusion v1-4 Model Card. https://huggingface.co/CompVis/stable-diffusion-v1-4.
- StabilityAI. 2022b. Stable Diffusion v1-5 Model Card. https://huggingface.co/runwayml/stable-diffusion-v1-5.
- ”Portal Staff”. 2023. League of Legends AI-Generated LATAM Anniversary Video Gets Taken Down. ZLeague The Portal.
- Chandra Steele. 2023. How to Detect AI-Created Images. https://www.pcmag.com/how-to/how-to-detect-ai-created-images.
- Stuart A. Thompson and Tiffany Hsu. 2023. How Easy Is It to Fool A.I.-Detection Tools? https://www.nytimes.com/interactive/2023/06/28/technology/ai-detection-midjourney-stable-diffusion-dalle.html.
- Sheng-Yu Wang et al. 2020. CNN-generated images are surprisingly easy to spot… for now. arXiv preprint arXiv:1912.11035 (2020).
- Benchmarking Deepart Detection. arXiv preprint arXiv:2302.14475 (2023).
- Scaling Language-Image Pre-training via Masking. In Proc. of ICCV.
- Yuxin Wen et al. 2023. Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust. In Proc. of NeurIPS.
- Cam Wilson. 2024. AI is producing ‘fake’ Indigenous art trained on real artists’ work without permission. Crikey.com.au.
- Robust Image Watermarking using Stable Diffusion. arXiv preprint arXiv:2401.04247 (2024).
- GenImage: A Million-Scale Benchmark for Detecting AI-Generated Image. arXiv preprint arXiv:2306.08571 (2023).