
Image Generation

Diffusion models can generate realistic images

These systems are trained on large image collections, which lets them mimic many common photography styles.

Those images often have flaws or defects

These models have no understanding of anatomy, physics, or three-dimensional space. They frequently get relative sizes, structure, and the mechanics of objects wrong. Some of these flaws can be prevented by the AI vendor, but not all.

Integration into existing software is vital

The most effective way to mitigate the limitations of a diffusion model and leverage its strengths is to use it as an integrated tool in larger image-manipulation software. It can be extremely capable at transforming or modifying existing images.
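As an illustrative sketch of what that integration can look like, the snippet below uses the open-source diffusers library to run an image-to-image pipeline: it takes an existing picture plus a text prompt and returns a modified version of that picture. The library choice, the model checkpoint, the filenames, and the parameter values are all assumptions made for the example, not a recommendation of any particular vendor or product.

```python
# Minimal image-to-image sketch using the open-source "diffusers" library.
# The checkpoint name, file paths, and parameter values are illustrative assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a publicly released Stable Diffusion checkpoint (example identifier).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The existing image you want to modify, not generate from scratch.
init_image = Image.open("product-photo.png").convert("RGB")

result = pipe(
    prompt="the same product on a plain white studio background",
    image=init_image,
    strength=0.4,        # low strength: stay close to the original image
    guidance_scale=7.5,  # how strongly the prompt steers the result
).images[0]

result.save("product-photo-edited.png")
```

The relevant design choice here is the low strength value, which keeps the output close to the source image: the model does constrained transformation inside a workflow you control, rather than producing finished artwork on its own.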

By their nature, the images are mediocre

Due to the way these systems are designed, their standalone output tends toward the average: rarely terrible, rarely great. It isn’t unusual to have to generate many images to get even one that’s acceptable, which can get expensive and time-consuming. Even the images that are passable often need substantial post-processing before they are useful.

They are extremely effective tools for misinformation and abuse

Setting up a system to create “deepfakes” of a real person is relatively inexpensive and doesn’t require many images of the victim. This capability has been used to abuse children, drive people out of their jobs, frame victims of crimes, and spread misinformation on social media.

AI art generally doesn’t get copyright protection

As a result, all AI art should be treated as non-exclusive, unless it is a direct conversion of an existing work or is directly integrated into a larger non-AI work.

Many AI vendors seem to be undermining artists and photographers

AI vendors train their systems on the work of artists and photographers without asking permission. They include affordances in their software specifically for copying known artists, again without permission. These tools and systems seem to be positioned as direct replacements for artists and photographers.

AI art is seen by many creative communities as an attack

Using AI art in your business will be seen as a hostile gesture by many, if not most, in creative communities.

References


These cards were made by Baldur Bjarnason.

They are based on the research done for the book The Intelligence Illusion: a practical guide to the business risks of Generative AI.

Baio, Andy. “AI Data Laundering: How Academic and Nonprofit Researchers Shield Tech Companies from Accountability.” Waxy, September 2022. https://waxy.org/2022/09/ai-data-laundering-how-academic-and-nonprofit-researchers-shield-tech-companies-from-accountability/.
———. “Exploring 12 Million of the 2.3 Billion Images Used to Train Stable Diffusion’s Image Generator.” Waxy, August 2022. https://waxy.org/2022/08/exploring-12-million-of-the-images-used-to-train-stable-diffusions-image-generator/.
———. “Invasive Diffusion: How One Unwilling Illustrator Found Herself Turned into an AI Model.” Waxy, November 2022. https://waxy.org/2022/11/invasive-diffusion-how-one-unwilling-illustrator-found-herself-turned-into-an-ai-model/.
———. “Online Art Communities Begin Banning AI-Generated Images,” September 2022. https://waxy.org/2022/09/online-art-communities-begin-banning-ai-generated-images/.
———. “Opening the Pandora’s Box of AI Art.” Waxy, August 2022. https://waxy.org/2022/08/opening-the-pandoras-box-of-ai-art/.
Bastian, Matthias. “Stable Diffusion V2 Removes NSFW Images and Causes Protests.” THE DECODER, November 2022. https://the-decoder.com/stable-diffusion-v2-removes-nude-images-and-causes-protests/.
Bianchi, Federico, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. “Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale.” arXiv, November 2022. https://doi.org/10.48550/arXiv.2211.03759.
Birhane, Abeba, Vinay Uday Prabhu, and Emmanuel Kahembwe. “Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes.” arXiv, October 2021. https://doi.org/10.48550/arXiv.2110.01963.
Bridle, James. “The Stupidity of AI.” The Guardian, March 2023. https://www.theguardian.com/technology/2023/mar/16/the-stupidity-of-ai-artificial-intelligence-dall-e-chatgpt.
Butterick, Matthew, and Joseph Saveri Law Firm. “Stable Diffusion Litigation · Joseph Saveri Law Firm & Matthew Butterick.” Accessed February 21, 2023. https://stablediffusionlitigation.com/.
Carlini, Nicholas. “Poisoning the Unlabeled Dataset of Semi-Supervised Learning.” In 30th USENIX Security Symposium (USENIX Security 21), 1577–92, 2021. https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-poisoning.
Carlini, Nicholas, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. “Extracting Training Data from Diffusion Models.” arXiv, January 2023. https://doi.org/10.48550/arXiv.2301.13188.
Carlini, Nicholas, Matthew Jagielski, Christopher A. Choquette-Choo, Daniel Paleka, Will Pearce, Hyrum Anderson, Andreas Terzis, Kurt Thomas, and Florian Tramèr. “Poisoning Web-Scale Training Datasets Is Practical.” arXiv, February 2023. https://doi.org/10.48550/arXiv.2302.10149.
“Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale.” Federal Trade Commission, March 2023. https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.
“Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence.” Federal Register, March 2023. https://www.federalregister.gov/documents/2023/03/16/2023-05321/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence.
Crypto, CRYPTOINSIGHT PRO. “Stable Diffusion 2.0 Has ‘Forgotten’ How to Generate NSFW Content.” Substack newsletter. CRYPTOINSIGHT.PRO Crypto NFT DeFi, November 2022. https://cryptoinsightpro.substack.com/p/technologies2.
Edwards, Benj. “Artist Finds Private Medical Record Photos in Popular AI Training Data Set.” Ars Technica, September 2022. https://arstechnica.com/information-technology/2022/09/artist-finds-private-medical-record-photos-in-popular-ai-training-data-set/.
———. “Netflix Stirs Fears by Using AI-Assisted Background Art in Short Anime Film.” Ars Technica, February 2023. https://arstechnica.com/information-technology/2023/02/netflix-taps-ai-image-synthesis-for-background-art-in-the-dog-and-the-boy/.
———. “Stability AI Plans to Let Artists Opt Out of Stable Diffusion 3 Image Training.” Ars Technica, December 2022. https://arstechnica.com/information-technology/2022/12/stability-ai-plans-to-let-artists-opt-out-of-stable-diffusion-3-image-training/.
———. “Thanks to AI, It’s Probably Time to Take Your Photos Off the Internet.” Ars Technica, December 2022. https://arstechnica.com/information-technology/2022/12/thanks-to-ai-its-probably-time-to-take-your-photos-off-the-internet/.
Foong, Ng Wai. “Stable Diffusion 2: The Good, The Bad and The Ugly.” Medium, December 2022. https://towardsdatascience.com/stable-diffusion-2-the-good-the-bad-and-the-ugly-bd44bc7a1333.
“Getty Images v. Stability AI - Complaint.” Copyright Lately. Accessed February 21, 2023. https://copyrightlately.com/pdfviewer/getty-images-v-stability-ai-complaint/.
Gilbert, David. “High Schoolers Made a Racist Deepfake of a Principal Threatening Black Students.” Vice, March 2023. https://www.vice.com/en/article/7kxzk9/school-principal-deepfake-racist-video.
Goldstein, Josh A., and Renée DiResta. “Research Note: This Salesperson Does Not Exist: How Tactics from Political Influence Operations on Social Media Are Deployed for Commercial Lead Generation.” Harvard Kennedy School Misinformation Review, September 2022. https://doi.org/10.37016/mr-2020-104.
Gupta, Abhishek. “Unstable Diffusion: Ethical Challenges and Some Ways Forward.” Montreal AI Ethics Institute, November 2022. https://montrealethics.ai/unstable-diffusion-ethical-challenges-and-some-ways-forward/.
“‘Heartbreaking’: Scam Artists Extorting Children by Putting Their Faces on Explicit Images.” News Channel 5 Nashville (WTVF), December 2022. https://www.newschannel5.com/news/heartbreaking-scam-artists-extorting-children-by-putting-their-faces-on-explicit-images.
Heikkilä, Melissa. “AI Models Spit Out Photos of Real People and Copyrighted Images.” MIT Technology Review, 2023. https://www.technologyreview.com/2023/02/03/1067786/ai-models-spit-out-photos-of-real-people-and-copyrighted-images/.
———. “The Viral AI Avatar App Lensa Undressed Me—Without My Consent.” MIT Technology Review, December 2022. https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/.
———. “Why You Shouldn’t Trust AI Search Engines.” MIT Technology Review, 2023. https://www.technologyreview.com/2023/02/14/1068498/why-you-shouldnt-trust-ai-search-engines/.
Hunter, Tatum. “AI Porn Is Easy to Make Now. For Women, That’s a Nightmare.” Washington Post, February 2023. https://www.washingtonpost.com/technology/2023/02/13/ai-porn-deepfakes-women-consent/.
Kapoor, Sayash, and Arvind Narayanan. “Artists Can Now Opt Out of Generative AI. It’s Not Enough.” Substack newsletter. AI Snake Oil, March 2023. https://aisnakeoil.substack.com/p/artists-can-now-opt-out-of-generative.
Lee, Timothy B. “Copyright Lawsuits Pose a Serious Threat to Generative AI,” March 2023. https://www.understandingai.org/p/copyright-lawsuits-pose-a-serious.
Recker, Jane. “U.S. Copyright Office Rules A.I. Art Can’t Be Copyrighted.” Smithsonian Magazine, 2023. https://www.smithsonianmag.com/smart-news/us-copyright-office-rules-ai-art-cant-be-copyrighted-180979808/.
Maluleke, Vongani H., Neerja Thakkar, Tim Brooks, Ethan Weber, Trevor Darrell, Alexei A. Efros, Angjoo Kanazawa, and Devin Guillory. “Studying Bias in GANs Through the Lens of Race.” arXiv, September 2022. https://doi.org/10.48550/arXiv.2209.02836.
Monge, Jim Clyde. “Stable Diffusion 2.1 Released – NSFW Image Generation Is Back.” MLearning.ai, December 2022. https://medium.com/mlearning-ai/stable-diffusion-2-1-released-nsfw-image-generation-is-back-8bcc5c069d60.
Morales, Christina. “Pennsylvania Woman Accused of Using Deepfake Technology to Harass Cheerleaders.” The New York Times, March 2021. https://www.nytimes.com/2021/03/14/us/raffaela-spone-victory-vipers-deepfake.html.
Ortiz, Karla. “‘The Images Below Aren’t @McCurryStudios Afghan Girl.’” Twitter, November 2022. https://twitter.com/kortizart/status/1588915427018559490.
Plunkett, Luke. “AI Creating ‘Art’ Is An Ethical And Copyright Nightmare.” Kotaku, August 2022. https://kotaku.com/ai-art-dall-e-midjourney-stable-diffusion-copyright-1849388060.
Ruiz, Nataniel, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. “DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation.” arXiv, August 2022. https://doi.org/10.48550/arXiv.2208.12242.
Scalzi, John. “An Update On My Thoughts on AI-Generated Art.” Whatever, December 2022. https://whatever.scalzi.com/2022/12/10/an-update-on-my-thoughts-on-ai-generated-art/.
Shan, Shawn, Jenna Cryan, Emily Wenger, Haitao Zheng, Rana Hanocka, and Ben Y. Zhao. “GLAZE: Protecting Artists from Style Mimicry by Text-to-Image Models.” arXiv, February 2023. https://doi.org/10.48550/arXiv.2302.04222.
Sharp, Sarah Rose. “He’s Bigger Than Picasso on AI Platforms, and He Hates It.” Hyperallergic, October 2022. http://hyperallergic.com/766241/hes-bigger-than-picasso-on-ai-platforms-and-he-hates-it/.
Somepalli, Gowthami, Vasu Singla, Micah Goldblum, Jonas Geiping, and Tom Goldstein. “Diffusion Art or Digital Forgery? Investigating Data Replication in Diffusion Models.” arXiv, December 2022. https://doi.org/10.48550/arXiv.2212.03860.
Stéphane Deschamps. “Why Understanding Facts and Context Is Useful for Image Recognition.” Piaille, March 2023. https://piaille.fr/@notabene/110014804392234758.
Vincent, James. “Anyone Can Use This AI Art Generator — That’s the Risk.” The Verge, September 2022. https://www.theverge.com/2022/9/15/23340673/ai-image-generation-stable-diffusion-explained-ethics-copyright-data.
———. “Getty Images Is Suing the Creators of AI Art Tool Stable Diffusion for Scraping Its Content.” The Verge, January 2023. https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit.
“Who Owns the Copyright in AI-Generated Art?” Murgitroyd, European Patent & Trade Mark Attorneys, October 2022. https://www.murgitroyd.com/en-us/blog/who-owns-the-copyright-in-ai-generated-art/.
Xiang, Chloe, and Emanuel Maiberg. “ISIS Executions and Non-Consensual Porn Are Powering AI Art.” Vice, September 2022. https://www.vice.com/en/article/93ad75/isis-executions-and-non-consensual-porn-are-powering-ai-art.