Generative AI: What You Need To Know

Fraud & Abuse

Deepfakes are ideal for fraud

AI tools can generate convincing fake recordings and pictures of real people. There have already been multiple instances of deepfaked voice recordings being used for extortion or fraud.


Deepfakes enable targeted harassment

The accessibility of generative AI tools has made them ideal for targeted harassment. Abusers can use a victim’s social media posts to create fake pornography, or fake recordings that implicate the victim in criminal or unethical behaviour. This has already driven innocent people out of their jobs.

Astroturfing and social media manipulation

AI tools are already being used to manipulate social media with fake profiles and auto-generated text and images. These campaigns can only be reliably detected when the scammers are careless enough to let telltale AI-generated responses slip through.

Ecommerce and streaming fraud

Every venue that pays for text, images, or audio is already being flooded with AI-generated output, lowering the signal-to-noise ratio for everybody on those platforms. These submissions are often coupled with click or streaming fraud to extract fraudulent royalty or advertising revenue.

AI “Checkers” don’t work

No existing software reliably detects AI output. Most checkers are worse than nothing: they regularly misclassify human-made works as AI-generated.

Standardised tests no longer work

Any written test that is documented and standardised lends itself very well to being solved by AI tools. Any organisation that relies on a standardised test to protect or safeguard a valuable process or credential will need to change its strategy.


[Cover of the book ‘The Intelligence Illusion’]

These cards were made by Baldur Bjarnason.

They are based on the research done for the book The Intelligence Illusion: a practical guide to the business risks of Generative AI.

Belanger, Ashley. “Thousands Scammed by AI Voices Mimicking Loved Ones in Emergencies.” Ars Technica, March 2023. https://arstechnica.com/tech-policy/2023/03/rising-scams-use-ai-to-mimic-voices-of-loved-ones-in-financial-distress/.
Bensinger, Greg. “ChatGPT Launches Boom in AI-Written e-Books on Amazon.” Reuters, February 2023. https://www.reuters.com/technology/chatgpt-launches-boom-ai-written-e-books-amazon-2023-02-21/.
Dreibelbis, Emily. “ChatGPT Passes Google Coding Interview for Level 3 Engineer With $183K Salary.” PCMAG, 2023. https://www.pcmag.com/news/chatgpt-passes-google-coding-interview-for-level-3-engineer-with-183k-salary.
Edwards, Benj. “Thanks to AI, It’s Probably Time to Take Your Photos Off the Internet.” Ars Technica, December 2022. https://arstechnica.com/information-technology/2022/12/thanks-to-ai-its-probably-time-to-take-your-photos-off-the-internet/.
Gilbert, David. “High Schoolers Made a Racist Deepfake of a Principal Threatening Black Students.” Vice, March 2023. https://www.vice.com/en/article/7kxzk9/school-principal-deepfake-racist-video.
Goldstein, Josh A., and Renée DiResta. “Research Note: This Salesperson Does Not Exist: How Tactics from Political Influence Operations on Social Media Are Deployed for Commercial Lead Generation.” Harvard Kennedy School Misinformation Review, September 2022. https://doi.org/10.37016/mr-2020-104.
“’Heartbreaking’: Scam Artists Extorting Children by Putting Their Faces on Explicit Images.” News Channel 5 Nashville (WTVF), December 2022. https://www.newschannel5.com/news/heartbreaking-scam-artists-extorting-children-by-putting-their-faces-on-explicit-images.
Henrique, Da Silva Gameiro, Andrei Kucharavy, and Rachid Guerraoui. “Stochastic Parrots Looking for Stochastic Parrots: LLMs Are Easy to Fine-Tune and Hard to Detect with Other LLMs.” arXiv, April 2023. https://doi.org/10.48550/arXiv.2304.08968.
Herley, Cormac. “Why Do Nigerian Scammers Say They Are From Nigeria?” WEIS, June 2012. https://www.microsoft.com/en-us/research/publication/why-do-nigerian-scammers-say-they-are-from-nigeria/.
Hunter, Tatum. “AI Porn Is Easy to Make Now. For Women, That’s a Nightmare.” Washington Post, February 2023. https://www.washingtonpost.com/technology/2023/02/13/ai-porn-deepfakes-women-consent/.
Kan, Michael. “Sci-Fi Mag Pauses Submissions Amid Flood of AI-Generated Short Stories.” PCMAG, 2023. https://www.pcmag.com/news/sci-fi-mag-pauses-submissions-amid-flood-of-ai-generated-short-stories.
McAfee. “ChatGPT: A Scammer’s Newest Tool.” McAfee Blog, January 2023. https://www.mcafee.com/blogs/internet-security/chatgpt-a-scammers-newest-tool/.
Morales, Christina. “Pennsylvania Woman Accused of Using Deepfake Technology to Harass Cheerleaders.” The New York Times, March 2021. https://www.nytimes.com/2021/03/14/us/raffaela-spone-victory-vipers-deepfake.html.
“New AI Classifier for Indicating AI-Written Text.” OpenAI, January 2023. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/.
Ruiz, Nataniel, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. “DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation.” arXiv, August 2022. https://doi.org/10.48550/arXiv.2208.12242.
Stupp, Catherine. “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case.” WSJ. Accessed April 18, 2023. https://www.wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402.
Verma, Pranshu. “They Thought Loved Ones Were Calling for Help. It Was an AI Scam.” Washington Post, March 2023. https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/.