Fraud & Abuse
Deepfakes are ideal for fraud
AI tools can be used to generate fake recordings or pictures of real people. There have already been multiple instances of deepfaked voice recordings being used for extortion or fraud.
Abuse
The accessibility of generative AI tools has made them ideal for targeted harassment. Abusers can mine a victim’s social media to create fake pornography and fake recordings that implicate them in criminal or unethical behaviour. This has already driven innocent people out of their jobs.
Astroturfing and social media manipulation
AI tools are already being used to manipulate social media with fake profiles and auto-generated text and images. These campaigns can only be reliably detected when the scammers are incompetent enough to let telltale AI-specific responses slip through.
Ecommerce and streaming fraud
Any venue that pays for text, images, or audio is already being flooded with AI-generated output, lowering the signal-to-noise ratio for everybody on those platforms. This flood is often coupled with click fraud or streaming fraud to extract fraudulent royalty or advertising revenue.
AI “Checkers” don’t work
No existing software reliably detects AI-generated output. Most detectors are worse than nothing: they regularly misclassify human work as AI-generated.
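To see why even a seemingly accurate detector fails in practice, consider a back-of-the-envelope base-rate calculation. The sketch below is purely illustrative: every number in it is an assumption for the sake of argument, not a measurement of any real detector.

```python
# Illustrative base-rate arithmetic (all rates are assumed, not measured):
# when most screened work is human-written, even a detector with a low
# false-positive rate ends up accusing many honest authors.

total_submissions = 1000       # assumed: essays screened per term
share_ai_generated = 0.05      # assumed: 5% are actually AI-generated
false_positive_rate = 0.02     # assumed: 2% of human work is misflagged
true_positive_rate = 0.80      # assumed: 80% of AI work is caught

ai_texts = total_submissions * share_ai_generated
human_texts = total_submissions - ai_texts

flagged_ai = ai_texts * true_positive_rate         # correctly flagged
flagged_human = human_texts * false_positive_rate  # falsely accused

print(f"Correctly flagged AI texts: {flagged_ai:.0f}")
print(f"Humans falsely accused:     {flagged_human:.0f}")
print(f"Share of flags that are false accusations: "
      f"{flagged_human / (flagged_ai + flagged_human):.0%}")
```

With these assumed rates, roughly a third of all accusations would land on innocent writers, which is why relying on such tools can be worse than not screening at all.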
Standardised tests no longer work
Any written test that is documented and standardised lends itself well to being solved by AI tools. Any organisation that uses a standardised test to protect or safeguard a valuable process or credential will need to change its strategy.
These cards were made by Baldur Bjarnason.
They are based on the research done for the book The Intelligence Illusion: a practical guide to the business risks of Generative AI.