Generative AI: What You Need To Know

Snake Oil

AI has a long history of pseudoscience

AI researchers have in the past made impossible claims, such as being able to detect criminality, psychopathy, or sexual orientation from head shape or gait. The field has a long history of pseudoscience.

AI has a long history of over-promising

Throughout the field’s history, the AI industry has routinely claimed that its systems are much more capable than they have turned out to be. There is every indication that this continues today.

AI has a long history of very poor research

AI research tends to be poorly structured, designed to prove a predetermined outcome, impossible to replicate, or all of the above. Most of the research you see discussed on social media or by journalists is marketing, not structured scientific or academic research.

AI has a long history of outright fraud

The US FTC has repeatedly warned about false promises and dubious practices in the AI industry.

AI has questionable legal and regulatory compliance

Regulatory bodies in the US, Canada, and the EU have opened investigations into recent practices in the AI industry. Many have issued warnings.

AI has privacy and confidentiality issues

Machine “unlearning” is still not practical: once data is in the training set, the model does not forget it. Confidential and private data has already been leaked as a result.
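
To see why “the AI does not forget” is a measurable property rather than a slogan, here is a minimal, illustrative sketch of a loss-threshold membership inference attack in the spirit of Shokri et al. and Yeom et al. (both cited in the references below). Everything in it is a stand-in: a toy dataset, a deliberately overfit scikit-learn classifier, and a median-based threshold. The general point it demonstrates is real, though: trained models tend to assign lower loss to their own training examples, and that gap is exactly what an attacker measures.

# Minimal sketch of a loss-threshold membership inference attack
# (after Shokri et al. and Yeom et al., cited below). The dataset,
# model, and threshold are all illustrative stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: "members" are used for training, "non-members" are held out.
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
X_member, y_member = X[:200], y[:200]    # training data (members)
X_outside, y_outside = X[200:], y[200:]  # unseen data (non-members)

# Weak regularization so the memorization effect is visible.
model = LogisticRegression(C=1000.0, max_iter=1000).fit(X_member, y_member)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the trained model."""
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

member_loss = per_example_loss(model, X_member, y_member)
outside_loss = per_example_loss(model, X_outside, y_outside)

# The attack: guess "was in the training set" when an example's loss
# falls below a threshold. A systematic gap between members and
# non-members means the model has not "forgotten" its training data.
threshold = np.median(np.concatenate([member_loss, outside_loss]))
tpr = np.mean(member_loss < threshold)   # members correctly flagged
fpr = np.mean(outside_loss < threshold)  # non-members wrongly flagged
print(f"member mean loss:     {member_loss.mean():.3f}")
print(f"non-member mean loss: {outside_loss.mean():.3f}")
print(f"attack TPR {tpr:.2f} vs FPR {fpr:.2f} (gap implies leakage)")

On this toy setup the true positive rate noticeably exceeds the false positive rate. Against real, overfit models the same gap lets an attacker infer whether a specific record was in the training data, which is one route by which confidential information resurfaces.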

AI is a magnet for lawsuits

Most of the major AI companies are facing lawsuits because of their practices surrounding copyright and personal data.

References

These cards were made by Baldur Bjarnason.

They are based on the research done for the book The Intelligence Illusion: a practical guide to the business risks of Generative AI.

Al-Sibai, Noor. “Amazon Begs Employees Not to Leak Corporate Secrets to ChatGPT.” Futurism, 2023. https://futurism.com/the-byte/amazon-begs-employees-chatgpt.
Atleson, Michael. “Keep Your AI Claims in Check.” Federal Trade Commission, February 2023. https://www.ftc.gov/business-guidance/blog/2023/02/keep-your-ai-claims-check.
Bailey, Jonathan. “The Wave of AI Lawsuits Have Begun.” Plagiarism Today, January 2023. https://www.plagiarismtoday.com/2023/01/17/the-wave-of-ai-lawsuits-have-begun/.
Barr, Kyle. “GPT-4 Is a Giant Black Box and Its Training Data Remains a Mystery.” Gizmodo, March 2023. https://gizmodo.com/chatbot-gpt4-open-ai-ai-bing-microsoft-1850229989.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. FAccT ’21. New York, NY, USA: Association for Computing Machinery, 2021. https://doi.org/10.1145/3442188.3445922.
Birhane, Abeba, Vinay Uday Prabhu, and Emmanuel Kahembwe. “Multimodal Datasets: Misogyny, Pornography, and Malignant Stereotypes.” arXiv, October 2021. https://doi.org/10.48550/arXiv.2110.01963.
Bourtoule, Lucas, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. “Machine Unlearning.” arXiv, December 2020. https://doi.org/10.48550/arXiv.1912.03817.
Carlini, Nicholas, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag, Florian Tramèr, Borja Balle, Daphne Ippolito, and Eric Wallace. “Extracting Training Data from Diffusion Models.” arXiv, January 2023. https://doi.org/10.48550/arXiv.2301.13188.
“Chatbots, Deepfakes, and Voice Clones: AI Deception for Sale.” Federal Trade Commission, March 2023. https://www.ftc.gov/business-guidance/blog/2023/03/chatbots-deepfakes-voice-clones-ai-deception-sale.
Coles, Cameron. “3.1% of Workers Have Pasted Confidential Company Data into ChatGPT.” Cyberhaven, February 2023. https://www.cyberhaven.com/blog/4-2-of-workers-have-pasted-company-data-into-chatgpt/.
Council, Stephen. “OpenAI Admits Some Premium Users’ Payment Info Was Exposed.” SFGATE, March 2023. https://www.sfgate.com/tech/article/chatgpt-openai-payment-data-leak-17858969.php.
Dastin, Jeffrey. “Amazon Scraps Secret AI Recruiting Tool That Showed Bias Against Women.” Reuters, October 2018. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
“Data Breach Reporting Requirements Explained [2022].” GDPR Register, December 2021. https://www.gdprregister.eu/gdpr/data-breach-notification-requirements/.
Di, Jimmy Z., Jack Douglas, Jayadev Acharya, Gautam Kamath, and Ayush Sekhari. “Hidden Poison: Machine Unlearning Enables Camouflaged Poisoning Attacks.” arXiv, December 2022. https://doi.org/10.48550/arXiv.2212.10717.
Feiner, Lauren. “U.S. Regulators Warn They Already Have the Power to Go After A.I. Bias — and They’re Ready to Use It.” CNBC, April 2023. https://www.cnbc.com/2023/04/25/us-regulators-warn-they-already-have-the-power-to-go-after-ai-bias.html.
Fraser, David. “Federal Privacy Watchdog Probing OpenAI, ChatGPT Following Complaint.” CBC News, April 2023. https://www.cbc.ca/news/politics/privacy-commissioner-investigation-openai-chatgpt-1.6801296.
“FTC Chair Khan and Officials from DOJ, CFPB and EEOC Release Joint Statement on AI.” Federal Trade Commission, April 2023. https://www.ftc.gov/news-events/news/press-releases/2023/04/ftc-chair-khan-officials-doj-cfpb-eeoc-release-joint-statement-ai.
“FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology.” Federal Trade Commission, May 2021. https://www.ftc.gov/news-events/news/press-releases/2021/05/ftc-finalizes-settlement-photo-app-developer-related-misuse-facial-recognition-technology.
Gal, Uri. “ChatGPT Is a Data Privacy Nightmare, and We Ought to Be Concerned.” Ars Technica, February 2023. https://arstechnica.com/information-technology/2023/02/chatgpt-is-a-data-privacy-nightmare-and-you-ought-to-be-concerned/.
“Getty Images v. Stability AI - Complaint.” Copyright Lately. Accessed February 21, 2023. https://copyrightlately.com/pdfviewer/getty-images-v-stability-ai-complaint/.
Grant, Nico, and Karen Weise. “In A.I. Race, Microsoft and Google Choose Speed Over Caution.” The New York Times, April 2023. https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html.
Haibe-Kains, Benjamin, George Alexandru Adam, Ahmed Hosny, Farnoosh Khodakarami, Levi Waldron, Bo Wang, Chris McIntosh, et al. “Transparency and Reproducibility in Artificial Intelligence.” Nature 586, no. 7829 (October 2020): E14–16. https://doi.org/10.1038/s41586-020-2766-y.
Heaven, Will Douglas. “Hundreds of AI Tools Have Been Built to Catch Covid. None of Them Helped.” MIT Technology Review, July 2021. https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/.
Heaven, Will Douglas. “AI Is Wrestling with a Replication Crisis.” MIT Technology Review, 2020. https://www.technologyreview.com/2020/11/12/1011944/artificial-intelligence-replication-crisis-science-big-tech-google-deepmind-facebook-openai/.
Heikkilä, Melissa. “Dutch Scandal Serves as a Warning for Europe over Risks of Using Algorithms.” POLITICO, March 2022. https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/.
Hutson, Matthew. “Artificial Intelligence Faces Reproducibility Crisis.” Science 359, no. 6377 (February 2018): 725–26. https://doi.org/10.1126/science.359.6377.725.
“Intelligenza Artificiale: Il Garante Blocca ChatGPT. Raccolta Illecita Di Dati Personali. Assenza Di Sistemi Per La Verifica Dell’età Dei Minori” [Artificial Intelligence: The Italian Data Protection Authority Blocks ChatGPT. Unlawful Collection of Personal Data. No Systems to Verify the Age of Minors], March 2023. https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9870847.
Kan, Michael. “OpenAI Confirms Leak of ChatGPT Conversation Histories.” PCMAG, March 2023. https://www.pcmag.com/news/openai-confirms-leak-of-chatgpt-conversation-histories.
Kapoor, Sayash, and Arvind Narayanan. “A Misleading Open Letter about Sci-Fi AI Dangers Ignores the Real Risks.” Substack newsletter. AI Snake Oil, March 2023. https://aisnakeoil.substack.com/p/a-misleading-open-letter-about-sci.
———. “Leakage and the Reproducibility Crisis in ML-Based Science,” 2022. https://doi.org/10.48550/ARXIV.2207.07048.
———. “OpenAI’s Policies Hinder Reproducible Research on Language Models.” Substack newsletter. AI Snake Oil, March 2023. https://aisnakeoil.substack.com/p/openais-policies-hinder-reproducible.
Kumar, Vinayshekhar Bannihatti, Rashmi Gangadharaiah, and Dan Roth. “Privacy Adhering Machine Un-Learning in NLP.” arXiv, December 2022. https://doi.org/10.48550/arXiv.2212.09573.
Lee, Timothy B. “Copyright Lawsuits Pose a Serious Threat to Generative AI,” March 2023. https://www.understandingai.org/p/copyright-lawsuits-pose-a-serious.
Lomas, Natasha. “FTC Settlement with Ever Orders Data and AIs Deleted After Facial Recognition Pivot.” TechCrunch, January 2021. https://techcrunch.com/2021/01/12/ftc-settlement-with-ever-orders-data-and-ais-deleted-after-facial-recognition-pivot/.
Milmo, Dan. “ChatGPT Reaches 100 Million Users Two Months After Launch.” The Guardian, February 2023. https://www.theguardian.com/technology/2023/feb/02/chatgpt-100-million-users-open-ai-fastest-growing-app.
Mitchell, Melanie. “Why AI Is Harder Than We Think.” arXiv, April 2021. https://doi.org/10.48550/arXiv.2104.12871.
Mukherjee, Supantha, Elvira Pollina, and Rachel More. “Italy’s ChatGPT Ban Attracts EU Privacy Regulators.” Reuters, April 2023. https://www.reuters.com/technology/germany-principle-could-block-chat-gpt-if-needed-data-protection-chief-2023-04-03/.
Narayanan, Arvind, and Sayash Kapoor. “GPT-4 and Professional Benchmarks: The Wrong Answer to the Wrong Question.” Substack newsletter. AI Snake Oil, March 2023. https://aisnakeoil.substack.com/p/gpt-4-and-professional-benchmarks.
O’Leary, Lizzie. “How IBM’s Watson Went From the Future of Health Care to Sold Off for Parts.” Slate, January 2022. https://slate.com/technology/2022/01/ibm-watson-health-failure-artificial-intelligence.html.
Olson, Parmy. “Nearly Half Of All AI Startups Are Cashing In On Hype.” Forbes, 2019. https://www.forbes.com/sites/parmyolson/2019/03/04/nearly-half-of-all-ai-startups-are-cashing-in-on-hype/.
Pan, Xudong, Mi Zhang, Shouling Ji, and Min Yang. “Privacy Risks of General-Purpose Language Models.” In 2020 IEEE Symposium on Security and Privacy (SP), 1314–31, 2020. https://doi.org/10.1109/SP40000.2020.00095.
Perrigo, Billy. “Exclusive: The $2 Per Hour Workers Who Made ChatGPT Safer.” Time, January 2023. https://time.com/6247678/openai-chatgpt-kenya-workers/.
Pikuliak, Matúš. “ChatGPT Survey: Performance on NLP Datasets,” March 2023. http://opensamizdat.com/posts/chatgpt_survey/.
Raieli, Salvatore. “Machine Unlearning: The Duty of Forgetting.” Medium, September 2022. https://towardsdatascience.com/machine-unlearning-the-duty-of-forgetting-3666e5b9f6e5.
Raji, Inioluwa Deborah, I. Elizabeth Kumar, Aaron Horowitz, and Andrew Selbst. “The Fallacy of AI Functionality.” In 2022 ACM Conference on Fairness, Accountability, and Transparency, 959–72. Seoul Republic of Korea: ACM, 2022. https://doi.org/10.1145/3531146.3533158.
Ramel, David. “Data Scientists Cite Lack of GPT-4 Details.” Virtualization Review, March 2023. Accessed April 10, 2023. https://virtualizationreview.com/articles/2023/03/15/gpt-4-details.aspx.
“Regulatory Framework Proposal on Artificial Intelligence.” Shaping Europe’s Digital Future, February 2023. https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai.
Rigaki, Maria, and Sebastian Garcia. “A Survey of Privacy Attacks in Machine Learning.” arXiv, 2020. https://doi.org/10.48550/arXiv.2007.07646.
Rogers, Anna. “Closed AI Models Make Bad Baselines.” Hacking Semantics, April 2023. https://hackingsemantics.xyz/2023/closed-baselines/.
Shepardson, David, and Diane Bartz. “US Begins Study of Possible Rules to Regulate AI Like ChatGPT.” Reuters, April 2023. https://www.reuters.com/technology/us-begins-study-possible-rules-regulate-ai-like-chatgpt-2023-04-11/.
Shokri, Reza, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. “Membership Inference Attacks Against Machine Learning Models.” arXiv, March 2017. https://doi.org/10.48550/arXiv.1610.05820.
Simonite, Tom. “Now That Machines Can Learn, Can They Unlearn?” Wired. Accessed February 21, 2023. https://www.wired.com/story/machines-can-learn-can-they-unlearn/.
Sterling, Toby. “European Privacy Watchdog Creates ChatGPT Task Force.” Reuters, April 2023. https://www.reuters.com/technology/european-data-protection-board-discussing-ai-policy-thursday-meeting-2023-04-13/.
Veale, Michael, and Frederik Zuiderveen Borgesius. “Demystifying the Draft EU Artificial Intelligence Act.” Preprint. SocArXiv, July 2021. https://doi.org/10.31235/osf.io/38p5f.
Vincent, James. “Getty Images Is Suing the Creators of AI Art Tool Stable Diffusion for Scraping Its Content.” The Verge, January 2023. https://www.theverge.com/2023/1/17/23558516/ai-art-copyright-stable-diffusion-getty-images-lawsuit.
Vinsel, Lee. “You’re Doing It Wrong: Notes on Criticism and Technology Hype.” Medium, February 2021. https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5.
Weidinger, Laura, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, et al. “Ethical and Social Risks of Harm from Language Models.” arXiv, December 2021. https://doi.org/10.48550/arXiv.2112.04359.
“What Are the GDPR Fines?” GDPR.eu, July 2018. https://gdpr.eu/fines/.
Wiggers, Kyle. “Addressing Criticism, OpenAI Will No Longer Use Customer Data to Train Its Models by Default.” TechCrunch, March 2023. https://techcrunch.com/2023/03/01/addressing-criticism-openai-will-no-longer-use-customer-data-to-train-its-models-by-default/.
Lemos, Robert. “Employees Are Feeding Sensitive Business Data to ChatGPT.” Dark Reading, March 2023. https://www.darkreading.com/risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears.
Xiang, Chloe. “OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit.” Vice, February 2023. https://www.vice.com/en/article/5d3naz/openai-is-now-everything-it-promised-not-to-be-corporate-closed-source-and-for-profit.
Yeom, Samuel, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. “Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting.” arXiv, May 2018. https://doi.org/10.48550/arXiv.1709.01604.