Introduction
Language models are a useful innovation
They perform well on many natural language tasks. The more synthetic the task (the more disconnected it is from real-world processes, structures, and communications), the better they work.
Language models are very flawed
“Hallucinations” are fundamental to how they work. They have many hard-to-solve security issues. Their output is biased, often reflecting decades-old attitudes that go down poorly in modern marketing. They aren’t nearly as functional as vendors claim.
Diffusion models are a useful innovation
They do a good job of synthesising images in a variety of styles and converting them between styles. Diffusion models are hard to control: much of their output is mediocre, and getting specific results takes a lot of work. Still, they should be a useful addition to creative work.
Diffusion models are very flawed
Like language models, they are biased: they reflect outdated attitudes and stereotypes that do not perform well in modern marketing. Their visual styles are limited, and many of those styles have been implemented without permission from the original artists.
Both language and diffusion models are lawsuit magnets
Their exact legal status in most major jurisdictions is uncertain, pending the outcome of regulatory action and numerous lawsuits. They are being positioned to directly replace employees, which is likely to provoke at least some additional regulatory action.
Most of what you see and hear about AI is bullshit
The AI industry has a long tradition of poor research, over-promising, making unfounded claims, and then under-delivering.
These cards were made by Baldur Bjarnason.
They are based on the research done for the book The Intelligence Illusion: a practical guide to the business risks of Generative AI.