Code Generation
AI Copilots risk licence contamination
GitHub’s safeguards against contaminating your code base with GPL-licensed code are insufficient. If Copilot copies code under the GPL and modifies it in even the slightest way, the safeguards no longer catch it, but your code base is still contaminated.
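To see why a verbatim-match safeguard is weak, here is a hypothetical sketch (not GitHub’s actual filter; the snippet and the filter are invented for illustration): renaming a single identifier defeats an exact-string comparison while leaving the copied logic, and its licence obligations, intact.

```python
# Hypothetical illustration: a verbatim-match filter is defeated
# by trivial edits to copied code.

GPL_SNIPPET = """
def checksum(data):
    total = 0
    for byte in data:
        total = (total + byte) % 255
    return total
""".strip()

# The same logic with one identifier renamed: still a derivative
# work under the GPL, but no longer a verbatim match.
SUGGESTED = GPL_SNIPPET.replace("total", "acc")

def verbatim_filter(suggestion: str, corpus: list[str]) -> bool:
    """Return True if the suggestion exactly matches known GPL code."""
    return any(suggestion == known for known in corpus)

print(verbatim_filter(GPL_SNIPPET, [GPL_SNIPPET]))  # True: caught
print(verbatim_filter(SUGGESTED, [GPL_SNIPPET]))    # False: slips through
```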
They are prone to insecure code
They seem to generate code that is at least as insecure as that written by a novice programmer.
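As a hedged illustration of the kind of flaw security studies keep finding in generated code (the snippet below is ours, not the output of any specific model): interpolating user input straight into SQL invites injection, where a parameterised query would not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_insecure(name: str):
    # The novice pattern assistants often reproduce: user input is
    # interpolated straight into the query, enabling SQL injection.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterised query: the driver handles escaping.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input dumps every row from the insecure version.
print(find_user_insecure("' OR '1'='1"))  # [('admin',)]: leaks all users
print(find_user_safe("' OR '1'='1"))      # []: nothing matches
```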
They trigger our automation and anchoring biases
Tools for cognitive automation trigger your automation bias: if you’re using a tool to help you think less, that’s exactly what you do, and that compromises your judgement of what the tool is doing. Anchoring bias compounds this: we favour whatever first frames the current context, usually the tool’s first suggestion, even when it is subpar.
They are “stale”
Most of the code that language models are trained on is legacy code. They will not be aware of deprecations, newly discovered security vulnerabilities, new frameworks, new platforms, or updated APIs.
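For instance (our own illustration, built on a deprecation we know of in Python’s standard library): a model trained mostly on older code will keep suggesting datetime.utcnow(), which Python 3.12 deprecated in favour of timezone-aware timestamps.

```python
from datetime import datetime, timezone

# What a model trained on legacy code is likely to suggest:
# deprecated since Python 3.12, and subtly wrong because the
# result is naive (it carries no timezone information).
stale = datetime.utcnow()

# The current recommendation: an aware UTC timestamp.
current = datetime.now(timezone.utc)

print(stale.tzinfo)    # None: naive datetime
print(current.tzinfo)  # UTC: timezone-aware
```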
They reinforce bad ideas
Unlike doing your own research, an AI will never tell you that your truly bad idea is truly bad. This is a major issue in software development, as programmers are fond of reinventing bad ideas.
They promote code bloat
The frequency of defects in software generally increases in proportion to lines of code. Larger software projects are also more prone to failure and have much higher maintenance costs. By promoting code bloat, code copilots could increase, not decrease, development costs in the long term.
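As a small illustration of the pattern (both versions below are ours, not the output of any specific copilot): generated code often spells out what the language already provides, and every extra line is another line to review, test, and maintain.

```python
# The verbose shape assistants often produce: an explicit loop,
# a mutable accumulator, and an intermediate variable.
def evens_squared_verbose(numbers):
    result = []
    for number in numbers:
        if number % 2 == 0:
            square = number * number
            result.append(square)
    return result

# The idiomatic one-liner that does the same job.
def evens_squared(numbers):
    return [n * n for n in numbers if n % 2 == 0]

assert evens_squared_verbose(range(10)) == evens_squared(range(10))
```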
These cards were made by Baldur Bjarnason.
They are based on the research done for the book The Intelligence Illusion: a practical guide to the business risks of Generative AI.