(Let’s skip the discussion about whether LLMs are a net positive or negative. Let’s just look at what is happening.)
LLMs are increasingly being used to write code, lots of code. And according to a recent Veracode paper, they have a tendency to write insecure code.
The tl;dr of that paper is that, just like humans, unless they are told and trained to write secure code, they won’t.
No surprise there really, as a lot of example code out there is insecure. The old adage of bad data in, bad data out still holds.
This is probably bad news for society, but good news for anyone working in incident response and security overall.
And even if they could be made not to write obviously insecure code, they write a lot of it, and humans are lazy and won’t spend the time to understand it. As long as it works, it will be shipped.
So we’re going to see a lot of new vulnerable code being pushed to production in the coming years. I think we’re going to see a whole methodology pop up around finding which kinds of mistakes are systemic to the AIs, and then abusing them.
Isn’t there anything we can do about it? Not really. After all, we have been trying to get humans to write secure code since the start. The incentives are just not there.
This is one of those things where, as much as we techies would want otherwise, it’s not a technical problem; it’s a political one.
Blaugust note
This post is my third for the Blaugust festival. It was conceived and written in less than an hour and not edited or proofed, so I’ll mark it as a #draft.