in reply to Paco Hope #resist

Here is a way that I think #LLMs and #GenAI are generally a force against innovation, especially as they get used more and more.

TL;DR: 3 years ago is a long time, and techniques that old are the most popular in the training data. If a company like Google, AWS, or Azure replaces an established API or runtime with a new one, a bunch of LLM-generated code will break. The people who vibe-code won't be able to fix the problem, because almost nothing in the training data references the new API/runtime. The LLMs won't generate correct code easily, and they will constantly try to edit code back to how it was done before.

This will create pressure on tech companies to keep old APIs and runtimes going, because doing something new (that LLMs don't have in their training data) will have a huge impact. See below for an even more subtle way this will manifest.

I am showcasing (only the most egregious) bullshit that the junior developer accepted from the #LLM. The LLM used out-of-date techniques all over the place (sketched in the template snippet after this list). It was using:

  • AWS Lambda Python 3.9 runtime (will be EoL in about 3 months)
  • AWS Lambda NodeJS 18.x runtime (already deprecated by the time the person gave me the code)
  • Origin Access Identity (an authentication/authorization mechanism that started being deprecated when OAC was announced 3 years ago)
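
The first two are one-line template properties. Here's a minimal sketch of what the LLM kept emitting versus what it should emit (the resource name is mine, not from the actual stack; check AWS's current Lambda runtime list before copying):

  DeployFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.9      # what the LLM generated; EoL in months
      # Runtime: python3.12   # a currently supported runtime
      # (same story for nodejs18.x vs. nodejs20.x)
      Code:
        S3Bucket: my-artifacts-bucket   # illustrative
        S3Key: deploy.zip               # illustrative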

So I'm working on this dogforsaken codebase, and I've converted it from the out-of-date OAI mechanism to the new OAC one. What does my (imposed by the company) AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."

So it is encouraging me to do the wrong thing and saying it's high priority.

It's worth noting that when I got the code base, with OAI active, Python 3.9, and NodeJS 18, I got no warnings about any of these things. Three years ago all of that was state of the art.
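
For anyone who hasn't done this conversion, it looks roughly like the following. The CloudFrontOriginAccessControl name matches the one quoted in the LLM's explanation further down; every other name here is illustrative, not the real stack:

  # Legacy OAI pattern (what the LLM keeps generating):
  OriginAccessIdentity:
    Type: AWS::CloudFront::CloudFrontOriginAccessIdentity
    Properties:
      CloudFrontOriginAccessIdentityConfig:
        Comment: legacy access identity for the frontend bucket

  # Modern OAC replacement:
  CloudFrontOriginAccessControl:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: frontend-oac
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

  # And in the distribution's S3 origin:
  Origins:
    - Id: s3-origin
      DomainName: !GetAtt FrontendBucket.RegionalDomainName
      OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id
      S3OriginConfig:
        OriginAccessIdentity: ''  # must be empty once OAC is attached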

in reply to Paco Hope #resist

But we are not done.

Teach your developers to ignore #AI-generated warnings with this one weird trick!


If you look at my previous post, under the erroneous "You should use OAI" message, there's a little icon and the word "Explain." You can ask the #LLM to explain the warning. Here's the explanation Claude Sonnet 4.0 gives for that warning. For those of you who don't want to read a screenshot, the LLM has prompted itself with:

Explain CloudFront Distribution Resources have an S3 Origin configured without an Origin Access Identity (OAI). issue in 70-frontend-stack.yaml at [29]


And the LLM's response to its own prompt is:

This is a false positive security warning. The issue is incorrectly flagged because:

You ARE using access control - Line 16-23 creates an Origin Access Control (OAC), which is the modern replacement for OAI

OAC is properly configured - Line 38 attaches the OAC: OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id

S3 bucket policy enforces OAC - Lines 120-135 ensure only CloudFront can access the S3 bucket

The security scanner likely only checks for the legacy OriginAccessIdentity property and doesn't recognize the newer OriginAccessControlId. Your configuration is actually more secure than using the deprecated OAI.

The warning can be safely ignored - your setup uses the AWS-recommended OAC approach for securing S3 origins.
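
To its credit, the explanation describes the right pattern. The bucket policy it points at (lines 120-135 of the real file) follows the standard OAC shape, roughly like this (my reconstruction with illustrative names, not the actual stack):

  FrontendBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref FrontendBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowCloudFrontServicePrincipal
            Effect: Allow
            Principal:
              Service: cloudfront.amazonaws.com
            Action: s3:GetObject
            Resource: !Sub '${FrontendBucket.Arn}/*'
            Condition:
              StringEquals:
                # only this one distribution may fetch objects
                AWS:SourceArn: !Sub 'arn:aws:cloudfront::${AWS::AccountId}:distribution/${Distribution}'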


Thanks for wasting my time AND egregious amounts of electricity generating a pointless "high priority" security warning.

#AI #llm
in reply to feld

@feld You can't do that unless you know that you should. And, as I pointed out, when the code was full of outdated techniques, I wasn't getting any warnings. I know what's out of date and what isn't. That's how I got into this situation. But a junior developer who doesn't know these things, and leans more on the LLM, will not know when the LLM-supplied code is wrong. They won't know they need to prompt the LLM to do something more modern.

I'm not much focused on myself, because I have what it takes to avoid steering myself into a ditch. But I'm thinking about what happens to someone less experienced who thinks they can use LLMs to do something that they couldn't otherwise do.
