daniel:// stenberg://
I'm a little amazed by the number of CVEs released by OpenSSL today: openssl-library.org/news/vulne…
12(!) of them were reported by people at Aisle.
Aisle makes an AI-powered code analyzer. That's what they used to find these flaws.
So if you are curious what AI can do in Open Source security when used for good, here's an example.

Josh Bressers
in reply to daniel:// stenberg://
@Migueldeicaza security code reviews are brutal to do, especially if the codebase is large
There’s almost no way an LLM won’t outperform a human doing this stuff
pitch R.
in reply to daniel:// stenberg://
After all, it's a tool like any other.
Cassandrich
in reply to daniel:// stenberg://
Most of "AI" is human labor misrepresented as something the machine did, to defraud customers and investors.
I wouldn't be surprised if we find out that their "AI-powered code analyzer" did little or nothing here, and that they spent a lot of money on actual labor for the sake of promoting their product.
daniel:// stenberg://
in reply to Cassandrich
@dalias I know that's not the case, because I also have access to such tools (made by others), and since I can run them on my own code I can see what they do and what they can find.
AI-powered code analyzers are a real thing, and they are better code analyzers than the ones without the AI component.
Cassandrich
in reply to daniel:// stenberg://
I don't think you're "lying".
I think you're giving undue credit to an industry that's vastly over-promising and that has extremely bad externalities, and that doing so lowers your credibility among folks who care about this.
daniel:// stenberg://
in reply to Cassandrich
@dalias I have not argued against them over-promising and doing all sorts of crap. They do, and most likely will continue to. That's certainly problematic.
What I *am* saying, though, is that some of the AI-powered code analyzer tools are better than most non-AI ones. And I think I've seen one or two in my days, and I have written a line of code or two.
AI can be used to do good. Is it worth the cost? That's a separate question.
Cassandrich
in reply to daniel:// stenberg://
"AI can be used to do good" is a statement that lacks meaning without clarifying what "AI" means, and it excuses all sorts of other things under the umbrella of "AI" that fundamentally cannot be used for good. "AI" is, and always has been, a strategically vague marketing term, not a technical category.
It's likely that statistical models of source code correlated with vulnerabilities can be used for good. I don't think they can be built without massive-scale license infringement enclosing the commons, nor without lots of other types of harm.
daniel:// stenberg://
in reply to Cassandrich
@dalias
> It's likely that statistical models of source code correlated with vulnerabilities can be used for good.
Great. That seems to be roughly what I said too.
screwlisp
in reply to daniel:// stenberg://
Thanks. When they say
> have been building an automated AI system for deep cybersecurity discovery and remediation
it does not sound like they are talking about slopbots being infosec employees, and does sound quite a lot like deep learning code static analysis.
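
To make the "statistical model of source code correlated with vulnerabilities" idea concrete, here is a minimal sketch of such a model at its smallest possible scale: a classifier trained on labeled code snippets, scoring unseen code by how vulnerability-like it looks. This is purely illustrative and assumes nothing about Aisle's actual system (which is not public); the training snippets and labels are invented, and TfidfVectorizer/LogisticRegression are just the simplest stand-ins for the far larger models and richer program representations real tools use.

```python
# Toy sketch of a statistical model correlating source code with
# vulnerabilities. Illustration only -- NOT how Aisle's analyzer works.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented training data: C snippets labeled 1 = vulnerable, 0 = fixed.
snippets = [
    "strcpy(buf, input);",                            # unbounded copy
    "strncpy(buf, input, sizeof(buf) - 1);",          # bounded copy
    "memcpy(dst, src, len);",                         # unchecked length
    "if (len <= sizeof(dst)) memcpy(dst, src, len);", # checked length
]
labels = [1, 0, 1, 0]

# Character n-grams tolerate code syntax better than word tokenizers.
vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
clf = LogisticRegression().fit(vec.fit_transform(snippets), labels)

# Score an unseen snippet: higher = more vulnerability-like.
new = vec.transform(["strcat(buf, user_data);"])
print(f"vulnerability-likeness: {clf.predict_proba(new)[0][1]:.2f}")
```

Deep-learning static analysis replaces the n-gram features and linear classifier with a learned model of program semantics, but the overall shape — train on labeled code, score new code — is the same.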