That's what I am wondering - quite a lot of reports in 2019 with only a tiny number of confirmed vulnerabilities, even by later, post-chatbot-hype standards.
Mh, yeah, I guess that makes sense. Much smaller scale, but when we launched the CVD site at $workplace we also had a larger number of bogus reports initially than these days...
Is HackerOne filtering out any of these at all, or are these the ones that make it through to you? Is it all still worth it? I don't think I'd run a public bug bounty these days.
The primary problem with marking them accurately as slop or not is that reporters almost always refuse to admit their use, probably because they know we frown upon it. So we're forced to make judgement calls, and that's not an easy thing.
Do you think there is a way to incorporate this behaviour? Maybe add a policy like: ban if your report is not based on a reviewable workflow or PoC implementation?!
Obviously that one is too harsh. But is that direction at all feasible in your opinion?
We ban everyone we deem to have submitted AI slop, and we state that clearly in our policy. It might have some effect, but there are a lot of people in the world, and it is also easy for people to start new accounts.
Do you publish such reports marked as slop somewhere? It might help other projects receiving similar bogus reports to see what others have received. Should some oss-security-invalid mailing list be created to share such reports?
Interesting that the slop only accounts for about half of the report increase between 2024 and 2025... though then again, that could've just been a temporary dip in 2024 for some other reason.
@natkr I also suspect there's an unknown number of issues where people were tricked by an AI into reporting something but then reported it manually, so we don't spot the slop.
I remember not too long ago someone from the curl project had an interesting article about a person with an advanced AI tool that produced 300 non-slop reports. Are those represented here?
(See how I cleverly did not mention AI in the title!) You know we have seen more than our fair share of slop reports sent to the curl project so it seems only fair that I also write something about the state of AI when we get to enjoy some positive a…
@dboehmer The AI slop ones are *slop*, meaning they are bad. There is a small subset of submitted issues found using AI-powered tooling, but I don't have them tagged as they are not that special in my mind. Everyone uses tools to find issues.
Weren't you looking for a talk title (FOSDEM?) recently?
That's a good one!
This was the article, I think by you. I'm puzzled about how these two things fit together.
daniel.haxx.se/blog/2025/10/10… (A new breed of analyzers)
To make things even more complex 😉 you could show the confirmed vulns as a share of AI slop or human reports respectively.
Maybe, just maybe, the rate of actual vulns seemingly reported by AI will increase over time?