"Buffer Overflow Vulnerability in WebSocket Handling".

A bot? An AI? Just a silly reporter? Another fine waste of #curl maintainer time.

hackerone.com/reports/2298307

#curl


in reply to daniel:// stenberg://

Ah, now we are going to have more sophisticated *beg bounties* with AI help. What a wonderful world to live in…
in reply to daniel:// stenberg://

HackerOne should have a policy to give permabans to people who report LLM-generated shit like this.
in reply to daniel:// stenberg://

That one absolutely reeks of ChatGPT: sentence structure, phrasing and all. I would block and report.
in reply to daniel:// stenberg://

Isn't the bug bounty provider responsible for filtering out the crazies before forwarding this to you? Just curious how this works.
in reply to winnie, the disassembling bear

I can set different filter levels, but in this case the first submission they made was not obviously rubbish, so nobody would have filtered it out without closer inspection. I did not either.

We have to remember language barriers and cultural differences. Sometimes it takes a little back and forth before the real details emerge. I cannot immediately shout AI just because someone phrases themselves oddly.

in reply to daniel:// stenberg://

@disasmwinnie just commenting to tell you that I appreciate this thought. I wouldn’t even have considered that and would probably immediately have closed the issue.
in reply to daniel:// stenberg://

It's sad to see. I'm convinced LLMs could be a good translation/formulation tool for people unfamiliar with English; instead they're used by people who think they can replace jobs they don't even understand.
These kinds of reports will probably end up with maintainers (understandably) dismissing any clearly LLM-written report...
in reply to daniel:// stenberg://

> Certainly! Let me elaborate on the concerns raised by the triager

Oof, I can smell ChatGPT from a mile away 😂 Crazy how they've just kept it in, even though it makes it seem like they're addressing themselves in the third person 🤦🏻‍♂️

in reply to daniel:// stenberg://

I think the only way to effectively combat this is requiring a proof of concept before investigating a security report. This would make it very quick for you to confirm that a bug exists in curl. But I'm not sure that's possible for every security bug in curl.
in reply to daniel:// stenberg://

That reads to me like ChatGPT; maybe someone was trying to boost up their account.
in reply to daniel:// stenberg://

the TL;DR I got from this is:
- strcpy might cause a buffer overflow
- there is bounds checking
- yes but that might not be sufficient
- can you show an example where it's not sufficient
- *insert example where bounds checking is sufficient*

Very clear that whatever AI was used cannot understand code at all, huh?
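
A minimal C sketch of the pattern that exchange describes (not curl's actual code; the function name, constant, and buffer size below are made up for illustration): a fixed-size destination with a length check in front of the strcpy, i.e. exactly the "bounds checking is sufficient" case the report's own example ended up demonstrating.

```c
/* Hypothetical example, not curl source: a guarded strcpy that cannot
 * overflow, because the length is verified before copying. */
#include <stdio.h>
#include <string.h>

#define KEYWORD_MAX 64   /* made-up destination size */

static int store_keyword(char dest[KEYWORD_MAX], const char *src)
{
  /* the bounds check the report claimed "might not be sufficient":
     reject anything that does not fit, including the terminating NUL */
  if(strlen(src) >= KEYWORD_MAX)
    return -1;           /* too long: refuse to copy */
  strcpy(dest, src);     /* safe here, length already verified */
  return 0;
}

int main(void)
{
  char buf[KEYWORD_MAX];
  if(store_keyword(buf, "Sec-WebSocket-Key") == 0)
    printf("stored: %s\n", buf);
  return 0;
}
```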

in reply to daniel:// stenberg://

That's quite obviously an LLM (it looks very much like it, and GPTzero says 91% likely), but does anyone know why people do that?

I'm legit curious whether it's just someone "having fun" hooking up the ChatGPT API to a bot, or whether there's actually a financial goal behind it. I can see this being useful for giving credibility to a Reddit account, but here they're not gaining anything at all.
