It still baffles me that this is a thing at all. As if any project would blindly award a bug bounty for this kind of made-up non-vulnerability.
So they don't understand that the code that comes out of the AI is nonsensical? Like that one case with a supposed use-after-free "vulnerability" in a curl function, where the example itself called free() first. Did they genuinely not understand that they created the vulnerability themselves, in their own example code? If so, that scares me.
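For illustration, a minimal C sketch of the kind of bogus example described above (a hypothetical reconstruction, not the actual report's code; lib_consume is a made-up stand-in for whatever library function was blamed). The only use-after-free here is the one the example's own author wrote:

#include <stdlib.h>
#include <string.h>

/* hypothetical stand-in for the library function the report blamed;
   it merely reads from the buffer it is given */
static size_t lib_consume(const char *buf)
{
    return strlen(buf);
}

int main(void)
{
    char *buf = malloc(16);
    if(!buf)
        return 1;
    strcpy(buf, "hello");
    free(buf);          /* the example frees the buffer here... */
    lib_consume(buf);   /* ...then passes the freed pointer in: a
                           use-after-free the reporter created in their
                           own snippet, not in the library at all */
    return 0;
}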
The LLM "wall of text" with an authoritative style is getting easy to spot. A layperson scanning it might think it's plausible (and no doubt the LLM believes it as well, because it cannot discern truth). What I really dislike is the AI-augmented real-person discussion, where the first and last sentences are obviously written by the real person, with supporting generated text in between, like some kind of strange uncanny-valley AI sandwich.