in reply to daniel:// stenberg://

this is insignificant but

> In this particular report, the user helpfully told us that they used Bard to find this issue. Bard being the Microsoft/Bing generative AI thing

Bard was Google's creation, not Microsoft's

in reply to daniel:// stenberg://

It's really disappointing how much AI-generated crap is out there. Do you think folks actually expect you to pay for these trash findings? Also, have you seen any examples of findings reported with AI that _weren't_ trash?

I can think of some areas where AI could accelerate things, but it seems limited at present. I could imagine someone writing a bot which trawls the dark web and submits findings for employee credentials it discovers, or something like that.

in reply to manchicken moved!

@manchicken no, I have not yet seen any positive examples where AI has actually helped to find a security problem 😞 I imagine it *could* ...
in reply to daniel:// stenberg://

That's kinda what I suspected. I feel like so much of the AI/LLM stuff is still stuck in that space of hypothetical utility alongside Web3.
in reply to daniel:// stenberg://

You can almost always tell ChatGPT by how it 'sounds': excessively explaining and repeating its points.
in reply to daniel:// stenberg://

thanks for sharing your perspective and experience. Unfortunately, "like for the email spammers, the cost of this ends up in the receiving end" really captures the situation quite well.

Hopefully it also forces some to reflect that tech without adequate guardrails and protections is certainly not the democratizing force they may wish it was, especially due to labor and power imbalances.

in reply to daniel:// stenberg://

I hope this is not the beginning of a flood. This science fiction magazine had to stop accepting submissions because it started getting so many bad stories "written" by LLMs:

clarkesworldmagazine.com/clark…

In both cases the tool has made it cheaper to create spam submissions and made the job of the editor/maintainer harder.

in reply to daniel:// stenberg://

"Sometimes reporters use AIs or other tools to help them phrase themselves or translate what they want to say."

English education in my country (Hungary) is shit, and my biggest contribution to the local community was arguably providing opportunities to discuss IT security in our native tongue. I also consider this a huge mistake, as it creates a local bubble and disincentivizes people from learning to communicate internationally.

So while I greatly admire your intentions here, as a non-native speaker my opinion is that this approach is counter-productive from the reporter's standpoint. Also, LLMs are not translator programs, and they hide semantic translation errors just as easily as technical nonsense.

Based on this I think banning LLMs altogether would be a reasonable choice.

Once again, thanks for your work and the great post!

in reply to buherator

@buherator I'm not suggesting that using LLMs for translations is the best idea. I'm just acknowledging the fact that some people use it for that.
in reply to daniel:// stenberg://

Incredibly frustrating to deal with, I can imagine.

Interesting how dinesh_b's English skills almost completely disappeared when he explained why he was addressing h1_analyst_oscar, and then how much more fluent he became once he went back to talking about the alleged vulnerability.