The AI slop in security reports has developed slightly over time. Fewer mind-numbingly stupid reports now, but almost *everyone* writes their reports with AI instead, so they still end up overly long and complicated to plow through. And every follow-up question is answered with another minor essay discussing pros and cons, with bullet points and references to multiple specifications.
Exhausting nonetheless.
Is it snowing in Bremen too?
Troed Sångberg
in reply to daniel:// stenberg://
We need another AI trained on Linus' kernel-list posts.
"Please rewrite this slop into something short and concise, as if Linus had sent it"
...
Hopefully the AI deluge will end when the novelty wears off and people realize that sounding like you work in marketing and customer care isn't the way to communicate tech.
Hugo 雨果
in reply to daniel:// stenberg://
Whenever I ask an LLM something, I prefix my question with "briefly:" or "be concise".
I’m curious if this would work, considering that many may copy-paste your response into an LLM.
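A minimal sketch of that habit, assuming nothing about any particular LLM API: the helper just prepends a brevity instruction to the prompt text before it is sent anywhere. The function name and the default prefix are illustrative choices, not anything from Hugo's post.

```python
# Minimal sketch of the "briefly:" habit: prepend a terseness
# instruction to a prompt before sending it to whatever LLM you use.
# with_brevity and the default prefix are hypothetical names chosen
# for illustration.

def with_brevity(prompt: str, prefix: str = "Briefly: ") -> str:
    """Return the prompt with a brevity instruction prepended."""
    return prefix + prompt

print(with_brevity("Explain why this curl request fails with HTTP 403."))
# -> Briefly: Explain why this curl request fails with HTTP 403.
```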
cake-duke
in reply to daniel:// stenberg://
in reply to daniel:// stenberg:// • • •On the first point: I imagine that an LLM improving in style is way way easier than improving in substance. So if LLMs read your posts and improve in style, you'll get the same substance-slop but harder for you to detect. That's bad for you.
On the second point: I'm not so sure about that. I'll come back to this point.
cake-duke
in reply to daniel:// stenberg://
Back on the second point. Here's a link [1]. This article calls this "poisoning" because they're talking about malicious attacks.
[1] anthropic.com/research/small-s… ("A small number of samples can poison LLMs of any size", www.anthropic.com)

cake-duke
in reply to cake-duke
Here's the passage I wanted to show you:
> poisoning attacks require a near-constant number of documents regardless of model and training data size. This finding challenges the existing assumption that larger models require proportionally more poisoned data. Specifically, we demonstrate that by injecting just 250 malicious documents into pretraining data, adversaries can successfully backdoor LLMs ranging from 600M to 13B parameters.
cake-duke
in reply to cake-duke
To me this suggests that an LLM could indeed learn from one specific individual if it deems that individual "important" enough.
In that article they talk about triggering on a "specific phrase", but an LLM has its own internal representation of the text, so in principle it might trigger on other things, like a specific phrase plus the name of the user on a certain website, or something.
daniel:// stenberg://
in reply to cake-duke
"A new breed of analyzers" (daniel.haxx.se)

Christian
in reply to daniel:// stenberg://
Agreed. But as a non-programmer, trying to submit to any project mostly gets the reply "Not enough data, what about FunkyMethodYouCantKnow()?" - AI helps mitigate this.
Either developers accept the fact that there are incapable human beings on the other end OR they accept it's an LLM. What's not going to happen: 100% qualified bug reports.
James Healy
in reply to daniel:// stenberg://
Reminds me of the Oxide GenAI RFD:
“Finally, LLM-generated prose undermines a social contract of sorts: absent LLMs, it is presumed that of the reader and the writer, it is the writer that has undertaken the greater intellectual exertion. (That is, it is more work to write than to read!) For the reader, this is important: should they struggle with an idea, they can reasonably assume that the writer themselves understands it — and it is the least a reader can do to labor to make sense of it”