2025 had ups and downs, but I loved my 2025 at #IzzyOnDroid.
Our team has grown with several new members and we launched izzyondroid.org/, our new web home!
We even got our first grant ever (thanks, @nlnet!), allowing us to bring download statistics to IzzyOnDroid and integrate it directly into #NeoStore (and soon #Droidify), improve the #ReproducibleBuild system and more.
I'm grateful to the team for giving me a space where I feel able to make a difference ❤️
Here is to 2026! 🎉
youtube.com/watch?v=cMDIDz58LO…
ZWIFT 5000 KM WORLD RECORD ATTEMPT! - Team #IMOVEFORCANCER - Fueled by Umara
On Christmas Eve I will take on the toughest cycling challenge of my life…I will attempt to ride 5,000 km on Zwift in eight days, for a new official world re...YouTube
the AI slop in security reports have developed slightly over time. Less mind-numbingly stupid reports now, but instead almost *everyone* writes their reports with AI so they still get overly long and complicated to plow through. And every follow-up question is another minor essay discussing pros and cons with bullet points and references to multiple specifications.
Exhausting nonetheless.
On the first point: I imagine that an LLM improving in style is way way easier than improving in substance. So if LLMs read your posts and improve in style, you'll get the same substance-slop but harder for you to detect. That's bad for you.
On the second point. I'm not so sure about that. I'll come back on this point.
Back on the second point. Here's a link
[1]This article calls this "poisoning" because they're talking about malicious attacks.
[1] anthropic.com/research/small-s…
A small number of samples can poison LLMs of any size
Anthropic research on data-poisoning attacks in large language modelswww.anthropic.com
Here's the passage I wanted to show you:
> poisoning attacks require a near-constant number of documents regardless of model and training data size. This finding challenges the existing assumption that larger models require proportionally more poisoned data. Specifically, we demonstrate that by injecting just 250 malicious documents into pretraining data, adversaries can successfully backdoor LLMs ranging from 600M to 13B parameters.
To me this suggests that an LLM could indeed learn from one specific individual if it deems that individual "important" enough.
In that article they talk about triggering on a "specific phrase", but an LLM has its own internal representation of the text, so in principle might trigger on different things like specific phrase plus the name of the user on a certain website, or something.
A new breed of analyzers
(See how I cleverly did not mention AI in the title!) You know we have seen more than our fair share of slop reports sent to the curl project so it seems only fair that I also write something about the state of AI when we get to enjoy some positive a…daniel.haxx.se

Ludovic :Firefox: :FreeBSD:
in reply to daniel:// stenberg:// • • •