Zach Bennoui reshared this.
Just realized: Whenever I read outrageous news about politics, my outrage comes second. First, my brain makes an attempt to find a perspective in which it might make sense to act like these morons do.
That's not healthy for my brain. But I've trained myself so well that I can't seem to unlearn the reflex.
And this is the main reason why I have to avoid news these days. Of course it's also because of the helplessness and all the bad emotions. But mainly because "understanding" causes damage to my brain and soul.
"Continue, or try it now?" a popup from Gmail now asks, offering to compose your next message with Gemini. I guess the tiny "x" is the "fuck no" button?
Yesterday I switched to Windows Terminal and PowerShell 7 from the old Windows Console Host and batch syntax, and I do somewhat feel like I've been asleep at the wheel for years.
Proper UTF-8 support, aliases, a profile to configure things at shell startup, command output capture, correct parsing of ANSI escape sequences... In short, things people should expect from a real shell.
Hopefully this doesn't prompt NVDA to start shitting the bed at every opportunity as it apparently does for many others.
If you know me, you'll know that I'm not a friend of AI - but like the original Luddites I am not against the technology per se, but the use of it to drive an exploitative societal development.
@pluralistic has put it more eloquently than I ever could. So, read this:
theguardian.com/us-news/ng-int…
AI is asbestos in the walls of our tech society, stuffed there by monopolists run amok. A serious fight against it must strike at its roots.
Cory Doctorow (The Guardian)
Fully blind software developer who loves making their own tools to solve problems. Primarily known for Paperback, an accessible and lightning fast ebook/document reader. (GitHub)
reshared this
The FOSDEM 2026 Schedule app for Android is now available:
- f-droid.org/packages/info.meta…
- play.google.com/store/apps/det…
- Search filters
- New session cards design
- Edge-to-edge support
- New settings options
#fahrplan #fosdem #fosdem2026 #opensource @fosdem @fosdempgday @fosdembsd
Conference program app for the FOSDEM conference (f-droid.org)
Peter Vágner reshared this.
the whole ai-bro shtick about "ai democratizes art/programming/writing/etc" always seemed like bs to me, but i couldn't put it into words. i think i now know how.
ai didn't democratize any of these things. People did. The internet did. if all these things weren't democratized and freely available on the internet before, there wouldn't have been any training data available in the first place.
the one single amazing thing that today's day and age brought us is that you can learn anything at any time for free at your own pace.
like, you can just sit down and learn sketching, drawing, programming, writing, basics in electronics, pcb design, singing, instruments, whatever your heart desires, and apply and practice these skills. fuck, most devs on fedi are self-taught.
the most human thing there is is learning and creativity. the least human thing there is is trying to automate that away.
(not to mention said tech failing at it miserably)
reshared this
It democratizes it by making it available for the people who can't / don't want to / don't have the time for learning it.
We're already seeing non-programmers successfully create quite substantial coding projects with AI, to an extent that surprises even me, who was a huge proponent of AI in coding from the start.
Same applies to art, there are many people who need or want art (small business owners, hobbyist game creators, wedding organizers, school teachers), but don't have the budget for the real thing.
Of course, many artists and programmers don't want this to happen and try to invent reasons why this is a bad idea, just as phone operators didn't want the phone company to "force" customers to make their own calls, and just as elevator drivers tried to come up with reasons why driverless elevators were unsafe.
@miki without trying to convince you of anything (your stance on ai is yours, i'm not trying to change it), I can assure you that the reasons why many developers see generating production code with AI as a bad idea are not made up.
I am all for exchanging ideas between folks with different opinions, but this had to be said.
I see putting a prompt into AI and hoping that the generated code is correct as a bad idea, especially in complex apps that have long-term maintainability considerations, or when security / money / lives are at stake.
For throwaway projects (think "secret santa style gift exchange for a local community with a few extra constraints, organized by somebody with 0 CS experience"), vibe coding is probably fine.
For professional developers, LLMs can still be pretty useful. Even if you have to review the code manually, push back on stupidity, and give it direction on how to do things, not just what to do (which is honestly what I do for production codebases), it's still a force multiplier.
@miki that's a reasonable middle ground we can somewhat agree on.
I haven't seen AI-generated code being the "force multiplier" some folks swear by, especially with newer things like the config changes in pipewire last year, but i guess ymmv
I think we're painfully re-learning the lessons we learned in programming over the last 70 or so years with AI, just like crypto had to painfully re-learn the lessons that trad fi got to learn in the last five hundred years.
Yes, you can 20x your productivity with AI if you stop worrying at all about architecture and coding practices, just like you can 5x your productivity without AI if you do the same thing. Up to a point. Eventually, tech debt will rear its ugly head, and the initial gains in productivity will be lost due to the bad architectural decisions. Sometimes that
@miki
> It democratizes it by making it available for the people who can't / don't want to / don't have the time for learning it.
No, I'm sorry, but it doesn't.
What it "democratises" is being an art director who commissions a machine to generate things derived from the (uncredited, un-compensated) work of others (whose lack of consent was gleefully violated).
Gutenberg democratised learning, with his movable-type press.
Encyclopaedias took that a step further, and Wikipedia amped it up again.
Blogs and Youtube democratised the sharing of knowledge and skills.
All these things have enabled people to learn how to do a thing.
But if you typed in a description and got a picture in return, you did not create that picture. You commissioned it.
@KatS It democratizes in the public transit way (by making transport available to non-drivers), not in the car way (by making it easy).
And btw: all art is uncredited and a lot of it is non-consensual. Outside of academia, it's extremely rare to credit every single influence that an artist used, down to Da Vinci or the Gregorian chants, as long as significant snippets aren't extracted directly from that work, something that AI only does when prompted.
@miki @KatS we're not talking about influences, but more akin to "retracing".
Besides, there are real implications regarding free software licenses and AI generated slop, so it's not exclusively a moral dilemma, but a legal one too.
legal != the right thing to do necessarily, but mangling a bunch of intellectual property that's not yours through a statistical computer program isn't exactly comparable with an aspiring artist learning to draw.
@miki Wow.
It'll make for more efficient communication in future if you make it explicitly clear that you're democratising the commissioning of things, and working hard to devalue artistry in all its forms.
Talking about "democratising art" is typically read as making it easier for people to make art.
This is what leads to this kind of convoluted exchange.
@KatS The more you know about LLMs, the more "calibrated" you are about where they work (and don't work) right now. People who don't know much about them are either hypesters (making a company of a thousand LLMs and firing all their employees), or LLM deniers. Both are just as crazy.
I also see not just where LLMs are right now, but where they are going. We went from coding agents being basically a joke a year ago, to them semi-autonomously solving (some) complex mathematical problems and being used for boring gruntwork by world-class, Fields Medal-winning mathematicians. They can now also solve an extremely complex GPU performance engineering task that Anthropic used as an interview question for the most brilliant engineers in that discipline, *better than any human given the same amount of time*.
They're still much better at small, well-scoped and bounded tasks than at large open-ended problems, but "small and well-scoped" went from "write me a linked list implementation unconnected to anything in my code" to "write me a small feature and follow the style of my codebase." In a year. What will happen in another year? 5 years? 10 years? God only knows, and he certainly isn't telling.
@KatS look @miki don't get me wrong but any time i've tried using LLMs for my work, which isn't just some fun side project but actual production-running code, LLMs have been way too unreliable. It also resulted in me knowing jack shit about my own code, which is poison for long term maintainability.
Since these models are just statistically determining the next most likely token based on training data and fine tuning, without any actual understanding or thought behind it, I seriously can't see this tech ever being reliable enough. (reliable compared to humans; i don't expect 100% reliability here, natural language is too imprecise for that anyways. i'd define "good enough" as "as good as a professional in the given field")
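As a toy aside (entirely hypothetical; no real model or library, just an illustration of the idea), "predicting the next most likely token" boils down to picking the highest-probability continuation from a distribution over the vocabulary. A real LLM computes that distribution with a neural network conditioned on the whole context; here a lookup table stands in for it:

```python
# Toy sketch of greedy next-token selection. The "model" is a hypothetical
# lookup table mapping a context (tuple of previous tokens) to a probability
# distribution over possible next tokens, standing in for the distribution
# a real LLM would compute.
def next_token(context, model):
    probs = model.get(context, {})
    if not probs:
        return None  # context never seen: a real model still outputs *something*
    # Greedy decoding: take the single most probable continuation.
    return max(probs, key=probs.get)

toy_model = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "is": 0.1},
    ("cat", "sat"): {"on": 0.9, "down": 0.1},
}

print(next_token(("the", "cat"), toy_model))  # -> sat
```

The point of the toy: the procedure only ever asks "which continuation is most probable?", never "is this continuation true or sensible?", which is exactly the reliability concern raised above.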
The other part of the equation is the amount of compute and electrical energy necessary to train and operate models at that level, and at that scale, there's no way in hell that shit is ever gonna be worth it, financially and environmentally.
i'm not expecting the "make the phone operators' job easier" level; i expect the "when i dial a number, it should be at least as reliable and efficient at routing it correctly as a phone operator would be" level.
you can call me whatever you want, even llm denier if you need to, but autocorrect on steroids isn't worth exploiting other people's work or boiling our oceans.
@miki The last thing I think I can usefully add to this thread is that you sound very much like the kind of person Michael Crichton wrote about.
I recommend watching Westworld some time - the movie, that is. I've never seen the series based on it.
@miki @KatS if i memorize every possible answer to a specific test, i can pass too. doesn't mean i know shit about fuck.
There's no actual thinking or reasoning involved (and no, reasoning models don't actually "reason"), so yeah, an LLM isn't actually intelligent, it just shows how flawed our tests for intelligence are.
To get some actual intelligence, thinking, or reasoning involved, I'd reckon we'd have to fundamentally change something in the architecture of LLMs and use a fuckton more computing resources for a single model. Considering how much energy the current tech already wastes, and that the whole shtick that made LLMs (and more broadly generative AI) work in the first place is "we discovered that the output gets better when we throw ridiculous amounts of compute at the problem", it's already getting super difficult to run and maintain.
Honestly, either you're unreasonably optimistic, or you've never taken a look at how things actually work under the hood, but I really recommend you take a closer look at the technology you praise so much.
A couple things you could take a look at (without an AI summarizer, otherwise you'd learn jack shit):
"Attention Is All You Need", the paper that sparked the whole AI craze and the development of GPT models, and "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity", which tests reasoning models across all sorts of levels of problem complexity to infer their strengths and weaknesses.
Honestly, before you make any claims about where the tech could be and what it could do, you should have a rough idea of how things actually work under the hood, otherwise, no offense, you're just talking out of your arse.
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. (arXiv.org)
@KatS I have very specifically said "unseen questions."
If memorizing answers was a viable strategy to pass that test, humans would have done so.
If you still believe that there's no possible use for a tool that can get gold on a never-before-used set of math olympiad questions given a few hours of access to a reasonably powerful computer, and that the existence of that tool will have no interesting impact on the world... I don't know what to tell you.
@miki @KatS > If you still believe that there's no possible use for a tool that can get gold on a never-before-used set of math olympiad question given a few hours of access to a reasonably powerful computer, and that the existence of that tool will have no interesting impact on the world...
How reliable is that source? And if that's true, is it really reasonable to bet everything on this and let it do all your work, when a) you end up completely dependent on the tech and b) you utterly destroy the environment in the process?
Real world problems may be less complex but might require much more context.
Oh, and don't get me started on accountability. There's a reason why curl is closing their bug bounty program.
@KatS Nothing is ever gonna work right, not even humans. Different technologies are at different points on the price-to-mistakes curve, our job is to find a combination that minimizes price while also minimizing mistakes and harm caused.
E.g. it is definitely true that humans are much, much better psychologists than LLMs, but LLMs are free, much more widely available in abusive environments, speak your language even if you are in a foreign country, and work at 4AM on a Saturday when you get dumped by your partner. Human psychologists do not. Very often, the choice isn't between an LLM and a human; the real choice is between an LLM and nothing (and the richer you are, the less true this is, hence the "class divide" in opinions about tech). And I'm genuinely unsure which option wins here, but considering the rate of change over the last 3 years, I wouldn't bet towards "nothing" winning for long.
@miki
Access to art doesn't need to be democratised. Smaller galleries are free to enter, the large ones have their entire collections online.
If one can't afford to buy it, then they shouldn't steal it via GenAI. Support a local artist or designer who is just starting out, send a message to an artist whose work you liked that you saw on Instagram. If you're a small business owner, design should be taken into account when doing a business plan. Same with anyone else who needs art.
If one doesn't have time to learn it, don't. The whole concept of using GenAI because you don't have the time or skills reeks of entitlement.
reshared this
An important PSA for people who are active on #Bluesky and who, upon hearing that the ICE account was officially verified, are saying: "I will just block it."
Blocking on Bluesky is NOT PRIVATE: it's very easy to see who is blocking any account by visiting sites that list that information.
I took a screenshot from clearsky.app, listing all the accounts that are blocking ICE (I pixelated avatars and usernames for privacy purposes).
The safest bet is to mute (that info is private).
This is also true about Mastodon*, but Mastodon actively tries to hide that fact from users and muddy the waters.
* It's technically hidden to users but not the admins of the instances involved, but if you're a gov agency, you're presumably on your own instance, as seems to be the custom here for the "big players."
miki reshared this.
In a way, #Putin even got more than he ever could wish for
All for free by #Trump
Alliances shattered, internal threats, everyone really disliking the US, speaking about war within #NATO even
It's unbelievable how much damage that senile dic(tator) has done within a year
I really hope we learn from this... But history has shown otherwise, I guess
This post by Bruce Schneier contains so many thoughtful soundbites:
> The question is not simply whether copyright law applies to AI. It is why the law appears to operate so differently depending on who is doing the extracting and for what purpose.
> Like the early internet, AI is often described as a democratizing force. But also like the internet, AI's current trajectory suggests something closer to consolidation.
schneier.com/blog/archives/202…
More than a decade after Aaron Swartz's death, the United States is still living inside the contradiction that destroyed him. Swartz believed that knowledge, especially publicly funded knowledge, should be freely accessible.
Bruce Schneier (Schneier on Security)
I like looking at this through the concept of "enjoyment", which was originally developed in Japan I believe.
From that point of view, copyright only applies to a work when it is used for "enjoyment", for its intended purpose. If the work is primarily entertainment, it applies when the consumer is using it to entertain themselves. If the work is educative, it applies when the consumer is using it to learn something. It does not apply when the work is used for a purpose completely unrelated to its creation, such as testing a CD player on an unusual CD, demonstrating the performance of a speaker system, training a language model to classify customer complaints etc.
(This isn't a legal perspective, not even quite in Japan I believe, but it's a useful lens through which we can look at the world and which people can use to decide on policy.)
As curl now supports TLS (mqtts), it is no longer necessary to list it as a limitation in the docs. (GitHub)
TL;DR Most EV batteries will last longer than the cars they're in. Battery degradation is at better (meaning: lower) rates than expected. Slow charging is better. Drive EV and don't worry about your battery.
"Our 2025 analysis of over 22,700 electric vehicles, covering 21 different vehicle models, confirms that overall, modern EV batteries are robust and built to last beyond a typical vehicle's service life."
PQ leader says Legault's resignation further evidence of need for independent Quebec
cbc.ca/news/canada/montreal/qu…
tl;dr: the leader of the PQ is full MAGA. He believes in Santa Claus. He believes that under the US dictatorship, Quebec and its francofascism would be safe. Remember, MAGA implies hating anyone speaking something other than English.
RE: mstdn.social/@TechCrunch/11591…
oh look, another AI chat tool pops on the block.
Attached: 1 image. Confer is designed to look and feel like ChatGPT or Claude, but your conversations can't be used for training or advertising. https://techcrunch.
TechCrunch (Mastodon)
Why Poilievre and Carney Are Silent on Grokās Child Sexual Abuse
thetyee.ca/Opinion/2026/01/15/…
The former is just in his cesspool, running his con. The latter is just a hypocritical, cowardly elite that would have no problem with Internet legislation when they can't enforce the basics.
The Liberals are afraid of Trump. The Conservatives fear their base.
Eric Van Rythoven (The Tyee)
Christopher Duffley (@ChrisDuffley), in reply to Bri: Uh, do we know why $source$ is not working for post templates? It's strange hearing $source$ every time I want to see what the source of the post is, especially mainly locally. 14:51:26 from $source$.