Appeasement didn't work with Hitler.

Appeasement didn't work with Putin.

Appeasement isn't working with Trump.

When will Europe learn this most basic lesson about how to deal with imperialist bullies?

politico.eu/article/hit-back-d…

Accessibility is not an excuse. For some backstory: I was at lunch with my parents when they told me I could do classes online but would most likely have to pay for it, to which I said, "I would have to do some checking to make sure the platform was accessible." My dad then said one of the worst things anyone could ever say, family or not: "Oh, there's always an excuse."

Accessibility. is. not. an excuse. It might seem like one to someone who doesn't have to think about something like that, but that attitude is disgusting. Accessibility comes in many forms, and having to think about whether something is going to be accessible or not is so goddamn tiring. I don't like that thought being the first thing I say, but that's a reality I have to accept. Again I'll say: accessibility, no matter what form you're thinking about, is no excuse. After this interaction, I'm left with -1 spoons and a desire to do nothing else for the day.

Sometimes you just have to jump in. Not everything will be the best decision, but honestly, getting a Raspberry Pi and not waiting for the perfect moment to start self-hosting (until I had better hardware, or more money, or better internet, or lived in my own home, etc.) was one of my best decisions of 2025. I love being able to spin up seriously useful Docker containers in seconds, and only today I added Beszel and linkding to the list of useful tools, alongside Gitea, Audiobookshelf, and Joplin. Not only does it save money (hosting a Joplin server instead of paying for Joplin Cloud, for example), it's fun, it's satisfying, and it takes almost no resources. Just had to say it again: this is good.

Anybody have ideas about where to host a podcast (low traffic)?

One thousand years ago, I started a thing on a podcast service that got acquired by Spotify. The podcast is served from Spotify today. I’d like to change that.

I did figure out a story for hosting the podcast feed directly from my Ghost dot org site, but it’s a bit too fiddly for me to want to take that on right now, especially with a back catalog of episodes I’d want to keep online (ideally without migrating each item manually).

Hello everyone and welcome to the official #FastSM Mastodon account. Automated release announcements as well as tips and tricks and support will be posted here. I hope you enjoy FastSM! You can find the GitHub repo here: github.com/masonasons/FastSM


RE: mastodon.social/@NouranKhaledG…

If you are still thinking of us, please donate and share to help my family overcome this tough time

chuffed.org/project/121561-urg…


I'm a human being. I have dreams. But the genocide changed my dreams. In the past I had big dreams. But now all my dreams are to live a normal life.

What does a normal life mean for me?

A normal life is to sleep peacefully at night. To have a home where my family gathers. To eat healthy food and drink clean water. To meet friends at the university. That's it! Unfortunately, these basics became big dreams.

Please donate to help my family overcome this tough time

chuffed.org/project/121561-urg…

Muslim mindset: “I’m fasting, don’t eat in front of me or I might be tempted.”
Christian: practices self-control and doesn’t make a public show of fasting.

Muslim man: sees a woman who isn’t fully covered and says, “Cover yourself or I’ll be tempted.”
Christian man: sees the same thing and says, “I need to guard my heart and discipline my eyes so I don’t sin.”

Christianity deals with the heart. We emphasise self-discipline and self-control. Islam, on the other hand, tries to control the environment instead, asking others to change because the individual hasn’t learned to master himself.

When the heart is truly transformed, temptation loses its power. Self-control means taking responsibility for your own desires, not placing the burden on others. A disciplined heart governs the flesh, not the other way around.

What are your pain points, folks? Stuff that you hate doing or dealing with, or problems you can't find a good solution to? Stuff that other people might be frustrated with, too.

I'm looking for a way to make myself valuable to other people, as a way to both help people and also earn an income to feed my family in the process.

One thing I can do *really well* is create reliable software to automate rote tasks, generate financial/statistical/other reports, or calculate difficult solutions. Think it can't be done without LLMs? I might surprise you!

Throw me a bone!

Please boost for reach!

#PainPoints
#WishList
#Automation
#Reporting
#ProblemSolving
#FediHire
#GetFediHired
#FediJob

in reply to Aaron

@NicksWorld So I just downloaded this out of curiosity, and it seems like they definitely did some good work on accessibility. Unfortunately, on macOS with VoiceOver it doesn't really behave like a standard Mac app in terms of the UI, and VoiceOver doesn't work how you would expect. This completely makes sense as it's open source and likely not developed using something like SwiftUI, but for Mac users I would honestly stick to iWork.

A good friend of mine needs a lot of help. Facing health challenges as well as eviction, she needs enough covered to keep her, her partner and their cats from becoming homeless. Payment is going to be due by February or they get evicted, and they need as much covered as possible. They've personally helped me out before in my time of need nearly 2 years ago; please help me boost and cover their costs, I'll be forever grateful!

gofund.me/d74ed73c7

#MutualAid #Seattle #Mutual #Aid #GoFundMe

Python in 2026:

- New code doesn't work; missing dependency
- Dependency can't be installed with old PIP
- PIP can't update itself, since it is too old
- Delete PIP, download PIP installer
- PIP installer is too new for old Python
- Download old-style new PIP installer
- Install new PIP
- Install dependency

Now I'm sitting there wondering what the new code was supposed to solve. Forgot why I ever tried to change that thing.
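For reference, the usual recovery path can be scripted end to end; a minimal sketch, assuming a Python new enough to still ship ensurepip (3.4+), with "requests" standing in for whatever dependency the new code actually needed:

```python
import subprocess
import sys

def bootstrap_pip_and_install(package: str) -> None:
    # Re-seed pip from the copy bundled with the interpreter.
    subprocess.run([sys.executable, "-m", "ensurepip", "--upgrade"], check=True)
    # Upgrade pip itself before asking it to resolve anything modern.
    subprocess.run([sys.executable, "-m", "pip", "install", "--upgrade", "pip"], check=True)
    # Finally install the dependency the new code needed in the first place.
    subprocess.run([sys.executable, "-m", "pip", "install", package], check=True)

bootstrap_pip_and_install("requests")  # placeholder package
```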

We're having a bit of a rough weekend. Had some things yesterday that locked the system out. Nobody could switch and the fronter had no connection to our headspace. We don't assume today will be that much better, but we're hoping tomorrow and the rest of the week will be less of a mess. How's everyone else's time? As always we hope everyone is well and taking care of themselves. - Samara

OK, so all I can trace from my minidumps is that my crashes are happening in how I'm calling LibEspeak.dll. Hmm. This engine may not be ready for a while. I'm going to have to break down both the x86 and x64 calling conventions for eSpeak. At least it's open source, so this isn't hard, just more work.
OK, looks like the issue is with eSpeak initialization: calling FreeLibrary in the destructor may unload the module while leaving the static variable espeak_initialized true. Huh. Reference counters, here we come.
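A minimal sketch of that reference-counting idea, written here in Python/ctypes purely to illustrate the pattern (the real driver is native code; the DLL name comes from the post, and the espeak_Initialize argument list is simplified):

```python
import ctypes

_lib = None
_refcount = 0

def acquire():
    # Load and initialize eSpeak only for the very first user.
    global _lib, _refcount
    if _refcount == 0:
        _lib = ctypes.CDLL("LibEspeak.dll")    # DLL name taken from the post
        _lib.espeak_Initialize(0, 0, None, 0)  # simplified argument list
    _refcount += 1
    return _lib

def release():
    # Tear down and allow unloading only when the last user is done.
    global _lib, _refcount
    _refcount -= 1
    if _refcount == 0:
        _lib.espeak_Terminate()
        _lib = None  # only now is it safe to FreeLibrary / drop the module
```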

Just realized: Whenever I read outrageous news about politics, my outrage comes second. First, my brain makes an attempt to find a perspective in which it might make sense to act like these morons do.

That's not healthy for my brain. But I've trained myself so well that I can't seem to unlearn the reflex.

And this is the main reason why I have to avoid news these days. Of course it’s also because of the helplessness and all the bad emotions. But mainly because „understanding“ causes damage to my brain and soul.

#actuallyAutistic
@autistics

Ok, @x0 will also be happy to know: I added two new language settings:
1) autoTieDiphthongs
When enabled, the frontend scans token sequences, and if it sees that the previous token is a vowel/semivowel, the current token is a vowel/semivowel, the current token is NOT wordStart and NOT syllableStart (so we don't smash hiatus), it's not already tied or lengthened, and the second vowel looks like a typical offglide candidate (high vowels like i, ɪ, u, ʊ, …)
…it marks them as tied internally (prev.tiedTo=true, cur.tiedFrom=true), so timing treats the second part as a short offglide.
2) autoDiphthongOffglideToSemivowel
Optional, off by default. If this and autoTieDiphthongs are both enabled, then when we auto-tie we also try to swap the offglide vowel to a semivowel: i/ɪ/ɨ -> j, u/ʊ -> w. This is the "make the glide more obvious" switch. I hope these will help people.
@x0
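For anyone curious how the scan fits together, here is a rough Python sketch of the pass described above; the field names (wordStart, syllableStart, tiedTo, tiedFrom) come from the post, while the Token class and the phoneme sets are just illustrative stand-ins:

```python
from dataclasses import dataclass

OFFGLIDE_CANDIDATES = {"i", "ɪ", "ɨ", "u", "ʊ"}  # typical high-vowel offglides
VOWELS_AND_SEMIVOWELS = OFFGLIDE_CANDIDATES | {"a", "e", "o", "j", "w"}  # toy set

@dataclass
class Token:
    phoneme: str
    wordStart: bool = False
    syllableStart: bool = False
    lengthened: bool = False
    tiedTo: bool = False
    tiedFrom: bool = False

def auto_tie_diphthongs(tokens, offglide_to_semivowel=False):
    swap = {"i": "j", "ɪ": "j", "ɨ": "j", "u": "w", "ʊ": "w"}
    for prev, cur in zip(tokens, tokens[1:]):
        if (prev.phoneme in VOWELS_AND_SEMIVOWELS
                and cur.phoneme in VOWELS_AND_SEMIVOWELS
                and not cur.wordStart and not cur.syllableStart  # don't smash hiatus
                and not prev.tiedTo and not cur.tiedFrom
                and not cur.lengthened
                and cur.phoneme in OFFGLIDE_CANDIDATES):
            prev.tiedTo = True   # timing treats the second part as a short offglide
            cur.tiedFrom = True
            if offglide_to_semivowel and cur.phoneme in swap:
                cur.phoneme = swap[cur.phoneme]  # autoDiphthongOffglideToSemivowel
```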

Gosh though. People are really helping me add engine-level settings, which is exciting, I guess. The more settings, the better; the more we can expose for people to tweak, the better. I'll also be updating the phoneme editor later on, because I like the idea of using a spin-edit box and auto-defaulting paths, and a few other things will be improved. It's also not considering rules when speaking text from language-specific data, and that needs fixing. Bugs, bugs.

Yesterday I switched to Windows Terminal and PowerShell 7 from the old Windows Console Host and batch syntax, and I do somewhat feel like I've been asleep at the wheel for years.

Proper UTF-8 support, aliases, a profile to configure things at shell startup, command output capture, correct parsing of ANSI escape sequences... In short, things people should expect from a real shell.

Hopefully this doesn't prompt NVDA to start shitting the bed at every opportunity as it apparently does for many others.

If you know me, you'll know that I'm not a friend of AI - but like the original Luddites, I am not against the technology per se, but against its use to drive an exploitative societal development.

@pluralistic has put it more eloquently than I ever could. So, read this:

theguardian.com/us-news/ng-int…

Sigh. Since we added a new setting, have eurpod.com/synths/nvSpeechPlay… - especially if you speak Portuguese, it might help. Or maybe it'll screw things up so badly your language won't sound the same. Who knows. Unlucky 13. Guess I wasn't superstitious enough to skip it. Ah well.
If it ever sounds like there's "no diphthong," it's usually because the boundary gap or timing makes the two parts separate, or the glide is too quiet. We just added a setting to skip boundary gaps for vowel-to-vowel transitions, which is basically the diphthong smoother. Dedicated diphthong phonemes are optional and mostly for extra fine control. To use this for your language, toggle segmentBoundarySkipVowelToVowel: true (the default) or false. This should give folks even more control over gaps, and you can mess around with the other settings in default.yaml for a given language to see if they help change prosody.

Paperback version 0.7.0 is out, with a huge changelog!
* Added table support for HTML and XHTML-based documents! Navigate between tables using T and Shift+T, and press Enter to view one in a webview.
* Added a basic web rendering feature! Press Ctrl+Shift+V to open the current section of your document in a web-based renderer, useful for content like complex formatting or code samples.
* Added a Russian translation, thanks Ruslan Gulmagomedov!
* Added a Clear All button to the All Documents dialog.
* The update checker now displays release notes when a new version is available.
* Updated Serbian translation.
* Updated Bosnian translation.
* Fixed restoring the window from the system tray.
* Fixed Yes/No button translations in confirmation dialogs.
* Fixed loading configs when running as administrator.
* Fixed comment handling in XML and HTML documents.
* Fixed TOC parsing in EPUB 2 books.
* Fixed navigating to the next item with the same letter in the table of contents.
* Fixed the find dialog not hiding properly when using the next/previous buttons.
* Fixed EPUB TOCs occasionally throwing you to the wrong item.
* Fixed various whitespace handling issues in XML, HTML, and pre tags.
* Fixed off-by-one error in link navigation.
* Fixed some books having trailing whitespace on their lines.
* Fixed various parser issues.
* Bookmark-related menu items are now properly disabled when no document is open.
* The elements list is now properly disabled when no document is open.
* Improved list handling in various document formats.
* Improved the translation workflow for contributors.
* Many internal refactors, moving the majority of the application’s business logic from C++ to Rust for improved performance and maintainability.
Download: paperback.dev/downloads/
Sponsor on GitHub: github.com/sponsors/trypsynth
Donate to development through PayPal: paypal.me/tygillespie05
Enjoy!

The ⚙️ FOSDEM 2026 Schedule ⚙️ app for Android is now available:

🛒 f-droid.org/packages/info.meta…
🛒 play.google.com/store/apps/det…

🆕 Search filters
🆕 New session cards design
🆕 Edge-to-edge support
🆕 New settings options

#fahrplan #fosdem #fosdem2026 #opensource @fosdem @fosdempgday @fosdembsd

I work as an audiobook quality controller. My employer uses ClickUp to manage tasks. Unfortunately, the web interface is unintuitive and inaccessible: it contains unnamed elements, menus that expand in all kinds of ways, and similar issues. I wrote to their developers, but even after years nothing has been fixed. Fortunately, ClickUp has an open API. So I used vibecoding, and now I have my own minimalist HTML application that displays my tasks, start and end dates, comments, and attachments, and allows me to post comments. I still can't change task statuses yet; we'll see if I manage to solve that with the help of GPT. Of course, this is not how a blind person should have to function in an ideal world, but it is still a way we can help ourselves. That said, I still need a server where my PHP scripts run; it could probably be done with Python as well, but PHP seemed simpler to me since it's already running on my server. BTW, it would be ideal if the ClickUp API documentation were one simple document I could throw at GPT, but it seems ChatGPT already knows how to use it.
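A rough sketch of the same glue in Python (which the post notes would work just as well), against ClickUp's public v2 REST API; the endpoint paths and field names here are from memory of the docs and should be double-checked, and CLICKUP_TOKEN / LIST_ID are placeholders:

```python
import requests

API = "https://api.clickup.com/api/v2"
HEADERS = {"Authorization": "CLICKUP_TOKEN"}  # personal API token

def list_tasks(list_id: str):
    # Fetch the tasks in one list, with their names, statuses and due dates.
    r = requests.get(f"{API}/list/{list_id}/task", headers=HEADERS)
    r.raise_for_status()
    return r.json().get("tasks", [])

def post_comment(task_id: str, text: str):
    # Add a comment to a task, which the minimal HTML app already does via PHP.
    r = requests.post(f"{API}/task/{task_id}/comment",
                      headers=HEADERS, json={"comment_text": text})
    r.raise_for_status()
    return r.json()

for task in list_tasks("LIST_ID"):
    print(task["name"], task["status"]["status"], task.get("due_date"))
```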

Peter Vágner reshared this.

the whole ai-bro shtick about "ai democratizes art/programming/writing/etc" always seemed like bs to me, but i couldn't put it into words. i think i now know how.

ai didn't democratize any of these things. People did. The internet did. if all these things weren't democratized and freely available on the internet before, there wouldn't have been any training data available in the first place.

the one single amazing thing that today's day and age brought us is that you can learn anything, at any time, for free, at your own pace.

like, you can just sit down, and learn sketching, drawing, programming, writing, basics in electronics, pcb design, singing, instruments, whatever your heart desires and apply and practice these skills. fuck, most devs on fedi are self taught.

the most human thing there is is learning and creativity. the least human thing there is is trying to automate that away.

(not to mention said tech failing at it miserably)

reshared this

in reply to Tech Goblin Lucy 🦝

It democratizes it by making it available for the people who can't / don't want to / don't have the time for learning it.

We're already seeing non-programmers successfully create quite substantial coding projects with AI, to an extent which surprises even me, who was a huge proponent of AI in coding from the start.

Same applies to art, there are many people who need or want art (small business owners, hobbyist game creators, wedding organizers, school teachers), but don't have the budget for the real thing.

Of course, many artists and programmers don't want this to happen and try to invent reasons why this is a bad idea, just as phone operators didn't want the phone company to "force" customers to make their own calls, and just as elevator drivers tried to come up with reasons why driverless elevators were unsafe.

in reply to Tech Goblin Lucy 🦝

I see putting a prompt into AI and hoping that the generated code is correct as a bad idea, especially in complex apps that have long-term maintainability considerations, or when security / money / lives are at stake.

For throwaway projects (think "secret santa style gift exchange for a local community with a few extra constraints, organized by somebody with 0 CS experience"), vibe coding is probably fine.

For professional developers, LLMs can still be pretty useful. Even if you have to review the code manually, push back on stupidity, and give it direction on how to do things, not just what to do (which is honestly what I do for production codebases), it's still a force multiplier.

in reply to Tech Goblin Lucy 🦝

I think we're painfully re-learning the lessons we learned in programming over the last 70 or so years with AI, just like crypto had to painfully re-learn the lessons that trad fi got to learn in the last five hundred years.

Yes, you can 20x your productivity with AI if you stop worrying at all about architecture and coding practices, just like you can 5x your productivity without AI if you do the same thing. Up to a point. Eventually, tech debt will rear its ugly head, and the initial gains in productivity will be lost due to the bad architectural decisions. Sometimes that

in reply to miki

@miki

> It democratizes it by making it available for the people who can't / don't want to / don't have the time for learning it.


No, I'm sorry, but it doesn't.

What it "democratises" is being an art director who commissions a machine to generate things derived from the (uncredited, un-compensated) work of others (whose lack of consent was gleefully violated).

Gutenberg democratised learning with his movable-type press.
Encyclopaedias took that a step further, and Wikipedia amped it up again.
Blogs and Youtube democratised the sharing of knowledge and skills.
All these things have enabled people to learn how to do a thing.

But if you typed in a description and got a picture in return, you did not create that picture. You commissioned it.

@miki
in reply to Kat (post-Hallowe'en edition)

@KatS It democratizes in the public transit way (by making transport available to non-drivers), not in the car way (by making it easy).

And btw: all art is uncredited and a lot of it is unconsensual. Outside of academia, it's extremely rare to credit every single influence that an artist used, down to Da Vinci or the Gregorian chants, as long as significant snippets aren't extracted directly from that work, something that AI only does when prompted.

in reply to miki

@miki @KatS we're not talking about influences here, but something more akin to "retracing".

Besides, there are real implications regarding free software licenses and AI generated slop, so it's not exclusively a moral dilemma, but a legal one too.

legal != the right thing to do necessarily, but mangling a bunch of intellectual property that's not yours through a statistical computer program isn't exactly comparable with an aspiring artist learning to draw.

in reply to Tech Goblin Lucy 🦝

@KatS Because I use it every day, and I can see how much it helps. And to be fair, it primarily helps the people who need X done, not the doers of X. Just as automated telephones primarily help those who want to make phone calls (by making them cheaper, faster and much more convenient), not the phone operators who helped to make them in the past.
in reply to Tech Goblin Lucy 🦝

@KatS The more you know about LLMs, the more "calibrated" you are about where they work (and don't work) right now. People who don't know much about them are either hypesters (making a company out of a thousand LLMs and firing all their employees), or LLM deniers. Both are just as crazy.

I also see not just where LLMs are right now, but where they are going. We went from coding agents being basically a joke a year ago, to them semi-autonomously solving (some) complex mathematical problems and being used for boring gruntwork by world-class, Fields-Medal-winning mathematicians. They can now also solve an extremely complex GPU performance engineering task that Anthropic used as an interview question for the most brilliant engineers in that discipline, *better than any human given the same amount of time*.

They're still much better at small, well-scoped and bounded tasks than at large open-ended problems, but "small and well-scoped" went from "write me a linked list implementation unconnected to anything in my code" to "write me a small feature and follow the style of my codebase." In a year. What will happen in another year? 5 years? 10 years? God only knows, and he certainly isn't telling.

in reply to Kat (post-Hallowe'en edition)

@KatS look @miki don't get me wrong but any time i've tried using LLMs for my work, which isn't just some fun side project but actual production-running code, LLMs have been way too unreliable. It also resulted in me knowing jack shit about my own code, which is poison for long term maintainability.

Since these models are just statistically determining the next most likely token based on training data and fine-tuning, without any actual understanding or thought behind it, I seriously can't see this tech ever being reliable enough. (Reliable compared to humans, that is; I don't expect 100% reliability here, natural language is too imprecise for that anyway. I'd expect "good enough" to mean "as good as a professional in the given field.")

The other part of the equation is the amount of compute and electrical energy necessary to train and operate models at that level, and on that front, there's no way in hell that shit is ever gonna be worth it, financially or environmentally.

i'm not expecting "make the job easier for phone operators"; i expect "when i dial a number, it should be at least as reliable and efficient at routing it correctly as a phone operator would be".

you can call me whatever you want, even llm denier if you need to, but autocorrect on steroids isn't worth exploiting other people's work or boiling our oceans.

in reply to Tech Goblin Lucy 🦝

@KatS Autocorrect on steroids is basically GPT-3 tech. There's a lot more that goes into modern LLMs. A lot of the improvements are due to reinforcement learning, where LLMs learn to predict tokens that actually achieve some outcome, e.g. code that passes tests, or an answer that is judged "good" by a domain expert. There's still token prediction involved of course, but it somehow turns out that token prediction can get better scores than any human at (unseen) math olympiad questions. And people still say it's not in any way intelligent...
in reply to miki

@miki @KatS if i memorize every possible answer to a specific test, i can pass too. doesn't mean i know shit about fuck.

There's no actual thinking or reasoning involved (and no, reasoning models don't actually "reason"), so yeah, an LLM isn't actually intelligent, it just shows how flawed our tests for intelligence are.

To get some actual intelligence, thinking, or reasoning involved, I'd reckon we'd have to fundamentally change something in the architecture of LLMs and use a fuckton more computing resources for a single model. And consider how much energy the current tech already wastes: the whole shtick that made LLMs (and more broadly generative AI) work in the first place is "we discovered that there comes a point where the output gets better when we throw ridiculous amounts of compute at the problem", and it's already getting super difficult to run and maintain.

Honestly, either you're unreasonably optimistic, or you've never taken a look at how things actually work under the hood, but I really recommend you take a closer look at the technology you praise so much.

A couple things you could take a look at (without an AI summarizer, otherwise you'd learn jack shit):

"Attention Is All You Need", the paper that sparked the whole AI craze and the development of GPT models, and "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity", which takes a closer look at reasoning models and tests them across all sorts of levels of problem complexity to infer their strengths and weaknesses.

Honestly, before you make any claims about where the tech could be and what it could do, you should have a look under the hood and get a rough idea of how things actually work; otherwise, no offense, you're just talking out of your arse.

in reply to Tech Goblin Lucy 🦝

@KatS I have very specifically said "unseen questions."

If memorizing answers was a viable strategy to pass that test, humans would have done so.

If you still believe that there's no possible use for a tool that can get gold on a never-before-used set of math olympiad questions given a few hours of access to a reasonably powerful computer, and that the existence of that tool will have no interesting impact on the world... I don't know what to tell you.

in reply to miki

@miki @KatS > If you still believe that there's no possible use for a tool that can get gold on a never-before-used set of math olympiad question given a few hours of access to a reasonably powerful computer, and that the existence of that tool will have no interesting impact on the world...

How reliable is that source? And if it's true, is it really reasonable to bet everything on this and let it do all your work, when a) you end up completely dependent on the tech and b) you utterly destroy the environment in the process?

Real world problems may be less complex but might require much more context.

Oh, and don't get me started on accountability. There's a reason why curl is closing their bug bounty program.

in reply to Tech Goblin Lucy 🦝

@KatS Nothing is ever gonna work right, not even humans. Different technologies are at different points on the price-to-mistakes curve, our job is to find a combination that minimizes price while also minimizing mistakes and harm caused.

E.g. it is definitely true that humans are much, much better psychologists than LLMs, but LLMs are free, much more widely available in abusive environments, speak your language even if you are in a foreign country, and work at 4AM on a Saturday when you get dumped by your partner. Human psychologists do not. Very often, the choice isn't between an LLM and a human; the real choice is between an LLM and nothing (and the richer you are, the less true this is, hence the "class divide" in opinions about tech). And I'm genuinely unsure which option wins here, but considering the rate of change over the last 3 years, I wouldn't bet towards "nothing" winning for long.

Current AI Downsides:

> Stole all creative, intellectual works from everyone ever

> Eats so much power that they need tons of nuclear plants yesterday

> Eats up so much electricity that everybody else is priced out

> Eats up so much GPU & DRAM that everyone else is priced out

> Devours jobs like Ghibli's No-face

> Falsely identifies people as criminals who aren't

> Hallucinates legal briefs in your court case

> Destroys the validity of all video evidence in all courts everywhere

> Facetracks children playing at the park

> Generates infinite piles of dogshit spaghetti code that can't be read or revised

> Can't count to 100, doesn't know how many r's are in Strawberry

> Deep-fakes Martin Luther King Jr. stealing fried chicken, Studio Ghibli child porn

> Produces ugly, smeary, unappealing fake video that nobody likes.

Added: Consumes water at a rate that will desertify our entire planet.

Added: Completely destroys college education, both in terms of cheating and inability to read/write

Added: Makes all art suspected as fake, all art stealable and regurgitated.

Added: Allows world leaders to fake their health, presence, & speaking capacity.

Added: Not even a Language Model.

Added: Fake/bullshit content and rampant chatbotting means the Internet is now mostly dead.

Added: AI warfare is inept and kills innocent/misidentified people. AI security bots are in the works for your home town.

Added: Allows for extortion, sextortion, scamming at a level never seen before.

Current AI upsides:

> Sam Altman is rich, I guess, idk.
