I would like a 5.1-channel surround system I can easily plug into my laptop's USB/HDMI hub so I can listen to podcasts and music on something that isn't a headset. It used to be you could get that in a sub-$150 form factor with 3 analogue 1/8-inch connectors. Now, though, my only connectivity option looks to be HDMI. Everything I'm finding is a sound bar, which, if past experience is any guide, means a complicated ecosystem where I'll need an app and an account and likely sighted help, because you can't just plug the damn thing in and get sound; you have to make sure you're not in Bluetooth mode, or otherwise mash button combinations so your speakers actually do the thing.

Surely I'm not the last weirdo left who wants his computer to sound good without a headset? What are my options? There don't seem to be plain speakers or any non-sound-bar options. Maybe a mini receiver that can handle the HDMI input, with enough physical buttons that I can press one to switch to HDMI? IME sound bars have like 3 buttons, and each does a dozen things you can only distinguish by seeing which light is lit or similar. Then there's my last sound bar, which at one point crashed so hard that I started actually seeing the 403s from what was apparently its onboard Nginx server. I really hate technology some days.

Qwen taught me something useful today after I questioned something it wrote.

There is such a thing as a transmission "erasure" vs. a transmission "error":

- Errors: Unknown corruption of data bits during transmission (e.g., a 0 becomes a 1 or vice versa)

- Erasures: Complete loss of data packets/bits, where we know which specific positions are missing (the locations of missing data are known)

huh. I never knew.
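The distinction matters because erasures are cheaper to correct: a Reed-Solomon code with r parity symbols can correct up to r erasures, but only ⌊r/2⌋ unknown errors, since for an error the decoder must find the position *and* the value. A toy Python sketch of the erasure case, using plain XOR parity over made-up data blocks:

```python
# Toy erasure recovery: one XOR parity block can rebuild one missing
# data block precisely *because* we know which position is missing.
blocks = [b"hello", b"world", b"12345"]          # equal-length data blocks
parity = bytes(a ^ b ^ c for a, b, c in zip(*blocks))

lost = 1                                          # erasure: the index is known
survivors = [blk for i, blk in enumerate(blocks) if i != lost]

# XOR the parity with the surviving blocks; the lost block falls out
recovered = bytes(p ^ x ^ y for p, x, y in zip(parity, *survivors))
assert recovered == blocks[lost]                  # b"world" restored
```

If instead one block were silently corrupted (an error), this scheme would be useless: the parity check would tell you *something* is wrong, but not where.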

I've come to a point in my life where I find speaking to be a ridiculously complicated and inefficient way of communicating. The funny thing is, because of extensive scripting and such, you wouldn't know it if you actually heard me speak, but the process of buffering words and formulating all the right things to say is just so very taxing.
Thus, as of late, I am speaking as little as I can get away with.

Tracking my checked luggage with Google Find Hub on my trip to Montreal worked surprisingly well.

Much better than ~2 years ago when I first got a Pebblebee tracker.

I also like that they come in different form factors. This time I got one in credit card format that slides into the existing luggage tag.

(#NotSponsored, obviously, but do get in touch if you want to send me absurd amounts of money for posting on #Mastodon.)

"Flagship platform for inclusion facing shutdown" - that's today's headline at ORF Tirol about the bidok project.

Read the full article here: tirol.orf.at/stories/3329364/

Our request: please share this post so that as many people as possible learn what is at stake.

If you haven't already done so, you can sign for bidok here: tinyurl.com/bidok-unterschrift

#bidok #bidokbib #barrierefrei #barrierefreiheit #disabilitystudies #Tirol

Wrote up some thoughts about the proposed ban on the sale of TP-Link devices in the US.

The U.S. government is reportedly preparing to ban the sale of wireless routers and other networking gear from TP-Link Systems, a tech company that currently enjoys an estimated 50% market share among home users and small businesses. Experts say while the proposed ban may have more to do with TP-Link’s ties to China than any specific technical threats, much of the rest of the industry serving this market also sources hardware from China and ships products that are insecure fresh out of the box.

krebsonsecurity.com/2025/11/dr…


Use Linux, they say. It's easy for non-technical users and they never have to touch the command line, they say.

I just spent 4 hours the other day digging around the CLI, man pages, and more, trying to get rsync and restic and my system to do even kinda-sorta the backup-to-cloud functionality that's built into Windows and macOS, or offered as an easy client by your backup vendor. "Back up my home directory to someplace." Because easy backup clients just aren't available or well supported for Linux.
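For a sense of what "back up my home directory to someplace" ends up looking like, here's a minimal restic sketch, assuming an S3-style bucket; the bucket name and paths are placeholders, and real setups also need credentials, excludes, and a scheduler on top:

```shell
# Point restic at a remote repository and a password file (placeholders)
export RESTIC_REPOSITORY=s3:s3.amazonaws.com/my-backup-bucket
export RESTIC_PASSWORD_FILE=~/.config/restic/password

restic init                                        # one-time repository setup
restic backup ~ --exclude-caches                   # snapshot the home directory
restic forget --keep-daily 7 --keep-weekly 4 --prune   # retention policy
```

And that's the happy path, before wiring it into a systemd timer or cron job so it actually runs unattended.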

No, I'm not particularly interested in "oh, you just have to go and find this random package that's in this repo and configure it and ..."

I love and hate Linux sometimes.

in reply to Jess👾

As someone who recommends Linux and Chromebooks for "non-technical" users, I always attach a caveat.

Linux is great for the 2 ends of the bell curve.
If you're doing something super niche and you need full control and are willing to put in the work, obviously it's great.

It's also great if your idea of a "computer" is just a Facebook, YouTube, and email machine. I have set up family members with either Chromebooks or Linux Mint and it "just works" for them. Bonus points for keeping them better protected from malware.

But for anything in the middle, it's probably going to be more painful than other OSes.

in reply to Varx

An article published by LWN a few months ago describes a new, container-based, atomically updated desktop Linux distribution proposed for the European Union public sector. I expect the public-sector employees to be in your second category, with the distribution preconfigured to offer exactly the applications needed in their department or job role. Almost all Linux users I know are of course in your first category: the flexibility, configurability and control are the point so far as they're concerned, not an obstacle.

"#curl working as intended is a vulnerability"

Ok I paraphrased the title but this onslaught is a bit exhausting...

hackerone.com/reports/3418646

#curl

public.monster is an homage to the old web, built on the new web. Inspired by sachajudd.com at @btconf → done in hours

~ bun.sh: all-in-one runtime
~ bunny.net: infra
~ hanko.io: auth

This would have been way harder in 1997


One of the most common security reports we get in #curl is a claim of some CRLF injection, where a user injects a CRLF into their own command line and that's apparently "an attack".

We have documented this risk of passing junk into curl options, but that doesn't stop the reporters from reporting it to us. Over and over.
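For anyone wondering why CRLF bytes in a header value are sensitive at all: HTTP/1.x delimits header lines with \r\n, so an unescaped CRLF lets one "value" smuggle in an extra header. A toy Python sketch (the header names are made up); the point is that this only becomes an attack when someone *other than the user* controls the value:

```python
# HTTP/1.x separates header lines with CRLF ("\r\n"), so a raw CRLF
# inside a header *value* turns the rest of the value into a new header.
value = "legit\r\nX-Injected: smuggled"   # a value containing a raw CRLF
request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    f"X-Test: {value}\r\n"
    "\r\n"
)
# The serialized request now contains a header line the caller never set:
assert "\r\nX-Injected: smuggled\r\n" in request
# If the user themselves typed that CRLF on their own command line,
# there is no attacker and nothing has been bypassed.
```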

Here's a recent one.

hackerone.com/reports/3418616

#curl

StreetComplete is a really fun and accessible way to contribute to OpenStreetMap from an Android device - walk around in your local neighbourhood (or anywhere really) and solve 'quests' by answering questions about the things around you!

You don't need to learn anything about mapping conventions, or infrastructure, or about the more complex mapping tools that exist for OpenStreetMap. The app will explain everything to you that you need to know, when you need to know it, and ask easily understandable questions with reference pictures for the answers.

The only setup needed is to make an OSM account and log into it from the app, so that it can upload your answers - and you can also do that at any later time, after trying out the app without an account for a while first. You can just install it and go outside right away!

The app doesn't need any cellular internet connection; it can work offline and synchronize your answers once you reach a place with e.g. WiFi. It's also quite performant and should run well even on lower-end phones. There is also a 'multiplayer' option that lets you split up into teams and each tackle different quests in the area.

streetcomplete.app/

#StreetComplete #OpenStreetMap

Part of the reason I’m so against LLM coding assistants is that they seem to do nothing but suck all of the joy and fulfilment out of work.

Like, for any task that requires skill, there’s some pleasure in using that skill and succeeding at it. Why would I want to automate it?

It’s like the thing of “automating my hobbies so I can spend more time doing the laundry”.

And obviously, yes, I do realise that my job doesn’t simply exist for the sake of me having fun. I don’t actually expect that to be a persuasive argument from a business perspective. But what makes the whole thing completely inexplicable to me is that this automation doesn’t even do a good job or speed things up at all.

All the code I’ve seen from LLMs has been total garbage. At best, it’s eventually come out with something as good as a human could do, except no faster, and through a process that’s far more annoying and unpleasant than simply doing the work manually.

There’s literally nothing in it for anybody (except the LLM companies, who get a subscription fee for doing something you could have easily done yourself, and who, when you complain that the results are bad, invent nonsense like “you have to have multiple LLMs all checking each other’s output” to wring more money out of you).


in reply to James Scholes

@jscholes Yes. And spoon-feeding text to a screen reader should not be what developers primarily think of when they think of accessibility. The actual GUI should be made accessible through platform APIs. I know you know this, of course; I'm just stating it for the benefit of anyone watching who is outside the cottage industry of apps developed specifically for, and usually by, blind people.

@tunmi13 @fastfinge

in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge @jscholes This is where AccessKit (accesskit.dev/) might help. Yeah, plugging a project I started. We did a proof of concept retrofitting it onto an old version of Unity a few years ago. github.com/AccessKit/the-inter… This was a mod to an existing open-source demo game; the Unity version was old even when we did it. And it was very hacky, as we had no cooperation from the engine. We haven't revisited this lately with modern Unity.
in reply to Matt Campbell

So would I. But the various game mods are developed mostly by people like me: hobbyists with jobs, who are just skilled enough to find solutions and get things done. But without clear documentation and an easy-to-call API we can plug in, we're stuck. So I wouldn't expect this any time soon. All of the output systems in the above poll require one, maybe as many as three, lines of code to use.
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge @jscholes Sigh, yes, we need to fully document AccessKit, write bindings for more languages, and make sure the documentation is available for users of all the bindings. Unfortunately, my current funding for working on AccessKit doesn't cover either documentation or bindings.
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge @matt In this case by "OS semantics", I specifically meant live regions and have updated my previous response accordingly. To my knowledge, OSARA outputs a great deal of screen reader specific text without using a single screen reader library on Windows and it works perfectly.
in reply to James Scholes

It's possible my understanding could be out of date. I'd love a better way to do things. However, as far as I know, live regions require the window to have focus, and require the app to be a web app. That's just not the case for any one of my use cases. Sometimes I'm using an app's built-in scripting language to add accessibility, sometimes I'm patching an app to send text to the screen reader, and sometimes I'm creating an entirely separate app to run in the background to read log files and output alerts that way. In none of these cases would live regions work.
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge @jscholes No, live regions are no longer exclusive to web apps. My understanding is that the application window has to be in the foreground, but the child window that contains the live region doesn't necessarily have to have focus. Paperback did this particularly elegantly by setting properties on a Win32 static text control.
in reply to Matt Campbell

Better, but still not going to work for 99 percent of mods. In general, you don't get to spawn a new window, or modify properties on existing ones. The only place I could make this work is adispeak; I can write a full C# DLL there and do whatever I want. But if I do that, I lose the ability to notify the user if they have the IRC client in the system tray, or even just on the taskbar. Far from ideal.
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge @jscholes If your mod can be active after the main window is created but before it's shown, you can bolt on accessibility by doing Win32 window subclassing on the main window. AccessKit includes code for doing the subclassing step, but you have to do it at exactly the right point in the window life cycle or it doesn't work reliably. That's the main problem we had with Unity.
in reply to Matt Campbell

And what happens if the main window ever gets destroyed or recreated? While I can often hook into app startup, most mod frameworks don't allow detailed hooks into window creation. It's possible I'm missing things, and smarter people than me can come up with a way to make this generally viable. But based on my research and skill level, I just don't see a path to avoiding screen reader libraries in the majority of cases. Live regions are only useful when you're writing your own app from scratch or modifying an open-source app, and you never need to alert the user to things when the foreground window doesn't have focus. That's a vanishingly small number of cases. As far as I can see, screen reader APIs, and robust libraries to call them, are going to be useful for years to come.
in reply to 🇨🇦Samuel Proulx🇨🇦

@fastfinge There are different use cases with various constraints.

I used the word "primary" on purpose in my first post. Right now, screen reader libraries are the first and often only thing reached for by developers of these abstraction libraries.

I would like to see a better abstraction library that keeps the ease of use while supporting multiple techniques. It could opt for the most reliable and user-friendly pattern by default, based on information gleaned from its operating environment and some gentle hints from the developer.

E.g. you don't supply a window handle? There's no window for a live region so it falls back to SR libs. @matt @tunmi13

in reply to Matt Campbell

@matt User control is one reason live regions are a better idea than screen reader libs at least, because I can turn them off.

If an app has decided to shove stuff down the NVDA Controller Client DLL, there's nothing I can do about it. Other than maybe deleting the DLL or restricting access to it, at which point it's anybody's guess whether the app in question will go silent, crash, or switch to SAPI.

Of course, this raises the question of why screen readers don't have a permissions system. @fastfinge @tunmi13

in reply to James Scholes

As an example, the person I'm currently in a meeting with has three monitors. One for the meeting, one for social media and dashboards, and one for what she's working on. Invisible interfaces and alerts from non-foreground apps are the only way I have to be even slightly as fast as her. And I'm already slower at a lot of things, because of the nature of inaccessible GUIs, so further friction and speed decreases would not be acceptable at all. If I didn't have these features I guess I'd have to have three laptops and a mixer? I don't know.
in reply to James Scholes

And if you want to get a sense of how unsatisfactory live regions are, compare Mudlet, which uses live regions to read new text, with mushclient plus mushreader, which uses the screen reader API. Notice how Mudlet misses some text if it comes in too fast, doesn't always read text, and can't control whether new text interrupts the previous text or is added to the end of the queue. Mushreader has none of these problems.
in reply to Matt Campbell

If you want an easy and predictable game to test the differences, proceduralrealms.com is a good example. It works in both clients, it tends to dump multiple lines of text to the client at once, some of those lines have special characters, and within 20-30 minutes of playing with each client you'll notice the differences and the bits mudlet is missing.
in reply to James Scholes

@jscholes @fastfinge I also think, though I realize I might be in the minority on this, that screen readers should ignore MSAA alerts from non-foreground windows. Narrator ignores UIA notifications from non-foreground windows, except for a few OS components that are treated as exceptions; I'm not sure about NVDA and JAWS.
in reply to James Scholes

@jscholes @fastfinge Based on a quick look at SRAL (which I also wasn't familiar with), I wouldn't recommend it. It does attempt to implement a dummy UI Automation provider as one option (along with the usual screen reader APIs), but that implementation shows poor understanding of how to use UIA.

Ouch. I was really happy to discover LibreOffice Impress Remote app for iOS - but the last update was in 2014 and it doesn't run on current iOS :(

Not reflected in the current docs it seems, ping @libreoffice

Any iOS developers with spare time wanting to get it up to speed? :)

libreoffice.org/download/impre…

#LibreOffice

What will the world be like in 10 years? Or 50? We don't know – but we all need to think about long-term data storage. Don't get locked out 🔒 of your own documents – instead, choose a format designed for long-term archiving: blog.documentfoundation.org/bl… #foss #openSource #freesoftware #openstandards