



A few parts on my Framework laptop crapped out and it finally needed to go into a repair shop for fixes and upgrades. Neatly, all the parts I needed were around $100, and the repair bill was ~$120, taking less than an hour to diagnose and fix.

Not that $220 is something everyone can afford, but I love that I got those repairs made quickly on a laptop, not exactly a form factor known for its fixability. I thought I'd regret abandoning the repairability of a desktop, but the fact that my Framework can be pulled apart, cleaned, and fixed makes it a much better option for me.



#FunFact: Meta has decided that our post about them violating the GDPR violates their "community standards" and has deleted it several times on facebook.com.




It's disappointing to see a desktop app that runs in the background described as lightweight, then find out that it's an Electron app. Yes, I also ship an Electron-based desktop app that runs in the background, but I do so grudgingly, and I, at least, don't claim that it's lightweight. But maybe I'm just turning into an old curmudgeon.
in reply to Matt Campbell

And I thought I was the only one who wanted to call bullshit every time I saw an Electron app claim it was lightweight.


Does anyone on Mastodon have any accessibility contacts at PayPal?
Now that they are using hCaptcha, I find myself locked out of both of my PayPal accounts. Since their accessibility cookie doesn't want to work in any browser, I'm probably going to have to use Aira.
Or, has anyone had any luck using AI to beat this?
in reply to Doug

@Tamasg They use hCaptcha? I've never experienced it, and I use PayPal regularly.


Hey British blind / partially sighted people of Mastodon who watch television, and I appreciate I'm now several layers deep as far as niche groups are concerned, but anyway: what, in your experience, is the best configuration for an accessible TV experience that also provides access to cable channels? I ask because trying to get audio description with the BBC, ITV, Channel 4 etc. apps is a pretty miserable experience and usually doesn't work with live programming anyway, so I want to avoid avoid avoid. I've heard good things about Sky Stream, very good things in fact, but apparently no accessible Netflix / Amazon Prime. I'm thinking this could be circumvented by having the box connected to a Samsung smart TV with its own built-in accessibility, which should in theory allow for seamless switching, but I'd like to hear your input. CC @fireborn @KaraLG84 @brian_hartgen @cachondo
in reply to Haily Merry

@stevenscott I'm glad if that's the case, but it should mean that if OFCOM put their foot down, they've got even less of an excuse not to add the feature across platforms.
in reply to Sean Randall

@stevenscott I can only assume, given that literally nobody seems to provide live audio description on their mobile apps, that there must be some sort of licensing restriction or something. I don't really see how you can take the time to let people access it on demand, but then just go, nah, can't be bothered to implement it for live things, blind people don't know how to use clocks, they'll never be able to catch anything good at the right time anyway.


WTAF AI

Sensitive content

in reply to Andy Holmes

Sensitive content



If I ever make a band, I'll name it   just to annoy websites trying to write about it.




I've been an AT trainer for a number of years, but some of my clients still shock the heck out of me! I was teaching a student how to use JAWS Picture Smart. We were working with pictures of items on Amazon. I left him alone for five minutes to explore on his own. Well, explore he did. I came back and he was trying to use Picture Smart on a porn site! Ummm, if you want to do that at home, that's totally your business, but it's not something you should be doing during your AT training session!

Tamas G reshared this.

in reply to Jim D

I mean… finding the limits of the technology you’re being trained on is a good thing, no? If the trainer knows the limits, then they should tell the client? Why as blind people are we so prudish in general?



Writing my out of office message... UNTIL OCTOBER 28TH!!!!


I tried out the new Canvas feature in ChatGPT; it's really neat. I didn't think selecting code and having GPT edit or explain something about it would work with a screen reader, but what you do is focus the unlabeled multiline field that pops up at the bottom of the page, and then do text selection as normal. When you exit browse mode and move back into focus mode with #NVDASR, you will be able to press an "ask GPT" button that appears at the bottom of the page. Quite nifty.
in reply to Tamas G

@pixelate Actually, you don't need to select text if you don't want to. You can directly ask the chat to explain a portion of the code and make the change, and that change will be reflected in the canvas instead of coming as a chat response.

Tamas G reshared this.



The public prosecutor's office has issued a penalty order for negligent homicide against the driver who killed #Natenom.

:natenomblack: 🖤

An important signal. The penalty feels light. The perpetrator still lodged an objection.

Hopefully this keeps fueling the debate about the safety of ALL road users. Hopefully it also fuels the demand to renew the #Führerschein (driving licence) regularly. The perpetrator was 78.

My mum, too, believed she was fit to drive well into old age. Luckily, a spectacular crash of hers ended without serious harm.

in reply to Der böse Hexe Njähähä 🧙‍♀️🪄⚡️

Because #Fahrtüchtigkeit (fitness to drive) doesn't necessarily depend on #Alter (age), but just as much on many other factors, it would be important for all #KFZ drivers to have to renew their #Führerschein (driving licence) regularly. Even completely healthy young people often behave inappropriately in #Straßenverkehr (road traffic) while believing they're in the right. Very often, right-of-way rules are misjudged, which leads to highly dangerous situations! Mandatory refreshers could therefore significantly increase everyone's safety. #Auto


lol. Matt Mullenweg told Automattic employees "if you don't like me, quit" and 159 (8%) did techcrunch.com/2024/10/04/159-…


Oh yeah and Firefox needs to be liberated from Mozilla. What a bad move.
(If the EU took some money they burn on blockchain experiments or dumb AI shit to fund a Firefox organization that would be a great use of my taxes!)
mastodon.social/@sarahjamielew…


I take a taxi home for a few reasons...
- it saves me time
- the night bus takes a while and it's not right around the corner
- the night bus is sometimes gross
...but I spend quite a significant amount on it every month, and I'd like to avoid that now. I just don't know how.
in reply to Schmaker

@schmaker That occurred to me too, but I'm worried about how much I'd actually use it, and about the investment... definitely not. I'm also considering a bike; plus, it would be extra exercise. And I don't ride a scooter, I'm a bit worried about my safety.
in reply to SuspiciousDuck

I ride both, so let me quickly compare them for you:

#eKoloběžka (e-scooter)
- it fits anywhere; carrying it in a car boot or on a bus is a big benefit
- it can be damn fast
- it gets you there even when you're lazy
- even though you don't move as much, it still means you're not sitting in a car or a bus
- if things go wrong, it's considerably easier to bail off a scooter than off a bike

#Kolo (bike)
- less agile, but considerably more sure-footed in worse weather conditions
- if you want, you also get cargo space
- considerably more exercise (which can be both an advantage and a disadvantage)
- with an e-bike you can freely pick between the "I'm completely lazy" and "I want to ride till I drop" modes

I enjoy the bike, but if I had to pick just one of these, it would be the scooter. You can still get around on a scooter even when you're sick, which is something you really don't want to attempt on a bike.

in reply to Schmaker

@schmaker Thanks for the breakdown, we'll see. It's true that I don't know whether I'll feel like pedalling a bike after 16 hours on my feet...
in reply to Schmaker

@schmaker And how does the scooter work in winter? (Snow/ice/slush/water? Yes, I know, global warming, there won't be any more snow and ice... But still...😇)
in reply to Stevez

When I think about it, a bike would probably suit me better. I can already picture myself wiping out nicely on that scooter... and given the state of our roads, it's apparently not that comfortable anyway (the taxi driver mentioned that).




What about HeyTell? Anyone remember that one? To me, it always sounded like people were in a large cavern when they sent messages on that thing.
in reply to Mendi Evans

Yeah, only thing I didn't like about that app is it only saved the last 10 or so messages of any conversation.


Does anyone remember the Roger app? I loved that voice messaging app!
in reply to Mendi Evans

Yeah that app rocked for the short time we were able to use it.


So, a conversation got me thinking. I was looking at my voice message apps last night. I was wondering, who uses these anymore? I know Telegram and WhatsApp get used, but what about the following? Zello, Voxer, Signal
in reply to Mendi Evans

Most people use it as a secure replacement for texting or phone calls.



I wonder which menu in JAWS I use to change the keyboard echo settings? #Blind


As of 3 p.m. on October 4, 2024, the death toll from Hurricane/Tropical Storm Helene is 224.

Indiana 1
Virginia 2
Tennessee 12
Florida 21
Georgia 33
South Carolina 46
North Carolina 110

Missing / unaccounted for: 223+
North Carolina 200+
Tennessee 23



Having the computed value of a CSS variable directly in the variable tooltip in @FirefoxDevTools is sooo good! Hope you'll enjoy this as much as I do, and be ready for more!

mozilla.social/@FirefoxDevTool…



This Amazon Early Prime Deal Gets You an Echo Pop and Smart Bulb for Just $18 cnet.com/deals/this-amazon-ear…


Also about excessive European regulations: it seems to be the reason why #ChatGPT advanced voice chat doesn't work in the EU.


From 2021:

"A lot of the people who say COVID isn't that dangerous because "99.5% survive" get really upset when you point out that 99.5% survival is the same as 1/200 dying.

Statistical illiteracy isn't just embarrassing, it's dangerous."



2 dead, 1 in critical condition as major fire burns in Old Montreal

cbc.ca/news/canada/montreal/fi…

Same slumlord as the 2023 fire in Old Montreal, in which people also died. Coincidence?

Unknown parent

Hubert Figuière
@stephanie I was thinking that there should be a permit to be a landlord, and a way to revoke it. Notwithstanding the criminal penalties.


Speaking of excessive European regulations. May I just accept all cookies on all websites, please? No, I don't care how many partners you have, I don't care what you track about me, I have enough net hygiene to avoid suspicious sites, and if I catch something, I take full responsibility myself. Please just stop displaying those annoying and sometimes poorly accessible windows to me!


An Apex 32 for $550 is slightly tempting, if only for use as a display with other devices. blindbargains.com/classifieds.…
in reply to Alex Hall

Wow, yeah, that's tempting. I think I got the one I have for like 850 or 900, but that was in 2020 or so. Doesn't sound like it's in bad shape for that price; it's incredible that these are now almost at the value mPowers were back when I got my Apex, LOL.
in reply to Tamas G

@Tamasg I managed to score an mPower for $65 on Ebay back in 2017 or 2018. My new desk setup doesn't have room for it, and it won't work as a wireless display at all, but it was a good display for many years. It still works, it just isn't convenient to use with my laptop where it is. Still, a working 32-cell display in any shell for $65 is something I'll never forget.
in reply to Tamas G

@Tamasg Main menu. Wordprocessor. KeyWord menu. Create a document. Folder name? Press enter for General.


New classified: BrailleNote Apex BT32 for sale
blindbargains.com/c/5988
* Classifieds are posted by users and not endorsed by Blind Bargains. Exercise care.


Interesting. Try to control+right arrow through this series of words with #NVDASR and notice how it skips "boox palma": "sized Boox Palma e-reader’s on" - what sort of devilish regex pattern match would do this?
(Update, as found by @jscholes: "The words are separated with a no-break space, instead of a standard one. Replace those, and it works fine: 'sized Boox Palma e-reader’s on'" - this does work.)
in reply to James Scholes

@jscholes Wow, incredible, how did you figure out the no-break space there? When I read it character by character nothing odd is announced in the spacing, so this one was just baffling. I figured this is the kind of thing the Unicode normalization setting would control, but maybe not.
in reply to Tamas G

I pressed the report current character command three times on a regular space from earlier in your post, and then on the one you were having trouble with. A standard space will be reported as 32/0x20, while a no-break space is 160/0xa0. Mostly though, I knew because it's happened to me before.
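
For anyone curious what the fix looks like in code, here's a tiny C sketch of what @jscholes describes: it swaps UTF-8 no-break spaces (U+00A0, the 160/0xa0 character above, encoded as the bytes 0xC2 0xA0) for regular 0x20 spaces. It's purely illustrative, not something NVDA itself does.

```c
/* Replace UTF-8 no-break spaces (U+00A0 = bytes 0xC2 0xA0) with regular
 * spaces, in place. The no-break variant is what made the screen reader
 * treat "Boox Palma" as one long word. */
#include <stdio.h>

static void replace_nbsp(char *s) {
    char *out = s;
    for (char *in = s; *in; in++) {
        if ((unsigned char)in[0] == 0xC2 && (unsigned char)in[1] == 0xA0) {
            *out++ = ' ';
            in++;            /* skip the second byte of the two-byte sequence */
        } else {
            *out++ = *in;
        }
    }
    *out = '\0';
}

int main(void) {
    /* sample phrase with no-break spaces between the words */
    char text[] = "sized\xC2\xA0" "Boox\xC2\xA0" "Palma\xC2\xA0" "e-reader's on";
    replace_nbsp(text);
    printf("%s\n", text);    /* prints the phrase with ordinary spaces */
    return 0;
}
```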



Debian has two new mirrors in India. Thanks to both CUSAT and NITC University Campus for hosting mirrors, helping distribute Debian lists.debian.org/debian-dug-in… #debian #debianindia


In a world where people use stochastic parrots to decompress their thoughts, more than ever, I value blog posts, emails ... that are succinct and go straight to the point.

If you're emailing me, please send me the input prompt instead of the bloated LLM response that adds no value.



This month in Servo…

⬅️✍️ right-to-left layout
🔮📩 <link rel=prefetch>
🔡🎨 faster fonts and WebGPU
📂📄 better tabbed browsing
🤖📱 Android nightlies

More details → servo.org/blog/2024/10/03/this…



This post, like a lot of my posts where I'm not replying to threads, concerns #blind users of #linux. With that out of the way, let's get into it.

So, I have technically been able to do this for some time now, around three days or so, but only now did I muster enough courage to actually do it. It does take a lot of courage, because this is my primary and only system, and the experiment in question could have cost me speech on the computer, which is a pretty big deal.

So yeah, what did I do that's so dangerous? I stopped the PulseAudio server entirely, and by that I mean the Pulse compatibility layer for PipeWire, pipewire-pulse. Foolish as it might sound from the outside, I tried this to see if I could still get speech. Not so foolish after all, and in a show of spectacularly defied expectations, because I'm writing this now, I do, very much so. In fact, as an unintended consequence, my system feels snappier too. Incredible, right?

As many of you are pretty technical, since using Linux as a VI person kinda pushes you in that direction, it should come as no surprise that Speech Dispatcher uses PulseAudio, aka pipewire-pulse, to play sound; usually when that crashes, you get no speech and such. So then, how is this possible? No, I'm not using ALSA, or any of the other audio backends in there. The rest of this huge post is devoted to that question, as well as some background on why this matters and how things were before. Note: this will probably only make a big positive difference for a particular group of people, those who care about the latency of their audio systems, either because you're musicians, want to be, work with specialised audio software, use a complicated hardware setup with lots of nodes that have to be balanced just right, or simply because you can, etc. For most of you, this may be a novelty; you may get some limited benefit from decreased CPU load and snappier-feeling desktop interfaces, but mostly this is a nice read and some hyped enthusiasm, I suppose.

It all started when I was talking to a few Linux musicians, if I recall correctly, including someone who might have been involved in a DAW's development. I was talking about accessibility, and eventually someone told me that Speech Dispatcher requests a very low latency, but then xruns quite a number of times, making PipeWire try to compensate a lot. I could be misremembering, but they also said it throws off latency-measuring software, perhaps the one inside their DAW as well, because the way Speech Dispatcher does things makes PipeWire increase the graph latency.

What followed were a few preliminary discussions in the GNOME accessibility Matrix room about how audio backends work in spd. There, I found out that the plugins mostly use a push model, meaning they push samples to the audio server as those become available, in variable sizes too, after which the server makes the proper arrangements for some kind of sane playback. Incidentally or not, this is how a lot of apps work with regular PulseAudio, usually using, directly or indirectly, a library called libpulse_simple, which basically lets you treat the audio device like some kind of file, where the library may also introduce some buffering of its own before sending to Pulse, etc. Of note here is that Pulse still has callbacks, but a lot of apps don't use that way of doing things.
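
As an illustration of that push model, here's a minimal sketch using the PulseAudio "simple" API. The sample format and the all-silence buffer are placeholder values for the example, not what spd's plugins actually send; the point is just that the app writes whenever it has data and the server sorts out playback.

```c
/* Push-model playback sketch using the libpulse "simple" API.
 * Build roughly with: gcc push.c $(pkg-config --cflags --libs libpulse-simple)
 * The 22050 Hz mono S16 format is only an example value. */
#include <pulse/simple.h>
#include <pulse/error.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    pa_sample_spec ss = { .format = PA_SAMPLE_S16LE, .rate = 22050, .channels = 1 };
    int err = 0;

    /* Open a playback stream; the server decides how to fit us into its mixing. */
    pa_simple *s = pa_simple_new(NULL, "push-demo", PA_STREAM_PLAYBACK, NULL,
                                 "speech", &ss, NULL, NULL, &err);
    if (!s) { fprintf(stderr, "pa_simple_new: %s\n", pa_strerror(err)); return 1; }

    int16_t chunk[1024] = {0}; /* silence stands in for synthesized samples */
    for (int i = 0; i < 20; i++) {
        /* The app pushes whenever it has data; the server buffers and schedules it. */
        if (pa_simple_write(s, chunk, sizeof(chunk), &err) < 0) {
            fprintf(stderr, "pa_simple_write: %s\n", pa_strerror(err));
            break;
        }
    }
    pa_simple_drain(s, &err); /* wait until everything queued has played */
    pa_simple_free(s);
    return 0;
}
```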

Back to the problem though: this was fine for Pulse, more or less anyway, because Pulse didn't represent the media graph in any way; there was simply no such concept there. You only had apps connected to devices via streams, so there was no way to get apps to synchronise their rates to something that could more or less be sent directly to the sound card. So when apps invariably diverged in the rate at which they pushed samples to Pulse, as far as I understand it, Pulse took the latency of the slowest stream and added it to everyone else to attempt to synchronise, because, after all, Pulse still had to mix and send everyone's frames to the sound card, and because there either was no polling model or no one wanted to implement it, that was the best choice to make in such an environment.

Enter low-latency software, #jack and #pipewire. Here, minimising latency is the most important thing, so samples have to be sent to the sound card as soon as possible. This means the strategy I outlined above wouldn't work here, which gets us neatly to the concept of an audio graph, which is basically all the sound sources that can play or capture on your system, as well as exactly where sound is played to and captured from. Because of the low-latency factor, however, this graph has to be polled all at once, and return samples similarly fast, in what the graph driver calls a cycle. The amount of audio apps can buffer before they're called again, aka the graph cycle duration, is user-adjustable: in JACK via the buffer size, in PipeWire via the quantum setting.

But then, what happens to apps which don't manage to answer as fast as they get called by the server? Simple, even simpler than the answer of Pulse, ALSA, etc. to the problem, with their various heuristics to try to make sound smooth and insert silence in the right places. The answer is: absolutely nothing at all. If an app didn't finish returning its allotted buffer of samples, not one more or less than that, the app is considered to be xrunning, either underrunning or overrunning based on how much of the sample buffer it managed to fill, and its audio, cut abruptly with perhaps a few bits of uninitialised memory in the mix, is sent to the sound card at a fixed time, together with everyone else's. This is why you might hear spd crackle weirdly in VMs, and why you sometimes hear other, normal programs crackle for no good reason whatsoever. And you know, this is additive, because the crackling spreads through the entire graph: those samples play with distortion on the same sound card as everything else, and everyone else's samples get kind of corrupted by that too.

But obviously, if it can get worse, it will, unfortunately, for those who didn't just down-arrow past this post. There are a few mechanisms for reducing the perceived crackling from apps which xrun a lot. For example, apps with very low sample rates, like 16 kHz, yes, phone-call quality in 2024 (speaking of Speech Dispatcher), can get resampled internally by the server, which may improve latency at the cost of the degraded quality you're going to get anyway at such a sample rate, but the CPU also has to work more and the whole graph may again be delayed a bit. Or, if an app xruns a lot, it can either be disconnected forcefully by PipeWire, or alternatively the graph cycle time is raised at runtime, by the user or a session manager acting on behalf of the user, to try to compensate, though it'll never go as far as regular Pulse, but enough to throw off latency-measuring and audio-calibration software.
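
By contrast, here's a rough sketch of the pull model using PipeWire's pw_stream API, loosely patterned on the stock playback example. The 48 kHz mono format is arbitrary and this is not the actual backend from my PR, just an illustration of being called once per graph cycle and having to fill the buffer in time.

```c
/* Pull-model playback sketch with the PipeWire stream API: the server calls
 * on_process() once per graph cycle and expects the buffer filled in time;
 * anything missing is an xrun. Example format values only. */
#include <pipewire/pipewire.h>
#include <spa/param/audio/format-utils.h>
#include <stdint.h>
#include <string.h>

struct data { struct pw_main_loop *loop; struct pw_stream *stream; };

static void on_process(void *userdata) {
    struct data *d = userdata;
    struct pw_buffer *b = pw_stream_dequeue_buffer(d->stream);
    if (b == NULL)
        return;                               /* no buffer available this cycle */

    struct spa_buffer *buf = b->buffer;
    int16_t *dst = buf->datas[0].data;
    if (dst == NULL)
        return;

    uint32_t n_bytes = buf->datas[0].maxsize; /* one cycle's worth of samples */
    memset(dst, 0, n_bytes);                  /* silence stands in for speech */

    buf->datas[0].chunk->offset = 0;
    buf->datas[0].chunk->stride = sizeof(int16_t);
    buf->datas[0].chunk->size = n_bytes;
    pw_stream_queue_buffer(d->stream, b);     /* hand the cycle back to the graph */
}

static const struct pw_stream_events stream_events = {
    PW_VERSION_STREAM_EVENTS,
    .process = on_process,
};

int main(int argc, char *argv[]) {
    struct data d = {0};
    uint8_t buffer[1024];
    struct spa_pod_builder builder = SPA_POD_BUILDER_INIT(buffer, sizeof(buffer));
    const struct spa_pod *params[1];

    pw_init(&argc, &argv);
    d.loop = pw_main_loop_new(NULL);
    d.stream = pw_stream_new_simple(pw_main_loop_get_loop(d.loop), "pull-demo",
        pw_properties_new(PW_KEY_MEDIA_TYPE, "Audio",
                          PW_KEY_MEDIA_CATEGORY, "Playback", NULL),
        &stream_events, &d);

    params[0] = spa_format_audio_raw_build(&builder, SPA_PARAM_EnumFormat,
        &SPA_AUDIO_INFO_RAW_INIT(.format = SPA_AUDIO_FORMAT_S16,
                                 .channels = 1, .rate = 48000));

    pw_stream_connect(d.stream, PW_DIRECTION_OUTPUT, PW_ID_ANY,
        PW_STREAM_FLAG_AUTOCONNECT | PW_STREAM_FLAG_MAP_BUFFERS, params, 1);

    pw_main_loop_run(d.loop);                 /* process() now runs per cycle */

    pw_stream_destroy(d.stream);
    pw_main_loop_destroy(d.loop);
    pw_deinit();
    return 0;
}
```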

So, back to Speech Dispatcher. After hearing this stuff, and piecing together the above explanation from various sources, I talked with the main Speech Dispatcher maintainer in the GNOME a11y room, and came to the conclusion that 1, the xrunning thing is either a PipeWire issue or a bug in the spd audio plugins which should be fixed, but more importantly B, that I must try to make a PipeWire audio backend for spd, because PipeWire is a very low-latency sound server, but also because it's the newest one and so on.

After about two weeks of churn and fighting memory corruption issues, because C really is that unsafe and I do appreciate Rust more, and also I hate autotools with a passion, my PR is now basically on the happy path, in a state where I could write this message with it as it is now. Even on my ancient system, I can feel the snappiness; this really does make a difference, albeit a small one, so I can't wait till it gets to people.

If you get a package update for Speech Dispatcher, and if you're on Arch you will end up getting one sooner or later, make sure you check the changes, release notes, or however your package repositories call that. If you see something saying that PipeWire support was added, I would appreciate it if as many of you as possible would test it out, especially where low-latency audio is required, and see whether the crackling you mysteriously experienced from time to time with the Pulse implementation goes away with this one. If there are issues, feel free to open them against Speech Dispatcher, mention them here or in any other Matrix rooms where both of us are, DM me on Matrix or here, etc. For the many adventurers around here, I recommend testing it early by manually compiling the pull request, I think it's the last one, the one marked as draft; set the audio output method to pipewire in speechd.conf (see below for what that looks like), replace your system default with the one you just built by running make install if you feel even more adventurous, and have fun!
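
For reference, assuming the new backend reuses the existing speechd.conf option, the change would look roughly like this; the "pipewire" value is my assumption here, so check the PR or release notes for whatever it ends up being called.

```
# speechd.conf (system-wide, or your per-user copy)
# "pipewire" is assumed; the existing values are things like "pulse" or "libao".
AudioOutputMethod "pipewire"
```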

I tested this with Orca, as well as other applications that use Speech Dispatcher, for example Kodi and RetroArch, and everything works well in my experience. If you're the debugging sort of person, and upon running your newly built speechd with PIPEWIRE_DEBUG=3 you get some "client missed 1 wakeups" errors, the PipeWire devs tell me that's because of kernel settings, especially scheduler-related ones, so if y'all want those to go away, you need to install a kernel configured for low-latency audio, for example Liquorix, but there are others as well. I would suggest you ignore those and go about your day, especially since you don't see this unless you amp up PipeWire's debugging a lot, and even then it might still just be buggy drivers on my very old hardware.

In closing, I'd like to thank everyone in the GNOME accessibility room, but in particular the spd maintainer; he helped me a lot when I was trying to debug issues related to my understanding of how spd works with its audio backends, what the fine print of the implicit contracts is, etc. Also, C is incredibly hard, especially at that scale, and I can say with confidence that this is the biggest piece of C code I've ever written, and I would definitely not want to repeat the experience for a while. None of the issues I encountered during these roughly two weeks of development and troubleshooting would have happened in Rust, or even Go, or, yeah, you get the idea, and I would definitely have written the thing in Rust if I knew enough autotools to hack it together, but even then I knew that would have been much harder to merge, so I didn't even consider it. To that end though, lots of thanks to the main PipeWire developer; he helped me when gdb and other software got me nowhere near solving those segfaults, or troubleshooting barely intelligible speech with lots of sound corruption and other artefacts due to reading invalid memory, etc.

All in all, this has been a valuable experience for me, and it has also been a wonderful time trying to contribute to one of the pillars of accessibility on the Linux desktop, to what's usually considered a core component. To this day, I still have to internalise the fact that I did it in the end, that it's actually happening for real, but as far as I'm concerned, we have realtime speech on Linux now, something I don't think NVDA with WASAPI is even close to approaching; that's just my opinion and I don't know how to back it up with any benchmarks, but if you know ways, I'm open to your ideas, or better, actual benchmark results between the Pulse and PipeWire backends would be even nicer, but I have no idea how to even begin with that.

Either way, I hope everyone, here or otherwise, has an awesome time testing this stuff out, because through all the pain of wrangling C, or my skill issues, into shape, I certainly had a lot of fun and thrills developing it, and now I'm passing it on to you, and may those bugs be squished flat!

in reply to the esoteric programmer

This is really cool to see. I'm not running Linux atm, but one of the things that really excited me about general progress on the platform was PipeWire, because I do a lot of audio work. What excited me less is that many people often advised turning it off or going back to Pulse exclusively if you're blind. Really cool to see there's work being done to really take advantage of PipeWire for speech.


Re last: I usually hate YouTube tutorials with a passion because of their inaccessibility ("enter this command", click click, "and then you get this", click click, and so on), but this guy did a really great job. Yes, he didn't spell out literally every command, but first, he clearly stated that you can go to a blog article linked in the description with all the commands typed out in text, and second, the most critical commands are indeed voiced by him, and he also mentions capital letters when needed (like "dash capital R" for a -R parameter). Again, I can't say it's 100% accessible without the article, but a blind user won't lose track of the video. #Accessibility @nextcloud