📣 Do-It-Blind (DIB) online meeting on Monday, 2 February, at 19:00. You're invited! bbb.metalab.at/rooms/joh-szv-o… Every Monday we discuss new forms of digital and inclusive collaboration. Join in! 🛠️ #make #blind #inklusion

yaxim 0.9.9c is live on Google Play.

~Major~ ~Minor~ Only change: added support for Android 15 in order to be allowed onto Google Play.

Change in privacy policy: Google insists on yaxim collecting email addresses (which it does not) because XMPP addresses look like email addresses (which they do).

Looks like I sneaked in a bug as well that impairs the self-ping timers. Investigation is ongoing.

#xmpp #yaxim #android #GooglePlay

This scarf saved my life
I was #obdachlos (homeless). It was winter. And she just stood there in front of me, wide-eyed: "You don't even have a scarf!" And then a little girl gave me hers.

I still have her #Schal (scarf). I sometimes wear it when I have a sore throat. Then I feel like I am more than just a problem.

During my studies I had had to flee my apartment. The whole story:

miss-jones.de/2024/01/30/obdac…

Folks, here's an NVDA add-on dev question (wxPython / accessibility). Please kindly boost for visibility.
I’m building an NVDA add-on with a lookup dialog that shows dictionary results. I tried embedding HTML inside the dialog (wxPython wx.html.HtmlWindow / wx.html2.WebView). NVDA often announces only “HTML window”, doesn’t read the content, and browse-mode features like NVDA+Space and single-letter navigation (e.g., H / Shift+H) don’t work reliably. Is there any recommended way to keep HTML content inside the same dialog (not opening a separate browseable message/window) while still making it accessible to NVDA—i.e., content is readable and (ideally) supports browse-mode style navigation? Any patterns, APIs, or known working approaches would be appreciated.
@NVAccess
in reply to Amir

Maybe - though it could also be that others have tried what you've done and seen that no one has done it before. We have just (in alpha) updated Python and other dependencies, so there may be a new way to do what you need. I don't actually play with the code enough myself to get you an answer; I'd have to ask the devs, and it's evening here now, so I wouldn't hear back until tomorrow at least - hence the push to ask in the groups.
in reply to Mew Projects

You will see this later. It does work perfectly and the recordings are of the correct length. What I did test was trying to record one programme while listening to another. That did not work. When the scheduled time occurred, even though the scheduled item was set to record only, it didn't allow the original stream to still be heard. That's not a big problem. I was just testing it to see what would happen.

I think the talk Simon (@S1m) and I gave on #UnifiedPush at #FOSDEM turned out very well. If you have 30 minutes and want to learn how push notifications in general and UnifiedPush in particular work, check out the recording.

gultsch.video/w/gRGZqKKvNBvvMe…


UnifiedPush - Push notifications. Decentralized and Open Source (FOSDEM26)


To understand how we can replace Google push notifications (FCM) with something open source and decentralized, we need to understand how they work and why they are needed in the first place. This talk explains the mechanics of push notifications and why, despite their potentially bad reputation, they are a more elegant solution than having every app maintain its own persistent server connection.

While open-source tools like microG can remove proprietary Google software from your Android phone, the actual notifications are still sent via Google's servers (Firebase Cloud Messaging).

UnifiedPush is a framework that allows push notifications to be delivered in a decentralized manner or through self-hosted servers. Numerous open-source Android apps already support UnifiedPush, including Tusky, Ltt.rs, Fedilab, DAVx⁵, Fennec, Element, and many more.

The presentation ends with a short demo on how to use UnifiedPush on Android.

Talk given at FOSDEM 2026 fosdem.org/2026/schedule/event…
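The application-server side of the flow the abstract describes is small: once the app has registered an endpoint URL through its UnifiedPush distributor and handed it to the server, delivering a message is just an HTTP POST of the raw payload to that URL. A minimal sketch, assuming a WebPush-style endpoint with a TTL header (the URL shape and header are illustrative assumptions, not taken from the talk):

```python
import urllib.request

def send_push(endpoint: str, payload: bytes, ttl: int = 86400) -> int:
    """Deliver one push message from an application server.

    `endpoint` is the per-app URL the Android client obtained from its
    UnifiedPush distributor; the push server behind it relays the payload
    to the device's single shared connection.
    """
    req = urllib.request.Request(
        endpoint,
        data=payload,
        method="POST",
        headers={
            "TTL": str(ttl),  # how long the push server may queue the message
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

This is exactly why apps don't need their own persistent connections: the app server only ever talks plain HTTP to the endpoint, and the device keeps one shared connection to its chosen push server.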



Well, it looks like there really isn’t a good way to present HTML content (not just plain text) inside an NVDA dialog. At least, if there is one, I haven’t seen anyone actually do it. Until I can find a proper solution, I’m reluctantly displaying the HTML content outside the dialog box. It’s far from ideal. If anyone knows a better approach, I’d really appreciate hearing about it.
@NVAccess
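If plain text is an acceptable fallback, one pattern (a sketch of an alternative, not something confirmed in this thread) is to flatten the HTML to text and show it in a read-only multiline wx.TextCtrl inside the dialog: NVDA reads that as an ordinary edit field, though browse-mode navigation is lost. A minimal stdlib-only HTML-to-text helper (class and function names are mine):

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Flatten HTML to plain text, inserting newlines at block boundaries,
    so the result can go into a read-only multiline wx.TextCtrl."""

    BLOCK = {"p", "div", "br", "li", "h1", "h2", "h3", "h4", "h5", "h6"}

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        # Start each block-level element on a new line.
        if tag in self.BLOCK and self.parts and not self.parts[-1].endswith("\n"):
            self.parts.append("\n")

    def handle_data(self, data):
        self.parts.append(data)

    def text(self) -> str:
        return "".join(self.parts).strip()

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return parser.text()
```

The trade-off is deliberate: a wx.TextCtrl with wx.TE_MULTILINE | wx.TE_READONLY exposes a standard accessible text control, so the content is at least fully readable and cursor-navigable in place, even without heading navigation.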

Up at 2 AM discussing more implementation details and patterns with Claude: how we will structure the new voice profile mixing in the frontend rather than burdening the Python driver, etc. Such is life. Some people really think AI coding is as easy as asking it to write it out, and maybe it can be in some contexts, but darn it, if I don't break down how I expect the API and the implementation in exacting detail, it's going to muddle things up. I know enough C++ to get around, and enough Python to get around. So I can tell it how to shape the contracts and callbacks, etc., what to rewrite and where. Then I read the resulting work. At least these days it feels less like holding the hands of a junior engineer and more like a nearly senior-level one, so that does feel better. It does need probing to check certain lines or functions when it thinks you haven't done something you already have, but otherwise we've come so far from 2020 that it's astonishing.
in reply to Andre Louis

@FreakyFwoof LOL! I've still seen GPT write Python in the middle of HTML code! Like, the thinking will suddenly turn to Python and it starts inserting functions in the middle of the darn HTML like nothing, no thought. Happened last week with me; I still laughed at that, as it's been there since the early days. I'm almost thinking memory and history context pulls it toward Python because it knows I've been working on that type of code, but then throwing an HTML thing at it still tilts the tuning to Python. Best educated guess. So I'm not surprised about it returning the same file like that either, lol. Some things really never change :D
in reply to Tamas G

For all that having been said, though, I am the happiest I've been with this latest add-on. It took off in a way I never expected. People are able to share music clips the same way sighted people share images and screenshots, just by copy/pasting the clipboard, so for all my annoyance, anger and hair-pulling, I regret nothing.
You working on synth engines is much more important, useful and practical though, so you take the win haha
in reply to Bri🥰

@Bri @FreakyFwoof lol, I bet the output from something like that is great though: a bunch of not-found commands, then it's like, "wait, but it's listed in the build system! What am I doing wrong?!" LOL. I see it a lot at work where we have a monorepo that holds both Android and iOS code, and if I'm careless with my prompt about which part of the repo to investigate, it starts to dig into the wrong parts. Haha. Then it's like, "but the user wanted info on Android, this is iOS..." and I laugh out loud each time.
in reply to Andre Louis

@FreakyFwoof OMG still. It's the manual grind work that really gets removed, even if it's just a profile swapper where I'm not the one renaming the JSONs or calling fan control with the argument to load another config. I don't mind editing the curves, although having the sample ones is nice. It's ironic with that app that you can go through the hardware detection wizard just fine, but then, when you're done, it turns into a monster soup of unlabeled mess, and choosing one of the list items for the categories can only be done reliably with object nav because it auto-switches to the "about" tab the moment you focus it. Crap like that is hard to code around, though, honestly, so even a small try counts.

A big and warm thank you hug to all the friends I met and talked to in Brussels this time. Two packed days of events before #FOSDEM including an awesome prize ceremony, then two intense days at ULB where I must have talked to more than a hundred persons. All the positivity, the appreciation, the smiles, the ideas, the energy.

I got to end-keynote the thing and then top it off with more drinks and countless friends - again.

I'm drained now, but in a good way. I'll be back next year.

Rant about internet upload speeds in general.

Sensitive content


I'm so sad about SpeechPlayer still, and just burnt out. But there's always more work to do, more phonemes to tune. It'll never sound good enough. I actually plan to introduce frontend overrides for eSpeak prosody itself: we'll strip prosody from the IPA and apply our own prosody rules in that pass. It's in the planning stage, on the implementation-sketch side, along with supporting the new frontend params in phonemes. All of it will help things move in the right direction, but it is so far away.
in reply to Tamas G

I'm all in favor of perfectionism, but with the recognition that it's not really achievable for human beings. For people striving for real excellence (I don't want to sound pretentious, but for "an artist" as opposed to a coder), of course it will never be good enough. Does that really matter when it's the best available, or a completely new thing? Also, if you're sick of it, why not step away for a while and come back after getting out of the routine? I speak as one who is not a coder and absolutely not an artist; you know your own situation best. Having said that, you've accomplished a huge amount and the work has been unremitting. Why not take a few months off and come back to it fresh?

The European Commission is pushing hard to extend Chat Control 1.0 - allowing mass scanning of private messages without court orders for another two years. Contact your MEPs TODAY via fightchatcontrol.eu/ to defend your privacy and digital rights!

As a software developer I know that creating working software is difficult. It's so easy to introduce a small bug somewhere and then nothing works. That may, however, contribute to how annoyed I get when software has quality-of-life gaps that are incredibly obvious and not at all difficult to fix.

My Marantz receiver has HEOS, and they're actually making quite a big deal out of it. HEOS is a thing that can stream music. If I put on music via HEOS, it immediately starts playing. Great! Except for the tiny fact that it takes the receiver 8 seconds to turn on. So when I started a track while the receiver was off, I missed the first 8 seconds of it. Or rather, I did, until I added a WiiM (in a misguided attempt at getting Tidal to work properly, which it still doesn't). Now, when I start playing something on the WiiM, #homeassistant turns on the receiver, pauses the playback for 8 seconds, and then resumes the music.

This is obvious, people!
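The workaround described above maps onto a small Home Assistant automation. A sketch only: the entity IDs, trigger state, and the 8-second delay are illustrative assumptions, not copied from the poster's actual config:

```yaml
automation:
  - alias: "Wait out receiver power-on before playback"
    trigger:
      # Fires when the streamer starts playing (hypothetical entity ID).
      - platform: state
        entity_id: media_player.wiim
        to: "playing"
    condition:
      # Only intervene if the receiver is still off.
      - condition: state
        entity_id: media_player.marantz
        state: "off"
    action:
      - service: media_player.turn_on
        target:
          entity_id: media_player.marantz
      # Hold the track back while the receiver boots.
      - service: media_player.media_pause
        target:
          entity_id: media_player.wiim
      - delay: "00:00:08"
      - service: media_player.media_play
        target:
          entity_id: media_player.wiim
```

The fixed delay is the crude part; if the receiver's integration exposes a power state, triggering the resume on that state change would be more robust than a timer.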

Using agile with JJ flex

Sensitive content

in reply to Noah Tobias Carver 👨🏼‍🦯🇺🇦

@Noah Tobias Carver 👨🏼‍🦯🇺🇦 @Pitermach @Drew Mochak I think you can install Google TalkBack on @GrapheneOS if you so wish. It's just that an open-source variant of TalkBack is included from the start.
And the updates to TalkBack are coming at a slightly slower pace, but it's not abandoned.
github.com/GrapheneOS/talkback…
in reply to Peter Vágner

@pvagner @pitermach @GrapheneOS Yeah there seem to be a few different open source TB ports floating around. There is this one, which attempts to de-google the source code:
github.com/talkback-foss-team/…

But that appears to be at TB 13, which is pretty old.

The latest Google source is at 15.1:
github.com/google/talkback/

And you can install what appears to be a straight build of this source from F-Droid:
f-droid.org/packages/com.andro…

Or, as mentioned, you can just get the latest TB from the Play Store and you're off to the races, which is what I did.

in reply to Peter Vágner

@pvagner @pitermach @prism GrapheneOS comes with our own fork of the open source TalkBack which we'll be updating soon. We recently built our own high quality text-to-speech implementation with a model we made ourselves with open source training data because we couldn't find any high quality open source options. Our text-to-speech is going to launch on our App Store soon and then will be included in GrapheneOS to have it working out-of-the-box followed by setup wizard integration.
in reply to Jeffrey D. Stark

@JStark @prism That's the main reason why we built it. The purpose of it is providing a high quality modern implementation of text-to-speech using a model trained with entirely open source data while providing much better performance than existing options. The poor performance of the existing options results in very poor usability with TalkBack due to high latency. It also means they're wasting a lot of power. We're working on a final round of improvements prior to bundling it.
in reply to GrapheneOS

@JStark @prism We're also going to make our own speech-to-text implementation too. The priority is finishing polishing up text-to-speech so we can ship it in GrapheneOS as a default enabled implementation. Once that's finished, we can start work on other things including speech-to-text. We're going to make a high quality implementation for English text-to-speech and speech-to-text before getting to other languages. For other languages, we'll do it based on estimated GrapheneOS usage.
in reply to GrapheneOS

@JStark @prism Having English text-to-speech is the main thing we need for accessibility since that's enough for blind people with even only a basic understanding of English to install another text-to-speech app. Currently, that's likely nearly anyone using GrapheneOS. In the future we do want to have various other languages but it's something which can be handled via installing another app for TTS, STT, keyboard, etc. already and our focus is improving out-of-the-box accessibility.
in reply to Pitermach

@pitermach @prism Murena Fairphones aren't a safe option due to lack of basic privacy and security patches. They include their own invasive services including sending data to third party services from OpenAI and others without user consent. Despite the marketing, they have highly privileged integration for Google apps and services along with always connecting to Google servers anyway. Strongly recommend reading the info at discuss.grapheneos.org/d/24134… and the linked 3rd party articles.
in reply to GrapheneOS

@pitermach @prism Android 16 QPR2 was released to the Android Open Source Project on launch day. That's not why /e/ lags far behind on updates to AOSP. They lag even further behind on updates to the Linux kernel and other components, including drivers and firmware. It's just not properly maintained, and privacy/security are not prioritized despite the marketing. /e/ is a fork of LineageOS, which itself lags behind Android releases, but /e/ takes much longer to ship updates.

So, rather than watch the rest of the performers bow out of gigs, he's just going to close it down for construction? Can't wait to see what he does with the place. From WaPo:

"Trump plans to close Kennedy Center for about two years, starting in July. Under the proposal, the Kennedy Center could close on July 4, coinciding with America’s 250th anniversary."

“I have determined that The Trump Kennedy Center, if temporarily closed for Construction, Revitalization, and Complete Rebuilding, can be, without question, the finest Performing Arts Facility of its kind, anywhere in the World,” Trump wrote in a post on Truth Social. “In other words, if we don’t close, the quality of Construction will not be nearly as good, and the time to completion, because of interruptions with Audiences from the many Events using the Facility, will be much longer. The temporary closure will produce a much faster and higher quality result!”

washingtonpost.com/style/2026/…
