I’m currently working on an interesting project. Last year, I met a former homeless man named Peter. He lost all his money to gambling. On the streets, he sold a street magazine, and later he started writing his own book. Today, most of his income comes from selling that book.
He wanted to create an audiobook. Since the budget was small, we did it like this: I gave him a Zoom H1N recorder. He locked himself in a relatively quiet room and gradually recorded the entire book. He sent me the raw material, which I ran through @Auphonic to remove background noise and room echo and to balance the loudness levels.
Now I just need to remove the mistakes and create the music background. It won’t be full studio quality — but honestly, I’ve heard “studio” recordings that sounded much worse than what we’re working on now.

Another nice talk at @OpenAlt starts at 12:00, in some 40 minutes: improving the world map with @MapComplete.
Watch it at this link if you like.
vhsky.cz/w/9Hdaqab9CvbPwhk1VnD…
Or see the schedule here: talks.openalt.cz/openalt-2025/…


I've just discovered Michal Hrušecký is talking about @Delta Chat at the @OpenAlt conference.
The talk is in Czech.
He had a similar talk a few weeks ago:
talks.openalt.cz/openalt-2025/…

OpenAlt reshared this.

I know this is not a support site or programming course but I can't figure this thing out. If you do know #rust perhaps you can give me a helping hand. I am trying to contribute to an app.

I'd like to parse IPv4 addresses given as command line argument values.
I have two arguments accepting an IPv4 address.
If I specify a single such option, all is fine.
If I specify both, I'm getting an error like this:

thread 'main' (624061) panicked at /home/peto/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/clap-3.2.25/src/parser/matches/arg_matches.rs:1879:13:
Must use `Arg::allow_invalid_utf8` with `_os` lookups at `[hash: A8F400C40154F09]`

This is a simplified version of my code showcasing the issue:
```
use std::net::{IpAddr, Ipv4Addr};
use clap::{App, AppSettings, Arg, value_parser};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let app = App::new("Server APP")
        .about("My super cool app")
        .setting(AppSettings::DeriveDisplayOrder)
        .setting(AppSettings::SubcommandsNegateReqs)
        .arg(
            Arg::with_name("socket")
                .required(true)
                .takes_value(true)
                .long("socket")
                .help("Unix socket path"),
        )
        .arg(
            Arg::with_name("relayaddress")
                .required(false)
                .takes_value(true)
                .long("relay-address")
                .value_parser(value_parser!(Ipv4Addr))
                .help("External relay IPv4 address used together with --listen-address to run behind a NAT"),
        )
        .arg(
            Arg::with_name("listenaddress")
                .required(false)
                .takes_value(true)
                .long("listen-address")
                .value_parser(value_parser!(Ipv4Addr))
                .help("Local listen IPv4 address used together with --relay-address to run behind a NAT"),
        );
    let matches = app.clone().get_matches();
    if matches.is_present("relayaddress") && matches.is_present("listenaddress") {
        let external_ip = IpAddr::V4(*matches.get_one::<Ipv4Addr>("relayaddress").expect("Invalid address"));
        let local_ip = IpAddr::V4(*matches.get_one::<Ipv4Addr>("listenaddress").expect("Invalid address"));
        println!("Listening on local IP: {local_ip}");
        println!("Relaying through external IP: {external_ip}");
    }
    Ok(())
}
```

#rust #rustlang #programming #fedihelp


in reply to Peter Vágner


Sensitive content

in reply to Federico Mena Quintero

@Federico Mena Quintero Oh, huge thanks for taking a look. Yes, it's clap 3. It's not my decision; I'm attempting to contribute to an existing project, so if I can make it work without major changes, that might be helpful, as I don't feel qualified to make decisions on this. I'm a novice when it comes to #rust.
The --socket argument is required; the other two arguments are supposed to be used together, and this condition is tested at runtime.
So if I specify all three command line arguments, I am always getting that error.
I have attempted using OsStr and casting, but the issue remains. I am simply compiling the app with cargo build --release.
Were you just adding stuff I may have overlooked when trying to simplify for posting, or did you actually change something, please?
in reply to Federico Mena Quintero

@Federico Mena Quintero I've finally figured it out. The issue was not parsing IPv4 addresses but using matches.from_os() on the socket argument.
I have changed it to use std::path::PathBuf and it's working fine for me now.
Huge thanks for the friendly hint and for looking at my code.
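For readers following along without the full project: the fix boils down to looking the socket up as a typed PathBuf instead of as an `_os` string. A dependency-free sketch of the underlying conversions that clap's `value_parser!` relies on (the address and path values here are made-up examples, not the project's actual configuration):

```rust
use std::net::Ipv4Addr;
use std::path::PathBuf;

fn main() {
    // value_parser!(Ipv4Addr) ultimately relies on str::parse::<Ipv4Addr>().
    let relay: Ipv4Addr = "198.51.100.7".parse().expect("invalid relay address");
    let listen: Ipv4Addr = "192.168.1.2".parse().expect("invalid listen address");

    // A PathBuf, unlike a String, can hold non-UTF-8 OS paths, which is why
    // looking the socket up as a PathBuf avoids the `_os` lookup panic.
    let socket: PathBuf = PathBuf::from("/run/app/server.sock");

    println!("relay={relay} listen={listen} socket={}", socket.display());
}
```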

Today I finally experienced Samsung’s HIYA spam blocking in action. It works perfectly — except that my food delivery courier couldn’t reach me. Clever guy though, he called again from a hidden number. I later found his repeated calls under “Other calls.” No idea why he ended up in the spam filter; it doesn’t seem like there’s a way to unblock him. Maybe adding him to my contacts would help…

This project looks promising. It looks like an amazing replacement for StationPlaylist. It supports playlists, automation and looping. I sometimes play music during family events using SPL; maybe next time I will try this. Seems my next procrastination moment will be spent with this. github.com/michaldziwisz/sara. Latest release at github.com/michaldziwisz/sara/…


in reply to Paweł Masarczyk

@Piciok @pvagner I’m glad the project caught your interest. To be honest, I’ve put it on hold for now because I ran into problems I couldn’t quite overcome — at least not easily with AI. However, I’ll definitely come back to it someday, since I’ve always wanted to build my own live broadcasting system; I just lacked the coding skills. Yes, SARA will be designed primarily around the traditional model of a live radio DJ — using a mixer, sound card (preferably several), and absolutely no virtual mixers or anything like that.

I really want to test this


For a few years I've been aware of this website that purports to be able to unlock shopping cart wheels using the speaker on your phone, but I finally had an excuse to try it, and I remembered it in the moment.

A woman was outside the grocery store struggling to move her shopping cart, which was stuck because two of the wheels were locked.

I remembered the website! So I put my phone near the wheels and played the sound. The wheels unlocked like magic. She was very happy. So cool.

begaydocrime.com/


If you are into #tranceMusic #uplifting #vocalTrance, have a listen to these stunning tracks

Paipy & Elles de Graaf - The Last Time

Driftmoon X XiJaro & Pitch - Rise Again

RAM & Arctic Moon & Stine Grove - A Billion Stars Above

in reply to ondrosik

@ondrosik I think this is a difference in how #friendica generates the home timeline feed as compared to Mastodon and other common #fediverse servers. Friendica also includes replies in the home timeline feed. There are apps that can filter out posts that are replies to other posts client-side; however, TWBlue may not be able to do that, I think.

Do you know that you can use Subtitle Edit to transcribe audio? It has a relatively accessible GUI, so you can use Purfview's Faster-Whisper XXL, cpp, cpp CUBLAS or Const-me. A longer post on how to use it follows:

Installing Subtitle Edit


Download the program from the developer’s website. Navigate to the level 2 heading labeled “Files.”
If you want to install Subtitle Edit normally, download the first file, labeled setup.zip.
There is also a portable version available, labeled SE_version_number.zip.

If you decide to use the portable version, extract it and move on to the next section of this article. The installation itself is standard and straightforward.

A Note on Accessibility


NVDA cannot automatically obtain focus in lists.
To find out which item in the list is currently selected, move down with the arrow key to change the item, then press NVDA+TAB to hear which one is focused.

Initial Setup


  • In the menu bar, go to Video and activate Audio to text (Whisper).
  • When using this feature for the first time, the program may ask whether you want to download FFMPEG. This library allows Subtitle Edit to open many audio and video files, so confirm the download by pressing Yes.
  • Subtitle Edit will confirm that FFMPEG has been downloaded and then ask whether you want to download Purfview's Faster-Whisper XXL. This is the interface for the Whisper model that we’ll use for transcription, so again confirm by pressing Yes.
  • The download will take a little while.
  • Once it’s complete, you’ll see the settings window. Press Tab until you reach the Languages and models section. In the list, select the language of your recording.
  • Press Tab to move to the Select model option, and then again to an unlabeled button.
  • After activating it, choose which model you want to use. Several models are available:
    • Small models require less processing power but are less accurate.
    • Large models take longer to transcribe, need more performance and disk space, but are more accurate.
      I recommend choosing Large-V3 at this step.


  • Wait again for the model to finish downloading.


Transcribing Your First Recording


  • Navigate to the Add button and press Space to activate it.
  • A standard file selection dialog will open. Change the file type to Audio files, find your audio file on the disk, and confirm.
  • Activate the Generate button.
  • Now, simply wait. The Subtitle Edit window doesn’t provide much feedback, but you can tell it’s working by the slower performance of your computer—or, if you’re on a laptop, by the increased fan noise.
  • When the transcription is done, Subtitle Edit will display a new window with an OK button.


We Got Subtitles, So One More Step


In the folder containing your original file, you’ll now find a new file with the .srt extension.
This is a subtitle file—it contains both the text and the timing information. Since we usually don’t need timestamps for transcription, we’ll remove them in Subtitle Edit as follows:

  • Press Ctrl+O (or go to File → Open) to bring up the standard open file dialog. Select the .srt file you just got.
  • In the menu bar, open File → Export → Plain text.
  • Choose Merge all lines, and leave Show line numbers and Show timecode unchecked.
  • Press Save as and save the file normally.

If you’re transcribing multiple recordings, it’s a good idea to close the current subtitle file by starting a new project using Ctrl+N or by choosing File → New.
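The export step above can also be scripted. A minimal sketch of the same srt-to-plain-text idea, dropping cue numbers, timing lines and blank separators (illustrative only; Subtitle Edit's own exporter is more robust, e.g. against subtitle lines that are bare numbers):

```rust
// Convert .srt subtitle text to plain text: keep only the spoken lines.
fn srt_to_text(srt: &str) -> String {
    srt.lines()
        .filter(|l| {
            let t = l.trim();
            !t.is_empty()
                && !t.contains("-->")        // timing lines: 00:00:01,000 --> 00:00:04,000
                && t.parse::<u32>().is_err() // bare cue numbers
        })
        .map(|l| l.trim())
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let srt = "1\n00:00:01,000 --> 00:00:04,000\nHello there.\n\n\
               2\n00:00:05,000 --> 00:00:07,500\nNice to see you.\n";
    println!("{}", srt_to_text(srt));
}
```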

Conclusion


Downloaded models can, of course, be reused, so future transcriptions will go faster.
In this example, I used Purfview's Faster-Whisper. If you want to use a different model, you can select it from the model list, and Subtitle Edit will automatically ask whether you’d like to download it.

Peter Vágner reshared this.

For the last 3 months I have been using VDO Ninja for all my remote interview and podcast recordings. Here is my article about it from a blind user's perspective, focused on accessibility and audio.

Have You Ever Wanted to Record an Interview or Podcast Online? You’ve probably faced a few challenges:
How to transmit audio in the highest possible quality?
How to connect in a way that doesn’t burden your guest with installing software?
And how to record everything, ideally into separate tracks?

The solution to these problems is offered by the open-source tool VDO Ninja.

What Is VDO Ninja


It’s an open-source web application that uses WebRTC technology. It allows you to create a P2P connection between participants in an audio or video call and gives you control over various transmission parameters.
You can decide whether the room will include video, what and when will be recorded, and much more.

In terms of accessibility, the interface is fairly easy to get used to — and all parameters can be adjusted directly in the URL address when joining.
All you need is a web browser, either on a computer or smartphone.

Getting Started


The basic principle is similar to using MS Teams, Google Meet, and similar services.
All participants join the same room via a link.
However, VDO Ninja distinguishes between two main types of participants: Guests and the Director.
While the guest has limited control, the director can, for example, change the guest’s input audio device (the change still must be confirmed by the guest).

A Few Words About Browsers


VDO Ninja works in most browsers, but I’ve found Google Chrome to be the most reliable.
Firefox, for some reason, doesn’t display all available audio devices, and when recording multiple tracks, it refuses to download several files simultaneously.

Let’s Record a Podcast


Let’s imagine we’re going to record our podcast, for example, Blindrevue.
We can connect using a link like this:

https://vdo.ninja/?director=Blindrevue&novideo=1&proaudio=1&label=Ondro&autostart=1&videomute=1&showdirector=1&autorecord&sm=0&beep

Looking at the URL more closely, we can see that it contains some useful instructions:
  • director – Defines that we are the director of the room, giving us more control. The value after the equals sign is the room name.
  • novideo – Prevents video from being transmitted from participants. This parameter is optional but useful when recording podcasts to save bandwidth.
  • proaudio – Disables effects like noise reduction, echo cancellation, automatic gain control, compression, etc., and enables stereo transmission.
    Be aware that with this setting, you should use headphones, as echo cancellation is disabled, and otherwise, participants will hear themselves.
  • label=Ondro – Automatically assigns me the nickname “Ondro.”
  • autostart – Starts streaming immediately after joining, skipping the initial setup dialog.
  • videomute – Automatically disables the webcam.
  • showdirector – Displays our own input control panel (useful if we want to record ourselves).
  • autorecord – Automatically starts recording for each participant as they join.
  • sm=0 – Ensures that we automatically hear every new participant without manually unmuting them.
  • beep – Plays a sound and sends a system notification when new participants join (requires notification permissions).

For guests, we can send a link like this:

https://vdo.ninja/?room=Blindrevue&novideo=1&proaudio=1&label&autostart=1&videomute=1&webcam

Notice the differences:
  • We replaced director with room. The value must remain the same, otherwise the guest will end up in a different room.
  • We left label empty — this makes VDO Ninja ask the guest for a nickname upon joining.
    Alternatively, you can send personalized links, e.g., label=Peter or label=Marek.
  • The webcam parameter tells VDO Ninja to immediately stream audio from the guest’s microphone; otherwise, they’d need to click “Start streaming” or “Share screen.”
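Long parameter strings like these are easy to mistype, so the two links can also be generated programmatically. A small dependency-free sketch (the room name, nickname and chosen flags are just the examples from above; parameter order doesn't matter to VDO Ninja):

```rust
// Build VDO Ninja links for the director and for guests from one flag list,
// so the room name and transmission flags stay consistent between them.
fn vdo_link(role: &str, room: &str, label: Option<&str>, extra: &[&str]) -> String {
    let mut parts = vec![format!("{role}={room}")];
    match label {
        // A concrete label joins with that nickname pre-filled.
        Some(l) => parts.push(format!("label={l}")),
        // A bare `label` parameter makes VDO Ninja prompt for a nickname.
        None => parts.push("label".to_string()),
    }
    parts.extend(extra.iter().map(|s| s.to_string()));
    format!("https://vdo.ninja/?{}", parts.join("&"))
}

fn main() {
    let common = ["novideo=1", "proaudio=1", "autostart=1", "videomute=1"];
    println!("{}", vdo_link("director", "Blindrevue", Some("Ondro"), &common));
    println!("{}", vdo_link("room", "Blindrevue", None, &common));
}
```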


How to Join


Simply open the link in a browser.
In our case, the director automatically streams audio to everyone else.
Participants also join by opening their link in a browser.
If a nickname was predefined, they’ll only be asked for permission to access their microphone and camera.
Otherwise, they’ll also be prompted to enter their name.

Usually, the browser will display a permission warning.
Press F6 to focus on it, then Tab through available options and allow access.

Controls


The page contains several useful buttons:

  • Text chat – Toggles the text chat panel, also allows sending files.
  • Mute speaker output – Mutes local playback (others can still hear you).
  • Mute microphone – Mutes your mic.
  • Mute camera – Turns off your camera (enabled by default in our example).
  • Share screen / Share website – Allows screen or site sharing.
  • Room settings menu (director only) – Shows room configuration options.
  • Settings menu – Lets you configure input/output devices.
  • Stop publishing audio and video (director only) – Stops sending audio/video but still receives others.


Adjusting Input and Output Devices


To change your audio devices:

  1. Activate Settings menu.
  2. Press C to jump to the camera list — skip this for audio-only.
  3. Open Audio sources to pick a microphone.
  4. In Audio output destination, select your playback device. Press the test button to test it.
  5. Close settings when done.


Director Options


Each guest appears as a separate landmark on the page.
You can navigate between them quickly (e.g., using D with NVDA).

Useful controls include:

  • Volume slider – Adjusts how loud each participant sounds (locally only).
  • Mute – Silences a guest for everyone.
  • Hangup – Disconnects a participant.
  • Audio settings – Adjusts their audio input/output remotely.


Adjusting Guest Audio


Under Audio settings, you can:

  • Enable/disable filters (noise gate, compressor, auto-gain, etc.).
  • View and change the guest’s input device — if you change it, a Request button appears, prompting the guest to confirm the change.
  • Change the output device, useful for switching between speaker and earpiece on mobile devices.


Recording


Our URL parameters define automatic recording for all participants.
Recordings are saved in your Downloads folder, and progress can be checked with Ctrl+J.

Each participant’s recording is a separate file.
For editing, import them into separate tracks in your DAW and synchronize them manually.
VDO Ninja doesn’t support single-track recording, but you can use Reaper or APP2Clap with a virtual audio device.

To simplify synchronization:

  1. Join as director, but remove autorecord.
  2. Wait for everyone to join and check audio.
  3. When ready, press Alt+D to edit the address bar.
  4. Add &autorecord, reload the page, and confirm rejoining.
  5. Recording now starts simultaneously for everyone.
  6. Verify this in your downloads.


Manual Recording


To start recording manually:

  1. Open Room settings menu.
  2. Go to the Room settings heading.
  3. Click Local record – start all.
  4. Check PCM recording (saves WAV uncompressed).
  5. Check Audio only (records sound without video).
  6. Click Start recording.


Important Recording Notes


  • Always verify that all guest streams are recording.
  • To end recordings safely, click Hangup for each guest or let them leave.
  • You can also toggle recording for each guest under More options → Record.
  • Files are saved as WEBM containers. If your editor doesn’t support it, you can convert them using the official converter.
  • Reaper can open WEBM files but may have editing issues — I prefer importing the OPUS audio file instead.


Recommended Reading


In this article, I’ve covered only a few features and URL parameters.
For more details, check the VDO Ninja Documentation.

1 / 2: Did you know @GNOME Files aka #nautilus has a nifty feature where it can batch rename files? Advanced features include adding sequential numbering, using placeholders and doing search and replace on the names of selected files. #ScreenReader #a11y is preserved.
In order to use it, just select multiple files and find the Rename item in the Shift+F10 popup menu, or simply press F2. Also... Don't be shy to press the add button in the batch rename dialog.

Yeah, I've updated my @Arch Linux to @GNOME 49.
There are some nifty #a11y related tweaks, such as better labelling for GNOME Shell menus and a refreshed Settings UI. I like how various lists, e.g. the list of wireless networks, are presented with a screen reader, including signal strength.

Thanks to everyone involved for the improvements.

Hello @GrapheneOS screen-reader users and other #a11y friends,

There was an interesting debate going on at the end of May where screen reader users were asking for a #tts engine included with the GrapheneOS base system.
grapheneos.social/@GrapheneOS/…

I understand this is very unlikely to change in the near future as I am not aware of a TTS system that is open-source and modern enough to be included.
@Accessible Android has a list of TTS engines sorted by language at this page: accessibleandroid.com/list-of-…
Apart from eSpeak-ng and RHVoice, there is another open-source app called SherpaTTS that can use Piper TTS and Coqui-based voices: github.com/woheller69/ttsEngin…
Considering eSpeak-ng, RHVoice, SherpaTTS and the list of TTS engines mentioned by Accessible Android, is there a viable TTS engine, or at least one close enough to being viable, that could get included in the foreseeable future?

Another approach I have been thinking about is to add or inject the TTS app, or any other app I'd like, as part of the install process. It turns out I am not the only one speculating about that idea, and it's neither practical nor feasible, as it also breaks the security model.
It's been discussed recently at: discuss.grapheneos.org/d/25899…

Another way to install an app on an Android device would be using adb install from a computer. I am not entirely sure about this, but GrapheneOS does not allow enabling ADB on production builds. In order to install a TTS app over ADB, we'd need to find a way to install GrapheneOS with ADB pre-enabled on first run. This is a huge security hole as well.

There might be a way to build my own flavour of GrapheneOS, but that's too involved; I'd need a very powerful machine for the actual build process, and I would again compromise security by either disabling future updates or handling them on my own, building each new release myself.

So given the current state, I am afraid we screen reader users are out of luck, and there is no way for me to get this running without help from someone else.

The end result is that I'll either get security or I can look elsewhere to get accessibility.

Please, am I getting this right, or might I have overlooked something that would help me install GrapheneOS on my own?

Thanks for reading to the end

LunaticStrayDog reshared this.

in reply to Peter Vágner

One of our full time developers is actively working on building our own text-to-speech and speech-to-text integration. It's where all of their effort is currently going. None of the available apps are suitable for inclusion. None are modern enough aside from Sherpa and it has issues including high latency making it unsuitable for use with TalkBack. Our own implementation is going to be significantly better.

ADB works fine on GrapheneOS but you'd have to enable it.

in reply to boredsquirrel

@Rhababerbarbar We're making our own implementation for inclusion in GrapheneOS. It will be similar in design to Sherpa but faster. It will initially just be English. People can still install Sherpa and other TTS implementations if they want them. We just need something available out-of-the-box for blind users to install GrapheneOS and also basic usability. It's fine if people need to install other TTS implementations for other languages, etc. but we can add that too.
in reply to GrapheneOS

I have asked a friend and @GrapheneOS community chat members for help with the initial setup, and now I am fully configured with RHVoice as my current TTS of choice.
Apart from one GPS navigation app I am used to, everything is working fine for me, including proprietary stuff for my work like Microsoft Teams; banking apps including Poštová banka, George and Revolut; and the other apps I like, such as Bitwarden as a password manager; ArcaneChat, Conversations, ElementX, FairEmail and OpenKeychain for chatting and emailing; AntennaPod, BubbleUPnP, Foobar2000, Kore, Voice, NewPipe and ytdlnis for podcasts, music, audiobooks and videos; Catima for loyalty cards and tickets; and some other apps. For downloading apps I am mainly using F-Droid and the Aurora Store. I am not signed into a Google account, but I am using Play services for push notifications and other compatibility reasons for apps which need them.

Thanks for everything you are doing, it's fantastic and I like it very much.

in reply to Peter Vágner

All location-based apps should work, but some may expect network location to be available which it isn't by default. You can enable Network location and Wi-Fi scanning in Settings > Location > Location services if you want network location without needing to use Google Play for location. See grapheneos.org/features#networ….

If you installed apps before sandboxed Google Play and they depend on it, you may need to reinstall the apps depending on it so they detect it properly.

I've just noticed piped.video can still be used for playing videos. It's just that the public instance at piped.video and some other instances require registration.
@Archos and friends, have you explored ways to eventually host it at @Oscloud, please?
I'd host it myself, but I don't have a spare machine at a location with a suitable IPv6 range for being somewhat resilient to YouTube throttling attempts.

Thanks for considering

Geeky stuff aside, let's enjoy the hottest Friday party ever. @Marek Macko is on air again with his awesome show called #playgroundLive. #dance, #trance, a bit of #hardStyle, #eurodance, #90s and other party genres in an incredible live mix performance lasting a few hours, spiced up with some random chat messages from fellow listeners and friends.

Fun project for a Friday evening. I am hosting some unmaintained websites for a few friends, and they help me pay the hosting costs in return.
Now I have found out I need PHP 7.4 for some of these and it's not readily available, so I am building PHP 7.4.33 on up-to-date Arch Linux.

I am afraid this will no longer be possible in the future. How do you deal with this? Can I run PHP in some kind of lightweight container?

The more I look through @Delta Chat apps and resources, the more I believe it should become the number one messenger of choice for screen reader users.
The developers are constantly improving its #a11y. It's secure from the start.
Additionally, the desktop client has undergone an #accessibility audit, and accessibility issues are clearly documented in public.
I am not sure any other messenger-style app on the planet has ever had such a dedicated commitment to accessibility.

github.com/deltachat/deltachat…

in reply to Paweł Masarczyk

@Paweł Masarczyk While testing I have used gmail or my own classic email account. It worked but I have understood using #chatmail servers is what makes it most attractive. I think I have read a blogpost from someone a few months ago explaining this very well that inspired me, however I can't find that article in my browser history right now. CC @Cleverson @Delta Chat
in reply to Delta Chat

@Paweł Masarczyk @Cleverson I know you are looking for a way to stay in contact with people you are already connected to using traditional email. Still, I would recommend creating a #chatmail account on your relay of choice just for testing, so you can start with an empty profile and experience @Delta Chat the way it is meant to be. Then, as an exercise, continue with other more advanced scenarios such as classic email login.
The #chatmail based onboarding is really very simple; there is nothing you can do wrong.

Playing with various technologies tonight. Managed to set up a brand new Debian 12 systemd-nspawn container. Now looking through various documentation and other internet sources to combine it into a working #chatmail relay server. The thing is, I only have a single IP address on the host system, so I need to adapt it so the nginx running on the host either redirects or proxies to the container. Plus, I guess I need to share the certificate in some way so postfix and dovecot can use it too.
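For the HTTP side of that setup, the host nginx could terminate TLS and proxy to the container roughly like this. A sketch only: the hostname, container address and port are placeholders, the mail ports are a separate problem (postfix and dovecot terminate TLS themselves, so they need plain port forwarding), and the certificate files could be bind-mounted into the container so all three daemons share them:

```nginx
# Host-side nginx: terminate TLS and proxy HTTP to the nspawn container.
# chatmail.example.org, 192.168.100.2 and 8080 are placeholder values.
server {
    listen 443 ssl;
    server_name chatmail.example.org;

    ssl_certificate     /etc/letsencrypt/live/chatmail.example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/chatmail.example.org/privkey.pem;

    location / {
        proxy_pass http://192.168.100.2:8080;  # container's internal address
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```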