He wanted to create an audiobook. Since the budget was small, we did it like this: I gave him a Zoom H1N recorder. He locked himself in a relatively quiet room and gradually recorded the entire book. He sent me the raw material, which I ran through @Auphonic to remove background noise and room echo and to balance the loudness levels.
Now I just need to edit out the mistakes and add the background music. It won’t be full studio quality — but honestly, I’ve heard “studio” recordings that sounded much worse than what we’re working with now.
Improve the world map with @MapComplete.
Watch it at this link if you like.
vhsky.cz/w/9Hdaqab9CvbPwhk1VnD…
Or see the schedule here: talks.openalt.cz/openalt-2025/…
OpenAlt 2025 Conference – D105
Welcome to the OpenAlt 2025 conference. 🗓️ 👍 Schedule and ratings: https://www.openalt.cz/2025/program/VHSky
The talk is in Czech.
He gave a similar talk a few weeks ago:
talks.openalt.cz/openalt-2025/…
DeltaChat – finally an innovative IM, OpenAlt 2025
There is a glut of IM services these days. Classic XMPP now competes with services such as Matrix, Signal, Telegram, WhatsApp and others. There is one service, however, that stands out in this flood of messengers – DeltaChat. talks.openalt.cz
I'd like to parse IPv4 addresses given as command-line argument values.
I have two arguments accepting an IPv4 address.
If I specify a single such option, all is fine.
If I specify both, I get an error like this:
```
thread 'main' (624061) panicked at /home/peto/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/clap-3.2.25/src/parser/matches/arg_matches.rs:1879:13:
Must use `Arg::allow_invalid_utf8` with `_os` lookups at `[hash: A8F400C40154F09]`
```
This is a simplified version of my code showcasing the issue:
```
use std::net::{IpAddr, Ipv4Addr};

use clap::{App, AppSettings, Arg, value_parser};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let app = App::new("Server APP")
        .about("My super cool app")
        .setting(AppSettings::DeriveDisplayOrder)
        .setting(AppSettings::SubcommandsNegateReqs)
        .arg(
            Arg::with_name("socket")
                .required(true)
                .takes_value(true)
                .long("socket")
                .help("Unix socket path"),
        )
        .arg(
            Arg::with_name("relayaddress")
                .required(false)
                .takes_value(true)
                .long("relay-address")
                .value_parser(value_parser!(Ipv4Addr))
                .help("External relay IPv4 address used together with --listen-address to run behind a NAT"),
        )
        .arg(
            Arg::with_name("listenaddress")
                .required(false)
                .takes_value(true)
                .long("listen-address")
                .value_parser(value_parser!(Ipv4Addr))
                .help("Local listen IPv4 address used together with --relay-address to run behind a NAT"),
        );
    let matches = app.get_matches();
    if matches.is_present("relayaddress") && matches.is_present("listenaddress") {
        let external_ip = IpAddr::V4(*matches.get_one::<Ipv4Addr>("relayaddress").expect("Invalid address"));
        let local_ip = IpAddr::V4(*matches.get_one::<Ipv4Addr>("listenaddress").expect("Invalid address"));
        println!("Listening on local IP: {local_ip}");
        println!("Relaying through external IP: {external_ip}");
    }
    Ok(())
}
```
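Independently of clap, Rust's standard library can already parse IPv4 strings through `FromStr`, which is what `value_parser!(Ipv4Addr)` builds on. A minimal sketch of that parsing step in isolation (the `parse_ipv4` helper is hypothetical, not part of the code above):

```rust
use std::net::Ipv4Addr;

// Hypothetical helper: parse a string into an Ipv4Addr using only the
// standard library, returning a readable error instead of panicking.
fn parse_ipv4(s: &str) -> Result<Ipv4Addr, String> {
    s.parse::<Ipv4Addr>()
        .map_err(|e| format!("'{s}' is not a valid IPv4 address: {e}"))
}

fn main() {
    // A valid dotted-quad address parses cleanly.
    assert_eq!(parse_ipv4("192.168.1.10"), Ok(Ipv4Addr::new(192, 168, 1, 10)));
    // Invalid input yields an error value instead of a panic.
    assert!(parse_ipv4("not-an-ip").is_err());
    println!("ok");
}
```

Checking the parse result up front like this can help narrow down whether a failure comes from the address parsing itself or from the clap lookup machinery.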
The --socket argument is required; the other two arguments are supposed to be used together, and that condition is tested at runtime.
So if I specify all three command-line arguments, I always get that error.
I have attempted using OsStr and casting, but the issue remains. I am simply compiling the app with cargo build --release.
Have you just been adding stuff I may have overlooked while simplifying for posting, or did you actually change something, please?
I have changed it to use std::path::PathBuf and it's working fine for me now.
Huge thanks for the friendly hint and for looking at my code.
LiveATC Recordings | LiveATC.net
LiveATC.net Recordings – interesting air traffic communications captured by LiveATC users. www.liveatc.net
GitHub - michaldziwisz/sara: Simple Accessible Radio Automation
Simple Accessible Radio Automation. Contribute to michaldziwisz/sara development by creating an account on GitHub.
If you are into #tranceMusic #uplifting #vocalTrance, have a listen to these stunning tracks:
Paipy & Elles de Graaf - The Last Time
Driftmoon X XiJaro & Pitch - Rise Again
RAM & Arctic Moon & Stine Grove - A Billion Stars Above
VOCAL TRANCE: Paipy & Elles de Graaf - The Last Time [Amsterdam Trance] + LYRICS
For more Trance: https://RazNitzan.lnk.to/RNMSpotify Subscribe to our YouTube channel: @RazNitzanMusic Download or stream: https://RazNitzan.lnk.to/TheLastTim...
Do you know that you can use Subtitle Edit to transcribe audio? It has a relatively accessible GUI, so you can use Purfview's Faster-Whisper XXL, whisper.cpp, whisper.cpp cuBLAS, or Const-me. A longer post on how to use it follows:
Installing Subtitle Edit
Download the program from the developer’s website. Navigate to the level 2 heading labeled “Files.”
If you want to install Subtitle Edit normally, download the first file, labeled setup.zip.
There is also a portable version available, labeled SE_version_number.zip.
If you decide to use the portable version, extract it and move on to the next section of this article. The installation itself is standard and straightforward.
A Note on Accessibility
NVDA does not automatically report the focused item in lists.
To find out which item is currently selected, move down with the arrow key to change the item, then press NVDA+Tab to hear which one has focus.
Initial Setup
- In the menu bar, go to Video and activate Audio to text (Whisper).
- When using this feature for the first time, the program may ask whether you want to download FFMPEG. This library allows Subtitle Edit to open many audio and video files, so confirm the download by pressing Yes.
- Subtitle Edit will confirm that FFMPEG has been downloaded and then ask whether you want to download Purfview's Faster-Whisper XXL. This is the interface for the Whisper model that we’ll use for transcription, so again confirm by pressing Yes.
- The download will take a little while.
- Once it’s complete, you’ll see the settings window. Press Tab until you reach the Languages and models section. In the list, select the language of your recording.
- Press Tab to move to the Select model option, and then again to an unlabeled button.
- After activating it, choose which model you want to use. Several models are available:
- Small models require less processing power but are less accurate.
- Large models take longer to transcribe, need more performance and disk space, but are more accurate.
I recommend choosing Large-V3 at this step.
- Wait again for the model to finish downloading.
Transcribing Your First Recording
- Navigate to the Add button and press Space to activate it.
- A standard file selection dialog will open. Change the file type to Audio files, find your audio file on the disk, and confirm.
- Activate the Generate button.
- Now, simply wait. The Subtitle Edit window doesn’t provide much feedback, but you can tell it’s working by the slower performance of your computer—or, if you’re on a laptop, by the increased fan noise.
- When the transcription is done, Subtitle Edit will display a new window with an OK button.
We Got Subtitles, So One More Step
In the folder containing your original file, you’ll now find a new file with the .srt extension.
This is a subtitle file—it contains both the text and the timing information. Since we usually don’t need timestamps for transcription, we’ll remove them in Subtitle Edit as follows:
- Press Ctrl+O (or go to File → Open) to bring up the standard open file dialog. Select the .srt file you just got.
- In the menu bar, open File → Export → Plain text.
- Choose Merge all lines, and leave Show line numbers and Show timecode unchecked.
- Press Save as and save the file normally.
If you’re transcribing multiple recordings, it’s a good idea to close the current subtitle file by starting a new project using Ctrl+N or by choosing File → New.
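The "Plain text" export above essentially drops the cue numbers and timing lines from the .srt file and merges the dialogue. A rough sketch of that transformation, to illustrate what the SRT format contains (the `srt_to_plain_text` helper is hypothetical, not Subtitle Edit's actual code, and it would also drop a subtitle line that consists solely of a number):

```rust
// Hypothetical sketch: strip SRT cue numbers and timing lines,
// keeping only the spoken text, merged onto one line.
fn srt_to_plain_text(srt: &str) -> String {
    srt.lines()
        .filter(|line| {
            let l = line.trim();
            !l.is_empty()                    // drop blank cue separators
                && !l.contains("-->")        // drop "00:00:01,000 --> 00:00:04,000" timing lines
                && l.parse::<u32>().is_err() // drop cue index lines like "1", "2", ...
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let srt = "1\n00:00:01,000 --> 00:00:03,000\nHello there.\n\n\
               2\n00:00:03,500 --> 00:00:05,000\nSecond line.\n";
    assert_eq!(srt_to_plain_text(srt), "Hello there. Second line.");
    println!("ok");
}
```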
Conclusion
Downloaded models can, of course, be reused, so future transcriptions will go faster.
In this example, I used Purfview's Faster-Whisper. If you want to use a different model, you can select it from the model list, and Subtitle Edit will automatically ask whether you’d like to download it.
Still I like it.
Thanks for sharing!
For the last three months I have been using VDO Ninja for all my remote interview and podcast recordings. Here is my article about it from a blind person's perspective, focused on accessibility and audio.
Have You Ever Wanted to Record an Interview or Podcast Online? You’ve probably faced a few challenges:
How to transmit audio in the highest possible quality?
How to connect in a way that doesn’t burden your guest with installing software?
And how to record everything, ideally into separate tracks?
The solution to these problems is offered by the open-source tool VDO Ninja.
What Is VDO Ninja
It’s an open-source web application that uses WebRTC technology. It allows you to create a P2P connection between participants in an audio or video call and gives you control over various transmission parameters.
You can decide whether the room will include video, what and when will be recorded, and much more.
In terms of accessibility, the interface is fairly easy to get used to — and all parameters can be adjusted directly in the URL address when joining.
All you need is a web browser, either on a computer or smartphone.
Getting Started
The basic principle is similar to using MS Teams, Google Meet, and similar services.
All participants join the same room via a link.
However, VDO Ninja distinguishes between two main types of participants: Guests and the Director.
While the guest has limited control, the director can, for example, change the guest’s input audio device (the change still must be confirmed by the guest).
A Few Words About Browsers
VDO Ninja works in most browsers, but I’ve found Google Chrome to be the most reliable.
Firefox, for some reason, doesn’t display all available audio devices, and when recording multiple tracks, it refuses to download several files simultaneously.
Let’s Record a Podcast
Let’s imagine we’re going to record our podcast, for example, Blindrevue.
We can connect using a link like this:
https://vdo.ninja/?director=Blindrevue&novideo=1&proaudio=1&label=Ondro&autostart=1&videomute=1&showdirector=1&autorecord&sm=0&beep
Looking at the URL more closely, we can see that it contains some useful instructions:
- director – Defines that we are the director of the room, giving us more control. The value after the equals sign is the room name.
- novideo – Prevents video from being transmitted from participants. This parameter is optional but useful when recording podcasts to save bandwidth.
- proaudio – Disables effects like noise reduction, echo cancellation, automatic gain control, compression, etc., and enables stereo transmission. Be aware that with this setting you should use headphones: echo cancellation is disabled, so participants would otherwise hear themselves.
- label=Ondro – Automatically assigns me the nickname “Ondro.”
- autostart – Starts streaming immediately after joining, skipping the initial setup dialog.
- videomute – Automatically disables the webcam.
- showdirector – Displays our own input control panel (useful if we want to record ourselves).
- autorecord – Automatically starts recording for each participant as they join.
- sm=0 – Ensures that we automatically hear every new participant without manually unmuting them.
- beep – Plays a sound and sends a system notification when new participants join (requires notification permissions).
For guests, we can send a link like this:
https://vdo.ninja/?room=Blindrevue&novideo=1&proaudio=1&label&autostart=1&videomute=1&webcam
Notice the differences:
- We replaced director with room. The value must remain the same, otherwise the guest will end up in a different room.
- We left label empty — this makes VDO Ninja ask the guest for a nickname upon joining. Alternatively, you can send personalized links, e.g., label=Peter or label=Marek.
- The webcam parameter tells VDO Ninja to immediately stream audio from the guest’s microphone; otherwise, they’d need to click “Start streaming” or “Share screen.”
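Since the director and guest links only differ in a few parameters, it can be handy to generate them programmatically. A small sketch mirroring the example links above (the `director_url` and `guest_url` helpers are hypothetical, and the parameter set is just the one used in this article):

```rust
// Hypothetical helpers: assemble VDO Ninja director and guest links
// from the same room name, using the parameters discussed above.
fn director_url(room: &str, label: &str) -> String {
    format!("https://vdo.ninja/?director={room}&novideo=1&proaudio=1&label={label}&autostart=1&videomute=1&showdirector=1&autorecord&sm=0&beep")
}

fn guest_url(room: &str) -> String {
    // `label` is left empty so VDO Ninja asks the guest for a nickname.
    format!("https://vdo.ninja/?room={room}&novideo=1&proaudio=1&label&autostart=1&videomute=1&webcam")
}

fn main() {
    let d = director_url("Blindrevue", "Ondro");
    let g = guest_url("Blindrevue");
    // The director and guest links must share the same room value,
    // otherwise the guest ends up in a different room.
    assert!(d.contains("director=Blindrevue"));
    assert!(g.contains("room=Blindrevue"));
    println!("{d}\n{g}");
}
```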
How to Join
Simply open the link in a browser.
In our case, the director automatically streams audio to everyone else.
Participants also join by opening their link in a browser.
If a nickname was predefined, they’ll only be asked for permission to access their microphone and camera.
Otherwise, they’ll also be prompted to enter their name.
Usually, the browser will display a permission warning.
Press F6 to focus on it, then Tab through available options and allow access.
Controls
The page contains several useful buttons:
- Text chat – Toggles the text chat panel, also allows sending files.
- Mute speaker output – Mutes local playback (others can still hear you).
- Mute microphone – Mutes your mic.
- Mute camera – Turns off your camera (enabled by default in our example).
- Share screen / Share website – Allows screen or site sharing.
- Room settings menu (director only) – Shows room configuration options.
- Settings menu – Lets you configure input/output devices.
- Stop publishing audio and video (director only) – Stops sending audio/video but still receives others.
Adjusting Input and Output Devices
To change your audio devices:
- Activate Settings menu.
- Press C to jump to the camera list — skip this for audio-only.
- Open Audio sources to pick a microphone.
- In Audio output destination, select your playback device. Press the Test button to check it.
- Close settings when done.
Director Options
Each guest appears as a separate landmark on the page.
You can navigate between them quickly (e.g., using D with NVDA).
Useful controls include:
- Volume slider – Adjusts how loud each participant sounds (locally only).
- Mute – Silences a guest for everyone.
- Hangup – Disconnects a participant.
- Audio settings – Adjusts their audio input/output remotely.
Adjusting Guest Audio
Under Audio settings, you can:
- Enable/disable filters (noise gate, compressor, auto-gain, etc.).
- View and change the guest’s input device — if you change it, a Request button appears, prompting the guest to confirm the change.
- Change the output device, useful for switching between speaker and earpiece on mobile devices.
Recording
Our URL parameters define automatic recording for all participants.
Recordings are saved in your Downloads folder, and progress can be checked with Ctrl+J.
Each participant’s recording is a separate file.
For editing, import them into separate tracks in your DAW and synchronize them manually.
VDO Ninja doesn’t support single-track recording, but you can use Reaper or APP2Clap with a virtual audio device.
To simplify synchronization:
- Join as director, but remove autorecord from the URL.
- Wait for everyone to join and check audio.
- When ready, press Alt+D to edit the address bar.
- Add &autorecord back, reload the page, and confirm rejoining.
- Recording now starts simultaneously for everyone.
- Verify this in your Downloads folder.
Manual Recording
To start recording manually:
- Open Room settings menu.
- Go to the Room settings heading.
- Click Local record – start all.
- Check PCM recording (saves WAV uncompressed).
- Check Audio only (records sound without video).
- Click Start recording.
Important Recording Notes
- Always verify that all guest streams are recording.
- To end recordings safely, click Hangup for each guest or let them leave.
- You can also toggle recording for each guest under More options → Record.
- Files are saved as WEBM containers. If your editor doesn’t support it, you can convert them using the official converter.
- Reaper can open WEBM files but may have editing issues — I prefer importing the OPUS audio file instead.
Recommended Reading
In this article, I’ve covered only a few features and URL parameters.
For more details, check the VDO Ninja Documentation.
To use it, just select multiple files and find the “Move to new folder” item in the Shift+F10 context menu.
To use it, just select multiple files and find the “Rename” item in the Shift+F10 context menu, or simply press F2. Also... don’t be shy to press the Add button in the batch rename dialog.
Yeah, I've updated my @Arch Linux to @GNOME 49.
There are some nifty #a11y related tweaks, such as better labelling for GNOME Shell menus and a refreshed Settings UI. I like how various lists, e.g. the list of wireless networks, are presented with a screen reader, including signal strength.
Thanks to everyone involved for the improvements.
Hello @GrapheneOS screen-reader users and other #a11y friends,
There was an interesting debate going on at the end of May where screen reader users were asking for a #tts engine included with the GrapheneOS base system.
grapheneos.social/@GrapheneOS/…
I understand this is very unlikely to change in the near future as I am not aware of a TTS system that is open-source and modern enough to be included.
@Accessible Android has a list of TTS engines sorted by language at this page: accessibleandroid.com/list-of-…
Apart from eSpeak-ng and RHVoice, there is another open-source app called SherpaTTS that can use Piper TTS and Coqui-based voices: github.com/woheller69/ttsEngin…
Including eSpeak-ng, RHVoice, SherpaTTS and the list of TTS engines mentioned by Accessible Android, is there a viable TTS engine, or at least one close enough to being viable, to get included in the foreseeable future?
Another approach I have been thinking about is to add / inject the TTS app, or any other app I'd like, as part of the install process. It turns out I am not the only one speculating about that idea, and it's neither practical nor feasible, as it also breaks the security model.
It's been discussed recently at: discuss.grapheneos.org/d/25899…
Another way to install an app on an Android device would be using adb install from a computer. I am not entirely sure about this, but GrapheneOS does not allow enabling ADB on production builds. In order to install a TTS app over ADB, we'd need to find a way to install GrapheneOS with ADB pre-enabled on first run. That would be a huge security hole as well.
There might be a way to build my own flavour of GrapheneOS, but that's too involved: I'd need a very powerful machine for the actual build process, and I would again compromise security by either disabling future updates or handling them on my own, building each new release myself.
So, given the current state, I am afraid we screen reader users are out of luck, and there is no way to get this running on my own without help from someone else.
The end result is that I'll either get security, or I can look elsewhere to get accessibility.
Am I getting this right, please, or might I have overlooked something that could help me install GrapheneOS on my own?
Thanks for reading to the end
One of our full time developers is actively working on building our own text-to-speech and speech-to-text integration. It's where all of their effort is currently going. None of the available apps are suitable for inclusion. None are modern enough aside from Sherpa and it has issues including high latency making it unsuitable for use with TalkBack. Our own implementation is going to be significantly better.
ADB works fine on GrapheneOS but you'd have to enable it.
@GrapheneOS Thanks for the positive info and the nice-sounding, prompt reply.
Now I need to make up my mind whether I should find someone else to install the current release and a TTS for me, or use something else I can tinker with, such as Lineage, in the meantime.
Huge thanks
What is there apart from #SherpaTTS that is fast and supports many languages?
What components do you want to replace? My only issue currently is the need for multilingual models (German-English); otherwise it is unusable for me.
I have asked a friend and @GrapheneOS community chat members for the help with initial setup and now I am fully configured with RHVoice as my current TTS of choice.
Apart from one GPS navigation app I am used to, everything is working fine for me, including proprietary stuff for my work like Microsoft Teams; banking apps including Poštová banka, George and Revolut; and the other apps I like, such as Bitwarden as a password manager; Arcanechat, Conversations, ElementX, FairEmail and OpenKeychain for chatting and emailing; Antennapod, BubbleUPnP, Foobar2000, Kore, Voice, NewPipe and ytdlnis for podcasts, music, audiobooks and videos; Catima for loyalty cards and tickets; and some other apps. For downloading apps I am mainly using F-Droid and the Aurora Store. I am not signed into a Google account, but I am using Play services for push notifications and for other compatibility reasons for apps which need them.
Thanks for everything you are doing; it's fantastic and I like it very much.
All location-based apps should work, but some may expect network location to be available which it isn't by default. You can enable Network location and Wi-Fi scanning in Settings > Location > Location services if you want network location without needing to use Google Play for location. See grapheneos.org/features#networ….
If you installed apps before sandboxed Google Play and they depend on it, you may need to reinstall the apps depending on it so they detect it properly.
GrapheneOS features overview
Overview of GrapheneOS features differentiating it from the Android Open Source Project (AOSP).
I've just noticed piped.video can still be used for playing videos. It's just that the public instance at piped.video and some other instances require registration.
@Archos and friends, have you explored ways to eventually host it at @Oscloud, please?
I'd host it myself, but I don't have a spare machine at a location with a suitable IPv6 range to be somewhat resilient to YouTube throttling attempts.
Thanks for considering
We're not currently planning to replace it on Oscloud, but Piped seems like a better option – maybe in the future.
Fun project for a Friday evening. I am hosting some unmaintained websites for a few friends, and they help me pay the hosting costs in return.
Now I have found out that I need PHP 7.4 for some of them, and since it's not readily available, I am building PHP 7.4.33 on up-to-date Arch Linux.
I am afraid this will no longer be possible in the future. How do you deal with this? Can I run PHP in some kind of lightweight container?
The more I look through @Delta Chat apps and resources, the more I believe it should become the number one messenger of choice for screen reader users.
The developers are constantly improving its #a11y. It's secure from the moment you start using it.
Additionally, the desktop client has undergone an #accessibility audit, and the accessibility issues are clearly documented in public.
I am not sure any other messenger-style app on the planet has ever had such a dedicated commitment to accessibility.
github.com/deltachat/deltachat…
Issues from Accessibility audit · Issue #4743 · deltachat/deltachat-desktop
We recently got an accessibility quick scan from HAN. They took some time to discuss/test the app with us in a call and gave us a report. treefit and wofwca also made notes during the call, this is...
The #chatmail based onboarding is really very simple; there is nothing you can do wrong.
My next mission is getting the location streaming to work and play with some realtime apps.
Hmm, encoding #braille into musical tones has recently been featured in #braille200. I think it's a nice bit of fun. Still, I'm wondering whether some of you might be able to understand it in real time.