Oops, forgot to let our followers here know that we released 0.87.1 last week! This version includes a fix for a small bug that crept into 0.87.0 that prevented syncing of weightlifting workouts on Garmins.

But the bigger news is that we also managed to get reproducible builds at F-Droid in place in time for this release!

As usual, more details in our blog post: gadgetbridge.org/blog/release-…

in reply to Gadgetbridge

You mention that you're publishing both your self-signed and the F-Droid signed build on F-Droid? Do you have more details about how that works and what you had to do to set that up?

I've wanted to have Catima be RB on F-Droid without breaking existing updates for quite a while, but I didn't really manage to make any progress when trying to talk to the @fdroidorg team, so I'd love to know how you got this working :)

We're looking for interesting questions around @matrix, its history, its technology, statistics and fun facts for The #MatrixConf2025 Pub Quizzz!

Do you have suggestions? Please share them with the conference team in the following form: forms.gle/6tbry4Zdzb1fYVfx5 or contact us at #events-wg:matrix.org


It's always funny to see people who say "I did this with the help of AI because no one else seems to have done it before, and I didn't know how to do it either, so I used AI for that."
Thing is, the fact that AI could do it for you basically means that it has been done before and AI trained on it.
What you actually wanted to say is: "I spent some time rebuilding someone else's work because I wasn't able to find it on Google."
I know this is overdramatized, but also not totally wrong.

Matt Campbell reshared this.

in reply to Toni Barth

You are partially correct, but this is an oversimplification of how an AI model, for example an LLM, works. It can, and does, use data that it got during its training phase, but that's not the entire story; otherwise it'd be called a database that regurgitates what it was trained on. On top of the trained data there's zero-shot learning, for example figuring out a dialect of a language it hasn't been trained on based on the statistical weights it did learn, and it can combine existing patterns into new patterns, coming up with new things, which is arguably part of creativity.

What it can't do, though, and this is very likely what you mean, is go outside its pre-trained patterns. For example, if you have a model that was trained on dragons and another that was trained on motorcycles, combining the two can produce a story where a dragon rides a motorcycle, even though that story was never part of the training data. What it can't do is come up with a new programming language, because that specific pattern does not exist. So the other part of creativity, where you'd think outside the box, is a no-go. But a lot of people's boxes are different, and they are very likely not as vast as what the models were trained on, and that's how an AI model can be inspiring.

This is why a lot of composers feel that AI is basically going to take over eventually, because they will have such a vast amount of patterns that a director, trailer library editor, or other content creator will be satisfied with the AI's results. The model's box will be larger than any human's.

reshared this

in reply to Erion

@erion @menelion Most of the generative capabilities of an LLM come from linear algebra (interpolation), and statistical grammar compression. We can bound the capabilities of a model by considering everything that can be achieved using these tools: I've never seen this approach overestimate what a model is capable of.

"Zero-shot learning" only works as far as the input can be sensibly embedded in the parameter space. Many things, such as most mathematics, can't be viewed this way.

in reply to wizzwizz4

It never will, because modern LLMs are far more capable.

They rely on non-linear activation functions (like ReLU, GELU, etc.) after the linear transformations. If the network were purely linear, it could only learn linear relationships, regardless of its depth. The non-linearities are what allow the network to learn complex, non-linear mappings and interactions between inputs.

There's also scaling, arguably an internal world model, being context-aware (which is definitely not something linear). If anything, this would underestimate a model.
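
To make the point about activations concrete, here's a toy numpy sketch (illustration only; the layer sizes are made up and nothing like a real transformer): two stacked linear layers with no activation collapse into a single linear map, while putting a ReLU between the same two layers lets them compute a non-linear function like |x|.

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 1))  # first "layer"
W2 = rng.normal(size=(1, 4))  # second "layer"
x = np.linspace(-2, 2, 5).reshape(1, -1)

# No activation: W2 @ (W1 @ x) equals (W2 @ W1) @ x, i.e. one linear map.
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))  # True, depth added nothing

# With a ReLU in between, the same shapes can express |x| = relu(x) + relu(-x),
# which no single linear map can represent.
relu = lambda z: np.maximum(z, 0)
W1_abs = np.array([[1.0], [-1.0]])
W2_abs = np.array([[1.0, 1.0]])
print(W2_abs @ relu(W1_abs @ x))  # [[2. 1. 0. 1. 2.]]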

reshared this

in reply to Erion

@erion @menelion I'm aware that models are non-linear functions, but they operate over elements of a linear ("vector") space. Each layer can be viewed as a non-linear map between vector spaces. Think "dog is to cat as puppy is to ???": given a suitable embedding, that's a linear algebra problem. This is responsible for most of the observed "intelligence" of LLMs, and for phenomena like vgel.me/posts/seahorse/.
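
For what I mean by "a linear algebra problem", here's a toy sketch with made-up 4-dimensional vectors standing in for learned embeddings (real embeddings have hundreds or thousands of dimensions and are learned, not hand-written; only the arithmetic is the point):

import numpy as np

# made-up dimensions: [cat-family, dog-family, young, animal]
vocab = {
    "dog":    np.array([0.0, 1.0, 0.0, 1.0]),
    "cat":    np.array([1.0, 0.0, 0.0, 1.0]),
    "puppy":  np.array([0.0, 1.0, 1.0, 1.0]),
    "kitten": np.array([1.0, 0.0, 1.0, 1.0]),
    "horse":  np.array([0.0, 0.0, 0.0, 1.0]),
    "foal":   np.array([0.0, 0.0, 1.0, 1.0]),
}

# apply the "dog -> cat" offset to "puppy"
query = vocab["cat"] - vocab["dog"] + vocab["puppy"]

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

answer = max((w for w in vocab if w not in ("dog", "cat", "puppy")),
             key=lambda w: cosine(query, vocab[w]))
print(answer)  # kitten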
in reply to wizzwizz4

If you think of how the self-attention mechanism dynamically and non-linearly re-weights every input vector based on its full context, essentially setting up the relationships needed for things like chain-of-thought reasoning, planning and deep contextual understanding, you can't reduce a model's intelligence to mere vector arithmetic. In the past that was absolutely true and you could rely on it, no problem, but nowadays models go far beyond that, with at least a hundred or more layers. That's why I said this approach will always underestimate a model. If you look at the smaller models that came out in the last year alone, they should be far less capable than they really are by that kind of estimate.

reshared this

in reply to Toni Barth

It's not totally wrong, but I feel like maybe it's a slight oversimplification. LLMs don't just outright copy the training data, that's why it's called generative AI. That doesn't mean they will never reproduce anything in the training set, but they are very good at synthesizing multiple concepts from that data and turning them into something that technically didn't exist before.

If you look at something like Suno, which is using an LLM architecture under the hood, you're able to upload audio and have the model try to "cover" that material. If I upload myself playing a chord progression/melody that I made up, the model is able to use its vast amount of training data to reproduce that chord progression/melody in whatever style.

It would be really important for everyone to read about the theory of appeasement and how it has *never* worked.

--

The catastrophes of World War II and the Holocaust have shaped the world’s understanding of appeasement. The diplomatic strategy is often seen as both a practical and a moral failure.

Today, based on archival documents, we know that appeasing Hitler was almost certainly destined to fail. Hitler and the Nazis were intent upon waging an offensive war and conquering territory. But it is important to remember that those who condemn Chamberlain often speak with the benefit of hindsight. Chamberlain, who died in 1940, could not possibly have foreseen the scale of atrocities committed by the Nazis and others during World War II.

---

We have the hindsight today. Let's not make the same mistakes.

encyclopedia.ushmm.org/content…

in reply to Rui Batista

I would say that when an ordinary citizen can no longer explain how the votes in an election are counted, there is no longer any room for shame in democracy, because democracy no longer exists. That is the risk of electronic voting. We need to find a solution so that those who can't see can also vote, but that solution cannot put democracy at risk.
in reply to Chris 🌱

I had the same thought as I'm currently in the process of choosing a vacuum. Wanted to go for a robot, but decided I'd still need to clean manually once a week, so it's probably best to start with a manual thing. Thinking about a wet-dry vacuum so I don't have to wipe the floor separately, but then again — I have two large carpets. No clue what I'm gonna do, but I sure am excited.

For the last 3 months I have been using VDO Ninja for all my remote interview and podcast recordings. Here is my article about it from a blind perspective, focused on accessibility and audio.

Have You Ever Wanted to Record an Interview or Podcast Online? You’ve probably faced a few challenges:
How to transmit audio in the highest possible quality?
How to connect in a way that doesn’t burden your guest with installing software?
And how to record everything, ideally into separate tracks?

The solution to these problems is offered by the open-source tool VDO Ninja.

What Is VDO Ninja


It’s an open-source web application that uses WebRTC technology. It allows you to create a P2P connection between participants in an audio or video call and gives you control over various transmission parameters.
You can decide whether the room will include video, what and when will be recorded, and much more.

In terms of accessibility, the interface is fairly easy to get used to — and all parameters can be adjusted directly in the URL address when joining.
All you need is a web browser, either on a computer or smartphone.

Getting Started


The basic principle is similar to using MS Teams, Google Meet, and similar services.
All participants join the same room via a link.
However, VDO Ninja distinguishes between two main types of participants: Guests and the Director.
While the guest has limited control, the director can, for example, change the guest’s input audio device (the change still must be confirmed by the guest).

A Few Words About Browsers


VDO Ninja works in most browsers, but I’ve found Google Chrome to be the most reliable.
Firefox, for some reason, doesn’t display all available audio devices, and when recording multiple tracks, it refuses to download several files simultaneously.

Let’s Record a Podcast


Let’s imagine we’re going to record our podcast, for example, Blindrevue.
We can connect using a link like this:

https://vdo.ninja/?director=Blindrevue&novideo=1&proaudio=1&label=Ondro&autostart=1&videomute=1&showdirector=1&autorecord&sm=0&beep

Looking at the URL more closely, we can see that it contains some useful instructions:
  • director – Defines that we are the director of the room, giving us more control. The value after the equals sign is the room name.
  • novideo – Prevents video from being transmitted from participants. This parameter is optional but useful when recording podcasts to save bandwidth.
  • proaudio – Disables effects like noise reduction, echo cancellation, automatic gain control, compression, etc., and enables stereo transmission.
    Be aware that with this setting, you should use headphones, as echo cancellation is disabled, and otherwise, participants will hear themselves.
  • label=Ondro – Automatically assigns me the nickname “Ondro.”
  • autostart – Starts streaming immediately after joining, skipping the initial setup dialog.
  • videomute – Automatically disables the webcam.
  • showdirector – Displays our own input control panel (useful if we want to record ourselves).
  • autorecord – Automatically starts recording for each participant as they join.
  • sm=0 – Ensures that we automatically hear every new participant without manually unmuting them.
  • beep – Plays a sound and sends a system notification when new participants join (requires notification permissions).

For guests, we can send a link like this:

https://vdo.ninja/?room=Blindrevue&novideo=1&proaudio=1&label&autostart=1&videomute=1&webcam

Notice the differences:
  • We replaced director with room. The value must remain the same, otherwise the guest will end up in a different room.
  • We left label empty — this makes VDO Ninja ask the guest for a nickname upon joining.
    Alternatively, you can send personalized links, e.g., label=Peter or label=Marek.
  • The webcam parameter tells VDO Ninja to immediately stream audio from the guest’s microphone; otherwise, they’d need to click “Start streaming” or “Share screen.”
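
Putting the two examples together, here's a small sketch (Python) for generating the director link and personalized guest links. The parameter names are just the ones shown above (check the VDO Ninja documentation for the full list), and build_link is only a throwaway helper, not part of VDO Ninja.

BASE = "https://vdo.ninja/"

def build_link(params):
    # VDO Ninja accepts bare flags (autorecord, beep, webcam) as well as
    # key=value pairs, so join them by hand instead of using urlencode.
    parts = [key if value is None else f"{key}={value}"
             for key, value in params.items()]
    return BASE + "?" + "&".join(parts)

common = {"novideo": 1, "proaudio": 1, "autostart": 1, "videomute": 1}

director = build_link({"director": "Blindrevue", **common, "label": "Ondro",
                       "showdirector": 1, "autorecord": None, "sm": 0,
                       "beep": None})

guest_links = [build_link({"room": "Blindrevue", **common, "label": name,
                           "webcam": None})
               for name in ("Peter", "Marek")]

print(director)
print("\n".join(guest_links))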


How to Join


Simply open the link in a browser.
In our case, the director automatically streams audio to everyone else.
Participants also join by opening their link in a browser.
If a nickname was predefined, they’ll only be asked for permission to access their microphone and camera.
Otherwise, they’ll also be prompted to enter their name.

Usually, the browser will display a permission warning.
Press F6 to focus on it, then Tab through available options and allow access.

Controls


The page contains several useful buttons:

  • Text chat – Toggles the text chat panel, also allows sending files.
  • Mute speaker output – Mutes local playback (others can still hear you).
  • Mute microphone – Mutes your mic.
  • Mute camera – Turns off your camera (enabled by default in our example).
  • Share screen / Share website – Allows screen or site sharing.
  • Room settings menu (director only) – Shows room configuration options.
  • Settings menu – Lets you configure input/output devices.
  • Stop publishing audio and video (director only) – Stops sending audio/video but still receives others.


Adjusting Input and Output Devices


To change your audio devices:

  1. Activate Settings menu.
  2. Press C to jump to the camera list — skip this for audio-only.
  3. Open Audio sources to pick a microphone.
  4. In Audio output destination, select your playback device. Press the Test button to test it.
  5. Close settings when done.


Director Options


Each guest appears as a separate landmark on the page.
You can navigate between them quickly (e.g., using D with NVDA).

Useful controls include:

  • Volume slider – Adjusts how loud each participant sounds (locally only).
  • Mute – Silences a guest for everyone.
  • Hangup – Disconnects a participant.
  • Audio settings – Adjusts their audio input/output remotely.


Adjusting Guest Audio


Under Audio settings, you can:

  • Enable/disable filters (noise gate, compressor, auto-gain, etc.).
  • View and change the guest’s input device — if you change it, a Request button appears, prompting the guest to confirm the change.
  • Change the output device, useful for switching between speaker and earpiece on mobile devices.


Recording


Our URL parameters define automatic recording for all participants.
Recordings are saved in your Downloads folder, and progress can be checked with Ctrl+J.

Each participant’s recording is a separate file.
For editing, import them into separate tracks in your DAW and synchronize them manually.
VDO Ninja doesn’t support single-track recording, but you can use Reaper or APP2Clap with a virtual audio device.

To simplify synchronization:

  1. Join as director, but remove autorecord.
  2. Wait for everyone to join and check audio.
  3. When ready, press Alt+D to edit the address bar.
  4. Add &autorecord, reload the page, and confirm rejoining.
  5. Recording now starts simultaneously for everyone.
  6. Verify this in your downloads.


Manual Recording


To start recording manually:

  1. Open Room settings menu.
  2. Go to the Room settings heading.
  3. Click Local record – start all.
  4. Check PCM recording (saves WAV uncompressed).
  5. Check Audio only (records sound without video).
  6. Click Start recording.


Important Recording Notes


  • Always verify that all guest streams are recording.
  • To end recordings safely, click Hangup for each guest or let them leave.
  • You can also toggle recording for each guest under More options → Record.
  • Files are saved as WEBM containers. If your editor doesn’t support it, you can convert them using the official converter.
  • Reaper can open WEBM files but may have editing issues — I prefer importing the OPUS audio file instead.
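
If your editor can't open WEBM at all and you'd rather convert locally instead of using the online converter, here is a minimal sketch (Python calling ffmpeg, which has to be installed separately; it assumes the recordings landed in your Downloads folder):

import pathlib
import subprocess

downloads = pathlib.Path.home() / "Downloads"

for src in downloads.glob("*.webm"):
    dst = src.with_suffix(".wav")
    # -vn drops any video stream; the OPUS audio is decoded to uncompressed WAV.
    subprocess.run(["ffmpeg", "-i", str(src), "-vn", str(dst)], check=True)
    print("converted", src.name, "->", dst.name)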


Recommended Reading


In this article, I’ve covered only a few features and URL parameters.
For more details, check the VDO Ninja Documentation.

reshared this

Announcing AudioCapture, a Win32 application to capture audio from a process and save it to an audio file. Full disclosure: this was written with Claude Code. Why? Because I'm not an experienced C++ programmer, but I saw an idea for an app and no one else was going to write it, so I did it myself this way. The full code is available, so if you wish to contribute, feel free. Download: github.com/masonasons/AudioCap… Code: github.com/masonasons/AudioCap…

reshared this

LB: SaaS is real! it means it's run at a large enough scale that someone who gives remotely a shit carries a pager and fixes it and remotely maintains it

we live in a society, the problem is abusive tech companies running non commodity services with no data portability or real regulations in general. the solution is not computer prepperism (self hosting)! we live in a society.

hyperindividualism temptations come from society not working properly


For those new Fedizens arriving from #Bluesky, here’s a little introduction to our lord and saviour, John Mastodon.

And remember to always, and I mean always, add #AltText to your images!

John Mastodon (ft. Andre Louis[@Onj])
~ Dgar

“When the darkness fell on blue
In the year of twenty-two
The chanting started to ring true
The call went out to me and you
Join Mastodon
Join Mastodon

How could they have ever known
Their words were forming sacred tones
The ancient forces in the stone
The summoning of flesh and bone
John Mastodon
John Mastodon

John Mastodon, they love their mum
They have alt-text written on
The tattoo on their arm
Of a hairy pachyderm
John Mastodon
John Mastodon

Join Mastodon x8

The chants grew loud as we watched on
The coming of the chosen one
They emerged triumphant from
The ancient portal of Gargron
John Mastodon
John Mastodon

Linking people across the earth
They lead them to the Fediverse
Their admin army show their worth
Shouting loudly in their mirth
Join Mastodon
Join Mastodon

John Mastodon, they’re the one
Who takes the corporate socials on
They give their code to everyone
For every platform you might run
John Mastodon
Join Mastodon

Join Mastodon x8

Sweating blood and guts and tears
And fighting bots and billionaires
An artist and engineer
With indie songs in his ears
John Mastodon
John Mastodon

Ditch the birds and book of faces
Leave behind the corporations.
Tooting old computer cases
Open source on all our bases
Join Mastodon
Join Mastodon

John Mastodon knows all the tricks
Of every distro of Linux
They stay engaged in politics
They fought an army of Fediverse chicks
John Mastodon
John Mastodon”

dgar.bandcamp.com/track/john-m…

#DgarMusic #DgarRadio #Indie #Music #Bandcamp #JohnMastodon

reshared this

in reply to Andre Louis

@FreakyFwoof @jcsteh @KaraLG84 It's one of those weird ones where I know most people will like/subscribe if I don't say it, but then, it's become such a staple that I do wonder how much it's hurting creators who don't give out the reminder. To what degree you want people who need to be told to subscribe in order to actually do it is another matter, but particularly for small creators I don't think beggars get to be choosers that much :)
See also, why are subscribing and pressing the notification bell two different actions anyway :P
in reply to Florian

@jcsteh @KaraLG84 The other thing pressing 'Like' does is fine-tune the video choices YouTube will feed you, so you hurt both yourself and the creator by not using it, unless you're happy with AI slop and not much else as your next recommendation.
I said weeks ago on here that my algorithm on youtube is so extremely finely dialed in, that 95% of the next videos that come up are things I would choose to watch. That is because I press Like on so, so many videos. It genuinely does matter.
in reply to Andre Louis

@FreakyFwoof @jcsteh @KaraLG84 I think that is the insidious thing though ... it kinda does for some people. Like, sure, my feed has a bunch of garbage in it, but it also generally has things I like without me needing to scroll too much and I think a lot of people just consider that the standard way of things. Basically, it's not as good as it could be, but it's good enough. And if it's good enough, no need to change anything is there?
I forget, is there a hotkey to like videos? On Spotify I've grown fast friends with Shift+Alt+B :)
in reply to Andre Louis

@FreakyFwoof @jcsteh @KaraLG84 Meanwhile I just use foo_youtube over here, which can't even like and most of the time isn't signed in unless age restricted, and don't even touch the recommendations feed. But I'm the odd one out. I subscribe to a few channels but primarily just have some playlists I periodically look at with channels in them to view.
in reply to Florian

@jcsteh @KaraLG84 Good ratio here for example.
My NI video which is now out of date called 'What to install first' has 58K views, 607 likes.
My 'how to format your drive as APFS' has 44K views but 638 likes.
A slightly higher like-to-watch ratio, so that tells me that I must have gotten something correct with that.
It isn't just inane bullshit, you see?
Knowing that, it can help me to decide that hey, this kind of tutorial thing was worth it, should do more.
If it had that many views and only say 15 likes, I'd be more concerned.
in reply to Andre Louis

@FreakyFwoof @jcsteh @KaraLG84 Those tutorials that are actually well recorded and produced definitely do fill a niche, I think. I far prefer a YouTube video over a random MP3 in someone's dropbox where the speaker is only in the left track the entire time, the filename is clearly the Audio Hijack Pro default filename for a source, and the music is 5 times louder than the speaker :)

I was recently looking at Framework for a new x86 laptop, as I believed the company to be reasonably aligned with my values (e.g. pro-repair, pro-FOSS, pro-humanity). But others have warned me that they are now supporting Hyprland, Omarchy, etc. They support these projects led by people who hold alt-right views, in the name of building a “big tent” coalition.

The problem, however, is that building a “big tent” coalition, by design, requires some form of value alignment.

community.frame.work/t/framewo…

Alpine is a “big tent”, for example, but people who want to harm members of our community aren’t welcome.

This isn’t hard.

Needless to say, I won’t be buying a Framework laptop anytime soon, which makes me sad.

A poll, about microwaves!

Assuming you have a microwave oven, does it have a digital display and buttons, or a number of dials only? Just got a new microwave, with dials as I hate the digital beeps, and a friend was surprised that it had dials and not a screen. Over here I think mine is quite normal and common and it's the type I always get! The "microwave is done" sound also comes from a physical bell, which is nice.

Share for science, should you care to.

  • Buttons plus display (14%, 1 vote)
  • Buttons, no display (14%, 1 vote)
  • Dials plus display (28%, 2 votes)
  • Dials, no display (42%, 3 votes)
  • ✨ No microwave participation club ✨ (0%, 0 votes)
7 voters. Poll end: in 16 hours

in reply to Sini Tuulia

As a blind person, I find that most household appliances fall on a bimodal distribution when it comes to accessibility. You have the cheap stuff with dials and buttons, which is perfectly accessible, and you have the super expensive stuff with apps and smart speaker integrations (though EU law certainly doesn't do us any good here). The middle, with touch screens and no internet connectivity, is the most challenging of them all.
in reply to Mikołaj Hołysz

@miki Oh yeah, that makes sense. I prefer the tactile and simple mechanical things in everything but my phone and computer, and even then I prefer a clacky mechanical keyboard and physical mouse versus touch screen. Even my sewing machine is as old as I am, with nothing but dials!

I figure the speech-controlled ones are super nice when they work flawlessly, and an exceptional hassle to troubleshoot when they don't.

in reply to Sini Tuulia

There are some weird restrictions about that in EU law. I don't understand which devices this applies to specifically, but there are some restrictions on being able to remotely start devices which contain a heating element. Many devices that have an app only let you use it to adjust settings, but the start button has to be pushed on the (inaccessible) screen. Some gadgets don't have this restriction, so I don't really know when it applies and when it doesn't.

Time for serious questions on #Reform and #Russia

(In March, the New York Times reported that “one of the biggest corporate donors to the #populist #ReformUK party has sold almost $2million worth of transmitters, cockpit equipment, antennas and other sensitive technology to a major supplier of #Moscow’s #blacklisted state weapons agency”.)

heraldscotland.com/politics/vi…

I am collecting texts and resources on the "Haitian Revolution" for my Human Rights class and I thought I'd share a few things with you.
First of all, this Graphic Novel (online), which helps you understand what happened: "The Slave Revolution That Gave Birth to Haiti" thenib.com/haitian-revolution/

And this short article which also gives an overview - both written by historian Laurent Dubois, who is an expert on the field: aeon.co/essays/why-haiti-shoul…

#HumanRights #Haiti #HaitianRevolution #Slavery

in reply to daniel:// stenberg://

I've been thinking about your post a lot, especially after seeing such tools at my $dayjob. I'm biased due to their ethical impact, but even without it I consider them, on average, harmful. I write code, I make sure it works for my usage, I write tests, I run linters and static analysis, I do a peer review to share the knowledge and get external input. And then this thing, supposedly state of the art, goes over my code, mansplains it to me and finds either a false positive (I wonder who removed the false positives from the lists you've got?), or a nit ("don't forget to add an extra check here!", "The comment is stale!"), or a misguided optimisation possibly introducing new bugs. I spend lots of time thinking over these useless blanket reports that ultimately don't matter, because I have empirical evidence that my code works for my use case.

I have seen so-called AI tooling generating "helpful reports", but it cannot replace decent tooling and tests. And yet some people replace their LSPs with LLMs :/

in reply to Nina Kalinina

@nina_kali_nina All analyzer tools, including compilers, give a certain number of false positives. I don't think we should expect AI tools to be any different, as long as the frequency is manageable and there are decent ways to inhibit them.

The AI tools I've mentioned recently don't seem to produce many more false positives than the state-of-the-art static code analyzers we also use.

Asking in English in the hope that it'll reach more people! I'm French, a Windows user, and unfortunately not playing locally. Do you know of some software a deafie could use to game with her pals? One that could transcribe what people say into their mics? Not necessarily free, I'm willing to pay for something that works well.

Asking for a me.

Please, boost so a girl can play with her friends 🥰

#accessibility #Steam #gaming #discord #disability


reshared this

The word 'Vrede' jumped out at me from this 'Peace' installation. 'Vrede' is Danish for anger, fury, wrath. I wondered if it was an artistic provocation. But that seemed to leave too much to chance: that someone who happens to understand Danish would happen to see this German artwork. So I looked it up and learned that 'vrede' is Dutch for 'peace'.

Vrede. Peace in Dutch. Wrath in Danish. I wonder if there's a word for words like these, that mean the opposite in different languages.

pixelfed.social/p/Rudini/88129…