Items tagged with: PipeWire
After upgrading to #pipewire version 1.2.5, I am finally back to an audio system that works as well as (or better than!) my old hand-tweaked setup with JACK1 and PulseAudio.
Unlike that setup, this one is fully integrated: things using any of the various Linux audio APIs are all visible and routable to each other.
Latency is the same as or better than what I got with JACK1.
Truly excellent. Good job @pipewire
(and for those who don't know, I wrote JACK1, with the help of a lot of amazing people).
This, like a lot of my posts that aren't replies to threads, concerns #blind users of #linux. With that out of the way, let's get into it.
So, I have technically been able to do this for some time now, around three days or so, but only now did I get enough courage to actually do it. This involves a lot of courage indeed, because this is my primary and only system, and the experiment in question could have cost me speech on the computer, which is a pretty big deal.
So yeah, what did I do that's so dangerous? I stopped the PulseAudio server entirely, and by that I mean the Pulse compatibility layer for PipeWire, pipewire-pulse. As foolish as it might sound from the outside, I tried this to see if I could still get speech. Not so foolish after all, and in a show of spectacularly defied expectations, because I'm writing this now: I very much do. As an unintended consequence, my system feels snappier too. Incredible, right?
As many of you are pretty technical, since using Linux as a VI person kind of pushes you in that direction, it should come as no surprise that Speech Dispatcher uses PulseAudio, i.e. pipewire-pulse, to play sound; usually when that crashes, you get no speech. So then, how is this possible? No, I'm not using ALSA or any of the other audio backends in there. The rest of this huge post is devoted to that question, as well as to some background on why this matters and how things were before.

Note: this will probably only make a big positive difference to a particular group of people: those who care about the latency of their audio systems, either because you're musicians or want to be, work with specialised audio software, use a complicated hardware setup with lots of nodes that have to be balanced in just the right way, or simply because you can. For the rest of you, this may be a novelty; you may get some limited use out of it through decreased CPU load and snappier-feeling desktop interfaces, but mostly this is a nice read and some hyped enthusiasm, I suppose.
It all started when I was talking to a few Linux musicians, among them, if I recall correctly, someone who might have been involved in a DAW's development. I was talking about accessibility, and eventually someone told me that Speech Dispatcher requests a very low latency but then xruns quite a number of times, making PipeWire try to compensate a lot. I could be misremembering, but they also said it throws off latency-measuring software, perhaps including tools inside their DAW, because the way Speech Dispatcher does things makes PipeWire increase the graph latency.
So then, what followed were a few preliminary discussions in the GNOME accessibility Matrix room about how audio backends work in spd. There I found out that the plugins mostly use a push model, meaning they push samples to the audio server as those become available, in variable sizes as well, after which the server makes the proper arrangements for some kind of sane playback. Incidentally or not, this is how a lot of apps work with regular PulseAudio, usually going, directly or indirectly, through libpulse's "simple" API, which lets one basically treat the audio device like some kind of file, where the library may also introduce buffering of its own before sending anything to Pulse. Of note: Pulse does have callback-based APIs too, but a lot of apps don't use them.
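The push model above can be sketched like this. This is a toy illustration, not the real libpulse API; all the names here are made up, but the real "simple" API works similarly in spirit (open a stream, then write blocking chunks into it as if it were a file):

```python
# Toy illustration of the push model: the app writes variable-sized chunks
# whenever it has samples ready, and the server takes whatever arrives and
# arranges smooth playback on its own schedule. Names are hypothetical.

class ToyPushServer:
    def __init__(self):
        self.buffered = []  # samples accepted by the server, not yet played

    def write(self, samples):
        # Accept any amount of audio at any time; buffering and smoothing
        # are the server's problem, not the app's.
        self.buffered.extend(samples)

server = ToyPushServer()
for chunk in ([0.1] * 128, [0.2] * 512, [0.3] * 64):  # variable-sized pushes
    server.write(chunk)

print(len(server.buffered))  # total samples now queued server-side
```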
Back to the problem, though: this was more or less fine for Pulse, because Pulse didn't model the media graph in any way; there was simply no such concept there. You only had apps connected to devices via streams, so there was no way to get apps to synchronise their rates to something that could more or less be sent directly to the sound card. So when apps invariably started to diverge in the rate at which they pushed samples, Pulse, as far as I understand, took the latency of the slowest stream and added it to everyone else's to attempt to synchronise, because, after all, Pulse still had to mix and send everyone's frames to the sound card. And because there either was no polling model, or no one wanted to implement one, that was the best choice to make in such an environment.
Enter the low-latency servers, #jack and #pipewire. Here, minimising latency is the most important thing, so samples have to be sent to the sound card as soon as possible. The strategy I outlined above wouldn't work here, which gets us neatly to the concept of an audio graph: basically, all the sound sources that can play or capture on your system, as well as exactly where sound is played to and captured from. Because of the low-latency requirement, this graph has to be polled all at once and return samples similarly fast, in what the graph driver calls a cycle. The amount of audio apps can buffer before they're called again, i.e. the graph cycle duration, is user-adjustable: in JACK via the buffer size, in PipeWire via the quantum setting.

But then, what happens to apps that don't manage to answer as fast as the server calls them? Simple, even simpler than the answer of Pulse, ALSA and friends, with their various heuristics for keeping sound smooth and inserting silence in the right places. The answer is: absolutely nothing at all. If an app doesn't finish returning its allotted buffer of samples, not one more or less than that, the app is considered to be xrunning (underrunning or overrunning, depending on how much of the buffer it managed to fill), and its audio, cut off abruptly, perhaps with a few bits of uninitialised memory in the mix, is sent to the sound card at a fixed time along with everyone else's. This is why you might hear spd crackle weirdly in VMs, and why you sometimes hear other, perfectly normal programs crackle for no good reason whatsoever. And this is additive, because the crackling spreads through the entire graph: those samples play distorted on the same sound card as everything else, so everyone else's samples get kind of corrupted by that too. But obviously, if it can get worse, it will, unfortunately, for those who didn't just down-arrow past this post.
There are a few mechanisms for reducing the perceived crackling from apps that xrun a lot. For example, apps with very low sample rates, like 16 kHz (yes, phone-call quality in 2024; speaking of Speech Dispatcher), can get resampled internally by the server, which may improve latency at the cost of quality you were going to lose anyway at such a sample rate, though the CPU has to work more and the whole graph may again be delayed a bit. Or, if an app xruns a lot, it can be forcefully disconnected by PipeWire; alternatively, the graph cycle time can be raised at runtime, by the user or by a session manager acting on the user's behalf, to attempt to compensate. That will never go as far as regular Pulse does, but it's enough to throw off latency-measuring and audio-calibration software.
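The rate math behind that resampling case looks like this; a deliberately crude nearest-neighbour sketch (real servers use proper filters, but the ratio arithmetic is the same):

```python
def resample_nearest(samples, src_rate=16_000, dst_rate=48_000):
    """Crude nearest-neighbour resampling: with a 16 kHz stream on a
    48 kHz graph, each source sample is simply held for 3 output
    samples. The extra CPU work scales with the rate ratio."""
    ratio = dst_rate / src_rate
    return [samples[int(i / ratio)] for i in range(int(len(samples) * ratio))]
```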
So, back to Speech Dispatcher. After hearing this stuff, and piecing together the explanation above from various sources, I talked with the main Speech Dispatcher maintainer in the GNOME a11y room and came to the conclusion that, first, the xrunning thing is either a PipeWire issue or a bug in the spd audio plugins which should be fixed, but more importantly, second, that I should try to write a PipeWire audio backend for spd, because PipeWire is a very low-latency sound server, but also because it's the newest one, and so on.
After about two weeks of churn and fighting memory corruption issues (because C really is that unsafe, and I appreciate Rust all the more for it; also, I hate autotools with a passion), my PR is now basically on the happy path, in a state where I could write this very message with it. Even on my ancient system I can feel the snappiness. This really does make a difference, albeit a small one, so I can't wait for it to reach people.
If you get a package update for Speech Dispatcher, and if you're on Arch you will sooner or later, make sure you check the changes (release notes, or however your package repositories call them). If you see something saying that PipeWire support was added, I would appreciate it if as many of you as possible would test it out, especially where low-latency audio is required, and see whether the crackling you mysteriously experienced from time to time with the Pulse implementation goes away with this one. If there are issues, feel free to open them against Speech Dispatcher, mention them here or in any Matrix room we share, DM me on Matrix or here, etc. For the many adventurers around here, I recommend testing it early by manually compiling the pull request (I think it's the last one, the one marked as draft): set the audio output method to pipewire in speechd.conf, then, if you feel even more adventurous, replace your system default with the one you just built by running make install. And have fun!
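Assuming the PR keeps Speech Dispatcher's existing configuration convention (the AudioOutputMethod directive already exists in speechd.conf; the "pipewire" value is what the new backend would presumably register), the change would look something like:

```
# speechd.conf (typically under /etc/speech-dispatcher/ or ~/.config/speech-dispatcher/)
AudioOutputMethod "pipewire"
```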
I tested this with Orca, as well as with other applications that use Speech Dispatcher, for example Kodi and RetroArch, and everything works well in my experience. If you're the debugging sort, and upon running your newly built speechd with PIPEWIRE_DEBUG=3 you get some "client missed 1 wakeups" errors, the PipeWire devs tell me that's down to kernel settings, especially scheduler-related ones. If you want those to go away, you need a kernel configured for low-latency audio, for example Liquorix (or however that is spelled), though there are others as well. I would suggest you ignore them and go about your day, especially since you don't see this unless you crank up PipeWire's debugging a lot, and even then it might just be buggy drivers on my very old hardware.
In closing, I'd like to thank everyone in the GNOME accessibility room, and in particular the spd maintainer: he helped me a lot when I was debugging issues related to how spd works with its audio backends, what the fine print of the implicit contracts is, and so on. Also, C is incredibly hard, especially at this scale. I can say with confidence that this is the biggest piece of C code I have ever written, and I would definitely not want to repeat the experience for a while; none of the issues I hit during these roughly two weeks of development and troubleshooting would have happened in Rust, or even Go, or, yeah, you get the idea. I would have written the thing in Rust if I knew enough autotools to hack it together, but even then I knew it would have been much harder to merge, so I didn't seriously consider it. To that end, lots of thanks as well to the main PipeWire developer, who helped me when gdb and other tools got me nowhere near solving those segfaults, or when I was troubleshooting barely intelligible speech full of corruption and other artefacts from reading invalid memory.
All in all, this has been a valuable experience, and a wonderful time trying to contribute to one of the pillars of accessibility on the Linux desktop, to what's usually considered a core component. To this day I still have to internalise the fact that I actually did it, that it's happening for real. As far as I'm concerned, we have realtime speech on Linux now, something I don't think NVDA with WASAPI even comes close to, though that's an opinion I don't know how to back up with benchmarks. If you know ways, I'm open to your ideas; better yet, actual benchmark results comparing the Pulse and PipeWire backends would be even nicer, but I have no idea how to even begin with that.
Either way, I hope everyone, here or otherwise, has an awesome time testing this stuff out. Through all the pain of wrangling C (or my skill issues) into shape, I certainly had a lot of fun and thrills developing it, and now I'm passing it on to you. May those bugs be squished flat!
Is there a deep reason why #PipeWire doesn't prefer my plugged-in HDMI sink over my non-plugged-in on-board codec? I was going to rant, but perhaps I've missed something about the PipeWire philosophy, and I'm open to learning something new about Linux audio.
I'm only annoyed by this because the machine in question is #NixOS with Impermanence; I haven't yet told it how to persist sound settings between boots, so it can't. I recognize that PipeWire's answer of "just open up whatever you *used to use* and reassign the inputs *once*" is a great answer.
omgubuntu.co.uk/2024/09/tauon-…
Okay, thank you #fediverse! My #pipewire / #cider / #ubuntu woes are solved 😎
Thanks to @stevenixon, @BradRubenstein and @korvroffe for suggesting a combination of mkchromecast (to discover and connect to the chromecast devices on my network and create a virtual audio device) and qpwgraph for building a virtual patch bay to route audio from apps to devices (similar to loopback on macOS)
A little #PipeWire tip I learned today: if you want to use OBS Studio with a Google Meet session in Firefox, start OBS Studio first. If you start Firefox first, it will set a format for the camera that OBS Studio does not support, and you get a black box in OBS Studio. Starting OBS Studio first works fine.
Wim is working on a proper solution, adding video conversion to PipeWire, but for now the ordering does matter.
phoronix.com/news/NVIDIA-560.2…
#linux #pipewire #nvidia
NVIDIA 560 Linux Driver Beta Released - Defaults To Open GPU Kernel Modules
NVIDIA today released their first Linux beta driver in the new R560 driver release branch. www.phoronix.com
blogs.gnome.org/uraeus/2024/06…
#linux #fedora #instructlab #granite #articifialintelligence #gnome #pipewire #toolbx
If you use #pipewire for cameras you can now (in the upcoming 1.2) enforce specific rotations via node rules. This is useful on devices with rotated cameras that don't use a DT and #libcamera or for testing (e.g. to find out the correct rotation of a phone camera). The rotation is respected by an increasing number of apps, notably #gstreamer based ones (like Snapshot - but not Cheese) and #firefox (if you enable PW cameras via `media.webrtc.camera.allow-pipewire`).
See gitlab.freedesktop.org/pipewir…
Webcam is upside down when pipewire is running. (#4034) · Issues · PipeWire / pipewire · GitLab
PipeWire version (pipewire --version): 1.0.7. Distribution and distribution version (PRETTY_NAME from /etc/os-release): Ubuntu Oracular Oriole (development branch). Desktop Environment: Sway Wayland... GitLab
#firefox #webrtc using #pipewire and #libcamera (with softwareISP) on a #thinkpadx13s - it finally works
The required patches will also make things work for a bunch of #linuxmobile devices.
flathub.org/apps/io.github.dim…
#linux #pipewire #flatpak #multimedia
💻 Last week I had a great time at the #GStreamer #hackfest ! I decided to take the opportunity there to hack on the #PipeWire GStreamer elements and here's my story: gkiagia.gr/2024-06-04-hacking-…
#GstHackfest @gstreamer @pipewire
Hacking on the PipeWire GStreamer elements
Last week I attended the GStreamer spring hackfest in Thessaloniki. It was very nice to meet all the usual people again, as it’s been a while (I last attended a GStreamer event in 2022), and we had a great time! George Kiagiadakis
Streaming: TSB at Faircamp! | THE SMILING BUDDHAS
The Smiling Buddhas at the base [records] Faircamp! www.base.at
github.com/libsdl-org/SDL/pull…
#pipewire #linux #sdl
camera: add PipeWire camera support by wtay · Pull Request #9723 · libsdl-org/SDL
The PipeWire camera will enumerate the pipewire Video/Source nodes with their formats. When capturing is started, a stream to the node will be created and frames will be captured. Description Exi... GitHub
Just want to quickly share with #linuxmobile folks that the new #libcamera softwareISP does indeed work with the #librem5 - and with a #PipeWire + #GStreamer pipeline. Here's a first image running Warp (from Flathub).
There's still some stuff to iron out to make this work reliably and ship to users - but things are falling into place.
📆 Next Thursday at #EmbeddedOSSummit in Seattle, don't miss my colleague Julian Bouzas' presentation on #WirePlumber smart filters! There will also be a live stream, in case you are not attending in person.
Learn more at: eoss24.sched.com/event/1aBG9
#osssummit #embedded #pipewire
WirePlumber 0.5: Bringing Smart Audio Filters to PipeWire - Julian Bouzas, Collabora
WirePlumber is the default session manager of PipeWire, the multimedia server that has become the standard for low-latency audio, Bluetooth, video capture and many more use cases on modern Linux systems. eoss24.sched.com
As promised, a basic migration guide on how to transform your #WirePlumber 0.4 config files into the 0.5 format is now available! 🥳
A long-pending post from @sanchayan on how we implemented ALSA compress offload support in @pipewire
asymptotic.io/blog/pipewire-co…
#PipeWire #LinuxAudio #Linux #audio #embedded #IOT
asymptotic.io ~ Supporting ALSA compressed offload in PipeWire
Boutique open source consulting firm, specialising in multimedia and other low-level systems software. asymptotic.io
I put together a recording setup guide for Fedora Linux using OBS Studio. GUI only, no messing with config files.
📢 WirePlumber 0.5.0 is out! 🎉 Get it while it's hot: gitlab.freedesktop.org/pipewir…
#PipeWire #WirePlumber #release #announcement
0.5.0 · PipeWire / wireplumber · GitLab
Changes: Bumped the minimum required version of PipeWire to 1.0.2, because we make use of the 'api.bluez5.internal' property of the BlueZ monitor... GitLab
pipewiresrc element does not report it's latency (#30) · Issues · PipeWire / pipewire · GitLab
Created by: ndufresne I was testing pipewire, notice that I was using Fedora 27 released version, I notice that all video frame were late. It... GitLab
blogs.gnome.org/uraeus/2024/03…
#pipewire #linux #fedora
OBS Studio 30.1 Released with AV1 Support for VA-API, #PipeWire Camera Source, and Much More 9to5linux.com/obs-studio-30-1-…
@pipewire #Linux #OpenSource #FreeSoftware
OBS Studio 30.1 Released with AV1 Support for VA-API, PipeWire Camera Source - 9to5Linux
OBS Studio 30.1 open-source screencasting and streaming app is now available for download with PipeWire Camera source and AV1 VA-API support. Marius Nestor (9to5Linux)
The video of the talk from Wim Taymans about #PipeWire at #fosdem is released.
video:
video.fosdem.org/2024/ub4132/
talk:
fosdem.org/2024/schedule/event…
What a fantastic talk! Really helpful and interesting, even though I follow the project's progress quite closely.
Audio quality was much better than last year.
phoronix.com/news/OBS-Studio-3…
#linux #pipewire #fedora #obsstudio #libcamera
OBS Studio 30.1 Beta Released With AV1 For VA-API & AV1 For WebRTC/WHIP Output
Following the release of OBS Studio 30.0 last November, OBS Studio 30.1 Beta 1 was released today as what will be the next feature release for this open-source software that is popular with livestreamers and other game streaming / desktop recording p… www.phoronix.com
PipeWire 1.0 Officially Released » Linux Magazine
PipeWire was created to take the place of the oft-troubled PulseAudio and has finally reached the 1.0 status as a major update with plenty of impro... Linux Magazine
linuxunplugged.com/538 #linux #linuxunplugged #pipewire
Surprisingly Smooth Transition
PipeWire hits 1.0, and Wim Taymans joins us to reflect on the smooth success of PipeWire. Plus the details on the first NixCon North America, and more. LINUX Unplugged
Fedora Magazine has an interview up with Wim Taymans about the PipeWire 1.0 release and plans going forward.
fedoramagazine.org/pipewire-1-…
#linux #pipewire #fedora #audio
PipeWire 1.0 - An interview with PipeWire creator Wim Taymans - Fedora Magazine
With PipeWire hitting its 1.0 release, we speak with project lead Wim Taymans about what has been achieved and where we go from here. Christian Fredrik Schaller (Fedora Project)
@dino 0.4.3 just got released with some exciting improvements for #LinuxMobile
1. Several fixes for touch input, making audio/video calls actually usable on phones
2. Fixes for video support so devices with #libcamera / #pipewire support like the #PinePhonePro work now
3. The app is now recognized as mobile friendly on #Phosh
4. Stricter #Flatpak sandbox - no device/all any more
The new version is available on #Flathub and lots of distro repos.
This is @halfmexican, a GNOME Outreachy student, excited that their effort to make a modern and sandboxed Camera demo for Workbench has paid off 🛠️
Well done! 🎉
Thanks @philn and @slomo for your help!
#GNOME #Outreachy #development #students #GStreamer #PipeWire #Flatpak #freedesktop #libcamera #GTK