in reply to feld

@feld
Back when OSS was designed, keeping an output buffer filled to avoid stuttering or reading from a microscope input source before the ring buffer looped and you lost samples was hard. You basically wanted to read or write whenever you had cycles because otherwise you couldn’t keep up.

Since then, computers have become a lot faster and now sound is a very low data rate device. Rather than hammering the sound device as often as you can, you want to be told the microphone buffer has passed some watermark level (so you can process a reasonable number of samples at once) or the sound output buffer is below a watermark level (so you can give it a few ms more samples to write for interactive things, or a few seconds for things like music playback).

Things like music are great for this because you can decode a few tens of second and then sleep in a kevent loop just passing a new chunk to the device whenever it has a decent amount of space.

@feld
in reply to feld

@feld @david_chisnall
More generally: #kqueue still has several ragged edges, compared to poll/select.

tty0.social/@JdeBP/11457405478…

tty0.social/@JdeBP/11457514245…

Every little helps in order to fill in all of these gaps.

#FreeBSD

Periodic self-repetition: As a data librarian I can say that "AI" is not a matter of personal preference -- whether you like it or not, or whether you have found some use that you think is useful. It actively destroys organized knowledge, and therefore it actively destroys civilization.

Whenever someone looks for a human written text and can't find it because statistical near variants have been created and indexed, whenever "AI" "hallucinates" a reference, knowledge has been destroyed.

I think background music in public places -- stores, hair salons, dentist's offices, etc. -- might be generally a bad idea. It's impossible to pick music that pleases everyone, we can listen to music as much as we want in private, and background music tends to just add to the noise (on that last point I'm reminded of this song: youtube.com/watch?v=yzEncLnmUe…).

Was thinking about this as my mother and I were at Great Clips waiting to get our hair cut. I'm guessing she didn't like the music.

in reply to Matt Campbell

It's a coordination / discrimination problem. It ultimately adds more profits than it takes away, hence why business owners won't stop doing it on their own. While the positive impact is spread through society, the negative impact is mostly concentrated in a few particularly sensitive individuals. This makes it an ideal target for regulation, and one of a few situations where I actually think regulation makes sense.

**Tears of joy alert!!! "On Alaska's frozen shoreline, oil rig workers made a discovery that stopped them cold—a walrus calf, alone and wailing, separated from his mother in waters over 50 miles away. Most walrus pups don't survive 24 hours without maternal contact. This one had already been crying for days.
The Alaska SeaLife Center team didn't hesitate. They designed something unprecedented: round-the-clock "cuddle therapy." Staff members now work in rotating shifts, bottle-feeding every three hours while cradling the 85-pound infant against their chests, mimicking the constant warmth he'd know from his mother. They hum. They rock. They never leave him alone. The transformation has been miraculous. Within weeks, the calf—who arrives limp and dehydrated—now nuzzles into his caregivers' arms, makes happy chirping sounds, and has gained 12 pounds. He recognizes voices. He reaches for familiar faces. Sometimes survival is just about showing up with love."
#Alaska #AlaskaSeaLifeCenter #WalrusPup #Love

"Canada Post exists to serve people, not shareholders, just like other many essential services that 'cost' Canadians millions per day. Think about it: Long-term care and personal support workers cost Canadian taxpayers millions a day. Should we close the old folks’ homes and put our seniors on the street? Public transit costs millions a day. Should we fire the bus drivers and make everyone walk? Public school support staff — crossing guards, lunch monitors, custodians — cost us millions a day. Do we fire them all and make those lazy teachers do everything?"

Maybe we'd finally be 'productive,' or 'ambitious,' or 'competitive' enough, then.

I’m a letter carrier. Canada Post exists to serve people, not shareholders, just like other essential services
thestar.com/opinion/letters-to…

archive.is/cchRs

Bored this Sunday? Use your downtime to learn How to Synth! Dive into the wonderful world of making weird synthesizer noises with my simple, hands-on guide. Still a work in progress, but there's plenty there to get you started!

etherdiver.com/how-to-synth-a-…

#synthesizer #SoundDesign

Паглядзіце дакументалку пра НРМ.

youtu.be/e49klkZZHXw?si=NE1GHO…

for you hax0rs: Google "AI" is currently vulnerable to prompt injection by "ASCII smuggling"—this is when you convert ASCII to Unicode tag characters, rendering them invisible to the user but visible to the LLM. here's how it's done:
gist.github.com/Shadow0ps/a7dc…

here's someone using this to make Google Calendar display spoofed information about a meeting:
firetail.ai/blog/ghosts-in-the…

others say summarising functions were affected too, so I wonder if you can add tag texts to your website and poison the Google so-called "AI summary" anti-feature.

ChatGPT filters out tag character but, usefully, Google is refusing to, so unless they get a backlash this might be a fun exploit to explore: pivot-to-ai.com/2025/10/11/goo…

Well, well, I guess this had to happen eventually. Got my first music offer rejected by a film studio working on a documentary because "we got this covered by AI, but thank you very much for your kind offer."

The rejection itself doesn't really make me sad, this is perfectly normal and especially now the competition is huge. What makes me sad though is that this is a pretty big name in the industry, so it'll only get worse from here on. I sincerely hope I'm wrong.

This entry was edited (1 day ago)

The NATO phonetic alphabet that we use today had numerous permutations along the way.
This video chronicles that development.
youtu.be/UAT-eOzeY4M?si=35f3ZM…
#communications #HamRadio #AmateurRadio
in reply to Chris Smart

Alpha said “Bravo, Charlie” and Delta Echoed the sentiment. We danced the Foxtrot at one of those Golf Hotels (it was in India) when Juliet (who had put
on a few Kilos) gave a Lima bean to Mike. Last November, Oscar’s Papa went to Quebec to meet Romeo. He wore a Sierra and Tango coloured Uniform. Meanwhile
Victor drank Whiskey as he looked at the X-ray of the Yankee, whose arm was broken by a Zulu.

The Best Press Release Writing Principals in AI era


Stumble upon this brilliant article that I think everyone can learn from it

For people working with media and PR, it's quite easy to spot AI generated press releases. AI output is wordy, repeating to the point of annoying. Without human revision, it's hard to read.

But even before the AI era, many press releases are full of jargons that are quite difficult to read.
The reason is simple and cleverly pointed out by this article I read on PR News Releaser: “Think Like a Reader” is the Best Press Release Strategy -- ... they’re written for the wrong audience... disconnect between what companies want to say and what readers actually want to read.

How true is that.

Even highly educated people would appreciate a press release written in simple words and clear explanations, not just generic self-praising, self-promotion sentences.

The article also provide clever strategies on how to convince your boss that writing to the reader is the right way to compose a press release. Check it out.

in reply to Terence Eden

I am being slightly disingenuous here.

Some of the many advantages of LLMs over SO is that the LLM is friendly and fast.

That's why Kids Today™ prefer Discord to forums. Someone replies immediately. You don't have to wait for an answer and check back.

LLMs are rarely rude. It can be terrifying to make yourself vulnerable and admit in public you don't know something basic. Especially when humans are mean and judgemental.

Everyone knows the weekends are the best time to push important updates, right?

From Jeep Wrangler forum: Did anyone else have a loss of drive power after today's OTA Uconnect update?

On my drive home I abruptly had absolutely no acceleration, the gear indicator on the dash started flashing, the power mode indicator disappeared, an alert said shift into park and press the brake + start button, and the check engine light and red wrench lights came on. I was still able to steer and brake with power steering and brakes for maybe 30 seconds before those went out too. After putting it into park and pressing the brake and start button it started back up and I could drive it normally for a little bit, but it happened two more times on my 1.5 mi drive home.

Source: x.com/StephenGutowski/status/1…

More here: jlwranglerforums.com/forum/thr…

and here: news.ycombinator.com/item?id=4…

Oops, forgot to let our followers here know that we released 0.87.1 last week! This version includes a fix for a small bug that crept into 0.87.0 that prevented syncing of weightlifting workouts on Garmins.

But the bigger news is that we also managed to get reproducible builds at F-Droid in place in time for this release!

As usual, more details in our blog post: gadgetbridge.org/blog/release-…

in reply to Gadgetbridge

You mention that you're publishing both your self-signed and the F-Droid signed build on F-Droid? Do you have more details about how that works and what you had to do to set that up?

I've wanted to have Catima be RB on F-Droid without breaking existing updates for quite a while, but I didn't really manage to make any progress when trying to talk to the @fdroidorg team, so I'd love to know how you got this working :)

in reply to Sylvia

@SylvieLorxu Sure, no problem! In the end it wasn't difficult because our build process was already reproducible. So we only had to find the correct way to update our F-Droid metadata.

High level it works like this:
- build&sign the apk
- extract signatures with "fdroid signatures <out_filename>"
- create MR in F-Droid like this: gitlab.com/fdroid/fdroiddata/-…

For our next release, we won't have autoupdates. We'll need to create an fdroid MR with a copy of the last metadata and the new signatures.

We're looking for interesting questions around @matrix , its history, its technology, statistics and fun facts for The #MatrixConf2025 Pub [quizzz]!

Do you have suggestions? Please share them with the conference team in the following form: forms.gle/6tbry4Zdzb1fYVfx5 or contact us at #events-wg:matrix.org

This entry was edited (2 days ago)

Its always funny to see people who say "I did this with the help of AI because noone else seems to have done it before, and I didn't know how to do it either, so I used AI for that."
Thing is, the fact that AI could do it for you basically means that it has been done before and AI trained on it.
What you actually wanted to say is: "I spent some time rebuilding someone else's work because I wasn't able to find it on Google."
I know this is overdramatized, but also not totally wrong.

Matt Campbell reshared this.

in reply to Toni Barth

You are partially correct, but this is an oversimplification of how an AI model, for example a LLM works. It can, and does, use data that it got during its training phase, but that's not the entire story, otherwise it'd be called a database that regurgitates what it was trained on. On top of the trained data there's zero-shot learning, for example to figure out a dialect of a language it hasn't been trained on, based on statistical probability of weights from the trained data, as well as combine existing patterns into new patterns, thus coming up with new things, which are arguably part of creativity.

What it can't do though is, and this is very likely what you mean, it can't go outside it's pre-trained patterns. For example, if you have a model that was trained on dragons and another model that was trained on motorcycles, if you combine those two models, they can come up with a story where a dragon rides a motorcycle, even though that story has not been part of its training data. What it can't do is come up with a new programming language because that specific pattern does not exist. So the other part of creativity where you'd think outside the box is a no go. But a lot of people's boxes are different and they are very likely not as vast as what the models were trained on, and that's how an AI model can be inspiring.

This is why a lot of composers feel that AI is basically going to take over eventually, because they will have such a vast amount of patterns that a director, trailer library editor, or other content creator will be satisfied with the AI's results. The model's box will be larger than any human's.

reshared this

in reply to Erion

@erion @menelion Most of the generative capabilities of an LLM come from linear algebra (interpolation), and statistical grammar compression. We can bound the capabilities of a model by considering everything that can be achieved using these tools: I've never seen this approach overestimate what a model is capable of.

"Zero-shot learning" only works as far as the input can be sensibly embedded in the parameter space. Many things, such as most mathematics, can't be viewed this way.

in reply to wizzwizz4

It never will, because modern LLMs are far more capable.

They rely on non-linear activation functions (like ReLU, GELU, etc.) after the linear transformations. If the network were purely linear, it could only learn linear relationships, regardless of its depth. The non-linearities are what allow the network to learn complex, non-linear mappings and interactions between inputs.

There's also scaling, arguably an internal world model, being context-aware (which is definitely not something linear). If anything, this would underestimate a model.

reshared this

in reply to Erion

@erion @menelion I'm aware that models are non-linear functions, but they operate over elements of a linear ("vector") space. Each layer can be viewed as a non-linear map between vector spaces. Think "dog is to cat as puppy is to ???": given a suitable embedding, that's a linear algebra problem. This is responsible for most of the observed "intelligence" of LLMs, and for phenomena like vgel.me/posts/seahorse/.
This entry was edited (1 day ago)
in reply to wizzwizz4

If you think of how the self-attention mechanism dynamically and non-linearly re-weights every input vector based on its full context, essentially setting up relationships needed for things like chain-of-thought reasoning, planning and deep contextual understanding, you can't reduce a model's intelligence to mere vector arithmetic. In the past, this was absolutely true and you could rely on it, no problem, but nowadays models go far beyond, having at least a hundred or more layers. Hence why I said that this will always underestimate a model. If you look at smaller models that came out in the last year alone, based on what you estimate, they should be far less capable than they really are.

reshared this

in reply to Erion

@erion A model having "a hundred or more layers" doesn't make anything I said less true. "Chain-of-thought reasoning" isn't reasoning. I absolutely can dismiss claims of "a model's intelligence", because I have not once lost this argument when concrete evidence has come into play, and people have been saying for years that I should.

Can you give me an example of something you think a "smaller model that came out in the last year" can do, that you think I would predict it can't?

in reply to wizzwizz4

Take a Gemma model for example, say 2b. Any linear prediction will not be able to predict how emergent capabilities will behave when faced with a complex task, simply because they don't work linearly, especially after crossing a scale threshold, rather than improving with each additional parameter.

You can see this as Gemma 2b can outperform larger models, which for you should not be possible. The model's intelligence is not a simple, additive function of its vector size, but a complex product of the billions of highly non-linear interactions created by the full architecture, making a purely linear prediction inadequate.

in reply to Erion

I shall humour you all! He's a timelord, he understands all this very well. Now, the serious part. You are giving baseless just to prove you're right, @erion at least gives actual examples. Not an AI person, but we definitely know whose side is better protected in the argument sense of things. I don't need an AI to read, after all. Unless neural engine voices count as one. :) So, here's your humour and here's us tired of people's baseless argument and points. Not a good xp, btw. And here's the fact that there's neither right or wrong in this.
in reply to Winter blue tardis

meta-argument

Sensitive content

in reply to wizzwizz4

meta-argument

Sensitive content

in reply to Winter blue tardis

@tardis @erion It's not about open-mindedness. We can bound the behaviour of any given architecture mathematically: this sets limits on the capabilities of a particular system.

You're right that the training data does not inherently constrain the behaviour of the model, but other things do: those are what I was referring to.

By "linear", I was specifically referring to the field of mathematics called "linear algebra", not to the metaphor of "staying in a lane".

in reply to wizzwizz4

If we accept it uses linear algebra, and linear algebra includes indices, vectors, matrices, and others, alright. But you forget it tries to predict, therefore, there are concepts of probability, statistics and calculous. Then there is deep learning, and training in one very big, very specific areas. The internet says, however, those are from last year, I am pretty sure AI has developed beyond these concepts, a specially linear ones, in order to choose. That's where reasoning and chain of thought probably enter. And wrile stuff like GPT might think linear, stuff like Gemma kind of does not. Although, I have managed to confuse them all, heavily. Because of probability, and because of the fact they cannot go beyond their reasoning, they are not great at say, I go to a friend, and tell them about the conversation with two other friends. That's where it starts showing, because humans are non-linear, we are an exception, and we all, besides the fundamental logic, analyze and reason differently. Our brain probably also relies on probability from decision A vs. B, by analyzing the consequences and aftermath in both, so even we use the probability and statistics concepts, but we are free to go out of them, and take a different path because of that exception of free will.And yes, my conversation analysis also confused both gemma and gpt. I don't know why I did it, I just decided what if? And boom. However, because patterns and your say of linear algebra, AI is very, very good at those. So, it's not only linear algebra, that's not where it ends, probability, statistics, averages, medians, calculous, those are all different concepts of math it uses in too, besides functions, instruction's, etc. And then there's deep learning which I have nothing to say about, because I haven't found anything on it.
in reply to wizzwizz4

Well, I think that's where we agree to disagree.

A model's intelligence can be somewhat predicted via linear algebra, I don't doubt this, but there are other factors that if you ignore, you will not get a correct prediction.

For example, Single linear layer operations cannot describe operations that take place accross multiple layers at the same time, hence the nonlinearity of a model. If you explain everything as just an operation per layer, you lose the complexity that this non-linearity gives you, you aare essentially oversimplifying it. All the things I mentioned contribute to this.

There are operations that take place in the non-linear subspace, especially for complex tasks, e.g. to compose multiple steps of an operation (reasoning).

Look at how the performance of smaller local models can completely fail at this, but the moment you hit a size treshold, its accuracy suddenly jumps.

To clarify, I don't disagree with you, model complexity just happens to be increasing and now you need multiple ways to measure a model's intelligence.

in reply to Toni Barth

It's not totally wrong, but I feel like maybe it's a slight oversimplification. LLMs don't just outright copy the training data, that's why it's called generative AI. That doesn't mean they will never reproduce anything in the training set, but they are very good at synthesizing multiple concepts from that data and turning them into something that technically didn't exist before.

If you look at something like Suno, which is using an LLM architecture under the hood, you're able to upload audio and have the model try to "cover" that material. If I upload myself playing a chord progression/melody that I made up, the model is able to use it's vast amount of training data to reproduce that chord progression/melody in whatever style.

It would be really important for everyone to read about the theory of appeasement and how it has *never* worked.

--

The catastrophes of World War II and the Holocaust have shaped the world’s understanding of appeasement. The diplomatic strategy is often seen as both a practical and a moral failure.

Today, based on archival documents, we know that appeasing Hitler was almost certainly destined to fail. Hitler and the Nazis were intent upon waging an offensive war and conquering territory. But it is important to remember that those who condemn Chamberlain often speak with the benefit of hindsight. Chamberlain, who died in 1940, could not possibly have foreseen the scale of atrocities committed by the Nazis and others during World War II.

---

We have the hindsight today. Let's not make the same mistakes.

encyclopedia.ushmm.org/content…

in reply to Rui Batista

Eu diria que quando um cidadão normal deixa de saber explicar como se contabilizam os votos de uma eleição já não há lugar para vergonha na democracia, porque a democracia já não existe. É esse o risco do voto eletrónico. É preciso encontrar uma solução para que quem não vê também possa votar, mas essa solução não pode colocar em risco a democracia.
This entry was edited (1 day ago)