The NATO phonetic alphabet we use today went through numerous permutations along the way.
This video chronicles that development.
youtu.be/UAT-eOzeY4M?si=35f3ZM…
#communications #HamRadio #AmateurRadio
in reply to Chris Smart

Alpha said “Bravo, Charlie” and Delta Echoed the sentiment. We danced the Foxtrot at one of those Golf Hotels (it was in India) when Juliet (who had put
on a few Kilos) gave a Lima bean to Mike. Last November, Oscar’s Papa went to Quebec to meet Romeo. He wore a Sierra and Tango coloured Uniform. Meanwhile
Victor drank Whiskey as he looked at the X-ray of the Yankee, whose arm was broken by a Zulu.

The Best Press Release Writing Principles in the AI Era


Stumbled upon this brilliant article that I think everyone can learn from.

For people working with media and PR, it's quite easy to spot AI-generated press releases. AI output is wordy and repetitive to the point of being annoying. Without human revision, it's hard to read.

But even before the AI era, many press releases were full of jargon and quite difficult to read.
The reason is simple, and it's cleverly pointed out by this article I read on PR News Releaser: “Think Like a Reader” is the Best Press Release Strategy -- ... they’re written for the wrong audience... disconnect between what companies want to say and what readers actually want to read.

How true that is.

Even highly educated people would appreciate a press release written in simple words with clear explanations, not generic self-praising, self-promoting sentences.

The article also provides clever strategies for convincing your boss that writing for the reader is the right way to compose a press release. Check it out.

in reply to Terence Eden

I am being slightly disingenuous here.

Some of the many advantages of LLMs over SO are that the LLM is friendly and fast.

That's why Kids Today™ prefer Discord to forums. Someone replies immediately. You don't have to wait for an answer and check back.

LLMs are rarely rude. It can be terrifying to make yourself vulnerable and admit in public you don't know something basic. Especially when humans are mean and judgemental.

Everyone knows the weekends are the best time to push important updates, right?

From a Jeep Wrangler forum: Did anyone else have a loss of drive power after today's OTA Uconnect update?

On my drive home I abruptly had absolutely no acceleration, the gear indicator on the dash started flashing, the power mode indicator disappeared, an alert said shift into park and press the brake + start button, and the check engine light and red wrench lights came on. I was still able to steer and brake with power steering and brakes for maybe 30 seconds before those went out too. After putting it into park and pressing the brake and start button it started back up and I could drive it normally for a little bit, but it happened two more times on my 1.5 mi drive home.

Source: x.com/StephenGutowski/status/1…

More here: jlwranglerforums.com/forum/thr…

and here: news.ycombinator.com/item?id=4…

Oops, forgot to let our followers here know that we released 0.87.1 last week! This version includes a fix for a small bug that crept into 0.87.0 that prevented syncing of weightlifting workouts on Garmins.

But the bigger news is that we also managed to get reproducible builds at F-Droid in place in time for this release!

As usual, more details in our blog post: gadgetbridge.org/blog/release-…

in reply to Gadgetbridge

You mention that you're publishing both your self-signed and the F-Droid signed build on F-Droid? Do you have more details about how that works and what you had to do to set that up?

I've wanted to have Catima be RB on F-Droid without breaking existing updates for quite a while, but I didn't really manage to make any progress when trying to talk to the @fdroidorg team, so I'd love to know how you got this working :)

We're looking for interesting questions around @matrix, its history, its technology, statistics and fun facts for The #MatrixConf2025 Pub [quizzz]!

Do you have suggestions? Please share them with the conference team in the following form: forms.gle/6tbry4Zdzb1fYVfx5 or contact us at #events-wg:matrix.org


It's always funny to see people who say "I did this with the help of AI because no one else seems to have done it before, and I didn't know how to do it either, so I used AI for that."
Thing is, the fact that the AI could do it for you basically means that it has been done before and the AI was trained on it.
What you actually wanted to say is: "I spent some time rebuilding someone else's work because I wasn't able to find it on Google."
I know this is overdramatized, but it's also not totally wrong.

Matt Campbell reshared this.

in reply to Toni Barth

You are partially correct, but this is an oversimplification of how an AI model, for example an LLM, works. It can, and does, use data that it got during its training phase, but that's not the entire story; otherwise it'd just be a database that regurgitates what it was trained on. On top of the trained data there's zero-shot learning, for example figuring out a dialect of a language it hasn't been trained on based on the statistical probabilities encoded in its weights, as well as combining existing patterns into new patterns, thus coming up with new things, which is arguably part of creativity.

What it can't do, though, and this is very likely what you mean, is go outside its pre-trained patterns. For example, if you have a model that was trained on dragons and another model that was trained on motorcycles, combining those two models can produce a story where a dragon rides a motorcycle, even though that story was not part of either one's training data. What it can't do is come up with a new programming language, because that specific pattern does not exist. So the other part of creativity, where you'd think outside the box, is a no-go. But a lot of people's boxes are different, and they are very likely not as vast as what the models were trained on, and that's how an AI model can be inspiring.

This is why a lot of composers feel that AI is basically going to take over eventually, because they will have such a vast amount of patterns that a director, trailer library editor, or other content creator will be satisfied with the AI's results. The model's box will be larger than any human's.


in reply to Erion

@erion @menelion Most of the generative capabilities of an LLM come from linear algebra (interpolation) and statistical grammar compression. We can bound the capabilities of a model by considering everything that can be achieved using these tools: I've never seen this approach overestimate what a model is capable of.

"Zero-shot learning" only works as far as the input can be sensibly embedded in the parameter space. Many things, such as most mathematics, can't be viewed this way.

in reply to wizzwizz4

It never will, because modern LLMs are far more capable.

They rely on non-linear activation functions (like ReLU, GELU, etc.) after the linear transformations. If the network were purely linear, it could only learn linear relationships, regardless of its depth. The non-linearities are what allow the network to learn complex, non-linear mappings and interactions between inputs.
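
To make the point about non-linearity concrete, here's a tiny numpy sketch (toy random weights, not any real model): two stacked linear layers with no activation collapse into a single matrix, while putting a ReLU between them breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first "layer" (toy weights)
W2 = rng.normal(size=(2, 4))   # second "layer"
x = rng.normal(size=3)

# Without an activation, two stacked linear layers collapse into one matrix:
# W2 @ (W1 @ x) == (W2 @ W1) @ x, so extra depth adds no expressive power.
assert np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x)

def relu(v):
    return np.maximum(v, 0.0)

# With a ReLU in between, the composition is no longer a single matrix:
# in general no single M satisfies M @ x == W2 @ relu(W1 @ x) for every x,
# which is what lets deep networks learn non-linear mappings.
print(W2 @ relu(W1 @ x))
```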

There's also scaling, arguably an internal world model, and context awareness (which is definitely not something linear). If anything, this approach would underestimate a model.


in reply to Erion

@erion @menelion I'm aware that models are non-linear functions, but they operate over elements of a linear ("vector") space. Each layer can be viewed as a non-linear map between vector spaces. Think "dog is to cat as puppy is to ???": given a suitable embedding, that's a linear algebra problem. This is responsible for most of the observed "intelligence" of LLMs, and for phenomena like vgel.me/posts/seahorse/.
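
For what it's worth, here's what I mean by "that's a linear algebra problem", as a toy sketch: the embeddings below are hand-made 3-d vectors, purely illustrative and not taken from any real model.

```python
import numpy as np

# Hand-made toy "embeddings" (illustrative only, not from a real model):
# one axis loosely encodes dog-ness vs cat-ness, another adult vs young.
emb = {
    "dog":    np.array([1.0, 0.0, 1.0]),
    "cat":    np.array([0.0, 1.0, 1.0]),
    "puppy":  np.array([1.0, 0.0, 0.0]),
    "kitten": np.array([0.0, 1.0, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "dog is to cat as puppy is to ???" as vector arithmetic:
query = emb["cat"] - emb["dog"] + emb["puppy"]
answer = max((w for w in emb if w != "puppy"), key=lambda w: cosine(emb[w], query))
print(answer)  # -> "kitten" in this toy space
```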
in reply to wizzwizz4

If you think of how the self-attention mechanism dynamically and non-linearly re-weights every input vector based on its full context, essentially setting up the relationships needed for things like chain-of-thought reasoning, planning and deep contextual understanding, you can't reduce a model's intelligence to mere vector arithmetic. In the past this was absolutely true and you could rely on it, no problem, but nowadays models go far beyond that, having at least a hundred or more layers. Hence why I said that this view will always underestimate a model. If you look at the smaller models that came out in the last year alone, that kind of estimate would predict them to be far less capable than they really are.
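
Roughly what I mean, as a bare-bones single-head self-attention sketch (random toy weights and shapes, nothing trained): the attention weights A are computed from the inputs themselves, so how the token vectors get mixed changes whenever the context changes.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8                        # toy embedding width
X = rng.normal(size=(5, d))  # 5 token vectors (made-up inputs)

# Random projections standing in for learned query/key/value parameters.
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = X @ Wq, X @ Wk, X @ Wv
A = softmax(Q @ K.T / np.sqrt(d))  # weights depend on the inputs themselves
out = A @ V                        # context-dependent mixture of value vectors
print(A.round(2))
```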


in reply to Toni Barth

It's not totally wrong, but I feel like maybe it's a slight oversimplification. LLMs don't just outright copy the training data; that's why it's called generative AI. That doesn't mean they will never reproduce anything in the training set, but they are very good at synthesizing multiple concepts from that data and turning them into something that technically didn't exist before.

If you look at something like Suno, which uses an LLM architecture under the hood, you're able to upload audio and have the model try to "cover" that material. If I upload myself playing a chord progression/melody that I made up, the model is able to use its vast amount of training data to reproduce that chord progression/melody in whatever style.

It's really important for everyone to read about the theory of appeasement and how it has *never* worked.

--

The catastrophes of World War II and the Holocaust have shaped the world’s understanding of appeasement. The diplomatic strategy is often seen as both a practical and a moral failure.

Today, based on archival documents, we know that appeasing Hitler was almost certainly destined to fail. Hitler and the Nazis were intent upon waging an offensive war and conquering territory. But it is important to remember that those who condemn Chamberlain often speak with the benefit of hindsight. Chamberlain, who died in 1940, could not possibly have foreseen the scale of atrocities committed by the Nazis and others during World War II.

---

We have the hindsight today. Let's not make the same mistakes.

encyclopedia.ushmm.org/content…