in reply to Terence Eden

I am being slightly disingenuous here.

Some of the many advantages of LLMs over SO are that the LLM is friendly and fast.

That's why Kids Today™ prefer Discord to forums. Someone replies immediately. You don't have to wait for an answer and check back.

LLMs are rarely rude. It can be terrifying to make yourself vulnerable and admit in public you don't know something basic. Especially when humans are mean and judgemental.

Everyone knows the weekends are the best time to push important updates, right?

From a Jeep Wrangler forum: Did anyone else have a loss of drive power after today's OTA Uconnect update?

On my drive home I abruptly had absolutely no acceleration, the gear indicator on the dash started flashing, the power mode indicator disappeared, an alert said shift into park and press the brake + start button, and the check engine light and red wrench lights came on. I was still able to steer and brake with power steering and brakes for maybe 30 seconds before those went out too. After putting it into park and pressing the brake and start button it started back up and I could drive it normally for a little bit, but it happened two more times on my 1.5 mi drive home.

Source: x.com/StephenGutowski/status/1…

More here: jlwranglerforums.com/forum/thr…

and here: news.ycombinator.com/item?id=4…

Oops, forgot to let our followers here know that we released 0.87.1 last week! This version includes a fix for a small bug that crept into 0.87.0 that prevented syncing of weightlifting workouts on Garmins.

But the bigger news is that we also managed to get reproducible builds at F-Droid in place in time for this release!

As usual, more details in our blog post: gadgetbridge.org/blog/release-…

in reply to Gadgetbridge

You mention that you're publishing both your self-signed build and the F-Droid-signed build on F-Droid? Do you have more details about how that works and what you had to do to set that up?

I've wanted to have Catima be RB on F-Droid without breaking existing updates for quite a while, but I didn't really manage to make any progress when trying to talk to the @fdroidorg team, so I'd love to know how you got this working :)

We're looking for interesting questions around @matrix, its history, its technology, statistics and fun facts for The #MatrixConf2025 Pub Quizzz!

Do you have suggestions? Please share them with the conference team in the following form: forms.gle/6tbry4Zdzb1fYVfx5 or contact us at #events-wg:matrix.org


It's always funny to see people who say "I did this with the help of AI because no one else seems to have done it before, and I didn't know how to do it either, so I used AI for that."
Thing is, the fact that AI could do it for you basically means that it has been done before and the AI was trained on it.
What you actually wanted to say is: "I spent some time rebuilding someone else's work because I wasn't able to find it on Google."
I know this is overdramatized, but also not totally wrong.


in reply to Erion

@erion @menelion Most of the generative capabilities of an LLM come from linear algebra (interpolation) and statistical grammar compression. We can bound the capabilities of a model by considering everything that can be achieved using these tools: I've never seen this approach overestimate what a model is capable of.

"Zero-shot learning" only works as far as the input can be sensibly embedded in the parameter space. Many things, such as most mathematics, can't be viewed this way.

in reply to wizzwizz4

It never will, because modern LLMs are far more capable.

They rely on non-linear activation functions (like ReLU, GELU, etc.) after the linear transformations. If the network were purely linear, it could only learn linear relationships, regardless of its depth. The non-linearities are what allow the network to learn complex, non-linear mappings and interactions between inputs.
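To make the linear-collapse point concrete, here's a minimal NumPy sketch (the names and shapes are arbitrary illustrations, not taken from any real model): two stacked linear layers with no activation in between are exactly one pre-multiplied linear layer, and a single ReLU breaks that equivalence.

```python
# Minimal sketch: stacked linear layers collapse to one linear map,
# but a single ReLU in between makes the mapping non-linear.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # layer 1 weights (illustrative shapes)
W2 = rng.normal(size=(8, 3))   # layer 2 weights
x = rng.normal(size=(5, 4))    # a batch of 5 inputs

# Two linear layers with no activation...
deep_linear = (x @ W1) @ W2
# ...equal one linear layer with the pre-multiplied weight matrix.
collapsed = x @ (W1 @ W2)
print(np.allclose(deep_linear, collapsed))        # True

# Insert a ReLU and the equivalence disappears.
relu = lambda z: np.maximum(z, 0.0)
deep_nonlinear = relu(x @ W1) @ W2
print(np.allclose(deep_nonlinear, collapsed))     # False (in general)
```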

There's also scaling, arguably an internal world model, and context awareness (which is definitely not something linear). If anything, that approach would underestimate a model.


It would be really important for everyone to read about the theory of appeasement and how it has *never* worked.

---

The catastrophes of World War II and the Holocaust have shaped the world’s understanding of appeasement. The diplomatic strategy is often seen as both a practical and a moral failure.

Today, based on archival documents, we know that appeasing Hitler was almost certainly destined to fail. Hitler and the Nazis were intent upon waging an offensive war and conquering territory. But it is important to remember that those who condemn Chamberlain often speak with the benefit of hindsight. Chamberlain, who died in 1940, could not possibly have foreseen the scale of atrocities committed by the Nazis and others during World War II.

---

We have the hindsight today. Let's not make the same mistakes.

encyclopedia.ushmm.org/content…

in reply to Chris 🌱

I had the same thought as I'm currently in the process of choosing a vacuum. Wanted to go for a robot, but decided I'd still need to clean manually once a week, so it's probably best to start with a manual thing. Thinking about a wet-dry vacuum so I don't have to wipe the floor separately, but then again — I have two large carpets. No clue what I'm gonna do, but I sure am excited.