An important PSA for people who are active on #Bluesky and who, upon hearing that the ICE account was officially verified, are saying: "I will just block it."

Blocking on Bluesky is NOT PRIVATE: it's very easy to see who is blocking any account by visiting sites that list that information.

I took a screenshot from clearsky.app, listing all the accounts that are blocking ICE (I pixelated avatars and usernames for privacy purposes).

The safest bet is to mute (that info is private) 😫

In a way, #Putin got even more than he could ever have wished for

All for free, courtesy of #Trump

Alliances shattered, internal threats, everyone really disliking the US, even talk of war within #NATO

It's unbelievable how much damage that senile dic(tator) has done within a year

I really hope we learn from this... But history has shown otherwise, I guess

#USPol

This post by Bruce Schneier contains so many thoughtful soundbites:

> The question is not simply whether copyright law applies to AI. It is why the law appears to operate so differently depending on who is doing the extracting and for what purpose.

> Like the early internet, AI is often described as a democratizing force. But also like the internet, AI’s current trajectory suggests something closer to consolidation.

schneier.com/blog/archives/202…

in reply to Jamie Gaskins

I like looking at this through the concept of "enjoyment", which was originally developed in Japan, I believe.

From that point of view, copyright only applies to a work when it is used for "enjoyment", for its intended purpose. If the work is primarily entertainment, it applies when the consumer is using it to entertain themselves. If the work is educative, it applies when the consumer is using it to learn something. It does not apply when the work is used for a purpose completely unrelated to its creation, such as testing a CD player on an unusual CD, demonstrating the performance of a speaker system, training a language model to classify customer complaints, etc.

(This isn't a legal perspective, not even quite in Japan I believe, but it's a useful lens through which we can look at the world and which people can use to decide on policy.)

I'm wasting water and energy on having GPT compare the SpeechPlayer code from Espeak's integration with the standalone one. What makes it sound different? What makes me prefer the standalone SpeechPlayer to the one inside Espeak? Why, why, why. I still don't know, but I've been trying both side by side and comparing. And despite mine having some language over-articulations right now (perhaps "combobox" could be less open-mouthed), I still prefer it. Why? Why? Why! It's the same DSP. I checked the code: same 9 files, same wave generator concept. So what changed?
in reply to Tamas G

I've been testing several words and strings by writing them in the text input, then pressing speak and examining the IPA output, but I don't yet know how to deal with specific speech details. For example: the diphthong "ai" sounds perfect, but in the diphthong "ei" I can hardly hear the semivowel I, even though nothing changes in the IPA output except for the first vowel going from a to e. Also, the diphthong "ui" has an undesired gap between the two vowels.
in reply to Cleverson

@clv1 Thanks for testing, this is really helpful.
The “ay” style diphthong sounds clear because the first vowel and the second vowel are very far apart, so the slide into the second part is obvious. With the “eh to ee” style diphthong, those two vowels are much closer, so the second part can be hard to notice even though the text-to-IPA output looks basically the same.
This one we can mostly solve in the language packs: we can treat the second part as a glide (a “y” sound) instead of a full vowel, and we can also tweak the “eh” vowel a little so it doesn’t sit so close to “ee.” Both changes make the second part stand out more.
The “oo-ee” style diphthong having a gap is usually a different issue: our synthesizer adds a tiny boundary pause between sound segments to keep speech crisp. That helps with many consonants, but it can accidentally create a hole between two vowels. Packs can reduce that pause, but that affects everything in the language. The clean fix is in the engine: detect vowel-to-vowel transitions and don’t insert that boundary pause there, just blend smoothly. I'll build that into the next update.
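To make the engine-side fix concrete, here's a minimal sketch in C++. Everything in it is hypothetical: the real engine's segment representation will differ, and `Segment` and `SmoothVowelTransitions` are illustration names, not actual code from the project.

```cpp
#include <string>
#include <vector>

// Hypothetical segment type; the real engine's internal
// representation will differ. This only illustrates the idea.
struct Segment {
    std::string phoneme;   // IPA symbol, e.g. "e", "i", "k"
    bool isVowel;          // set from the phoneme tables
    int boundaryPauseMs;   // tiny pause inserted after this segment
};

// Drop the boundary pause only on vowel-to-vowel transitions,
// so diphthongs like "ui" blend instead of leaving a gap.
void SmoothVowelTransitions(std::vector<Segment>& segments) {
    for (size_t i = 0; i + 1 < segments.size(); ++i) {
        if (segments[i].isVowel && segments[i + 1].isVowel) {
            segments[i].boundaryPauseMs = 0; // blend smoothly
        }
    }
}
```

The key point is that the check happens per transition, so consonant boundaries keep their crispness and only vowel-to-vowel joins get blended.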

TL;DR Most EV batteries will last longer than the cars they're in. Battery degradation rates are better (meaning: lower) than expected. Slow charging is better. Drive an EV and don't worry about your battery.

„Our 2025 analysis of over 22,700 electric vehicles, covering 21 different vehicle models, confirms that overall, modern EV batteries are robust and built to last beyond a typical vehicle’s service life.“

geotab.com/blog/ev-battery-hea…

#GoodNews #EV #Battery


PQ leader says Legault's resignation further evidence of need for independent Quebec

cbc.ca/news/canada/montreal/qu…

tl;dr: the leader of the PQ is full MAGA. He believes in Santa Claus. He believes that Quebec and its francofascism would be safe in the US dictatorship. Remember, MAGA implies hating anyone who speaks something other than English.

#cdnpoli #qcpoli


Why Poilievre and Carney Are Silent on Grok’s Child Sexual Abuse

thetyee.ca/Opinion/2026/01/15/…

The former is just in his cesspool, running his con. The latter is just a hypocritical, cowardly elite who would have no problem with Internet legislation when they can't even enforce the basics.

#cdnpoli

I think people are going to like this SAPI engine. Although, be prepared for 17 times 4 voices added to your list? What's that? Uh, math. 68. So 68 voices at once? Yeah. We use a tokenizer to create voices, though, so they're only made for the languages you have. I suppose someone could remove language files from their pack and keep just the languages they care about; then the voice list will shrink. That's the idea. Dynamic tokenization, what a concept. We read the pack and only expose the voices actually available, not a predetermined list of languages.
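For the curious, here's roughly how that dynamic enumeration could work, sketched in C++17. This isn't the actual engine code: the `Voice` struct, `EnumerateVoices`, and the four variant names are made up for illustration. The point is only that the voice list is derived from whatever language files the pack contains.

```cpp
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical voice descriptor; real SAPI registration carries
// more attributes (vendor, gender, registry token, etc.).
struct Voice {
    std::string language; // taken from the language file name
    std::string variant;  // one of the built-in variants
};

// Build the voice list from whatever language files the pack
// actually contains, rather than from a hard-coded language list.
std::vector<Voice> EnumerateVoices(const fs::path& packDir) {
    // Variant names are placeholders, not the engine's real set.
    static const char* kVariants[] = {"male", "female", "child", "whisper"};
    std::vector<Voice> voices;
    for (const auto& entry : fs::directory_iterator(packDir)) {
        if (!entry.is_regular_file()) continue;
        std::string lang = entry.path().stem().string();
        for (const char* variant : kVariants) {
            voices.push_back({lang, variant});
        }
    }
    return voices; // 17 language files x 4 variants = 68 voices
}
```

Delete a language file from the pack and its four voices simply never get enumerated; nothing else has to change.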
in reply to Noel Romey

@ner Oooh. Yeah, the frontend doesn't expose any synthesize functions yet, mainly because that would create a direct dependency on LibEspeak compiled into the DLL, and that's when we get more into the GPLv2 vs. v3 scuffles. Ugh. This way I can link Espeak outside the DLL, alongside it, and just use the SAPI wrapper to delegate communication between the two rather than giving my frontend a synthesize method. For now it's our best shot if we want the language flexibility of modern Espeak's IPA tokenization, sadly. But it's not a bad alternative.
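To illustrate the out-of-process idea (and only the idea; whether this arrangement actually settles the licensing question is a separate matter), here's a tiny sketch. `TextToIpa` is a hypothetical helper, not the project's actual mechanism, and a real wrapper would hold a persistent pipe or IPC channel instead of spawning a process per call. The `-q` and `--ipa` flags are real espeak-ng CLI options (suppress audio, print IPA phonemes).

```cpp
#include <cstdio>
#include <string>

// Sketch only: delegate phonemization to an external espeak-ng
// process instead of linking libespeak into the SAPI DLL.
// A real wrapper would escape `text` properly and reuse a
// persistent pipe rather than launching a process per call.
std::string TextToIpa(const std::string& text) {
    std::string cmd = "espeak-ng -q --ipa \"" + text + "\"";
    std::string ipa;
    if (FILE* pipe = _popen(cmd.c_str(), "r")) { // Windows CRT popen
        char buf[256];
        while (fgets(buf, sizeof(buf), pipe)) ipa += buf;
        _pclose(pipe);
    }
    return ipa;
}
```

Because the GPL'd binary runs as its own process, the DLL itself never contains Espeak code; it only talks to it, which is the separation being described above.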