Germany 🇩🇪 has begun its transition to free software :opensource:

"The government offices of Schleswig-Holstein have migrated from Microsoft Exchange and Outlook to Open-Xchange and @thunderbird."

"The final phase involves migrating from #Windows :windows: to a #Linux distribution." :linux:

pplware.sapo.pt/linux/alemanha…

#OpenSource #Microsoft #outlook #software #SoftwareLibre #LibreOffice #win10 #Win11 #FOSS #FLOSS #Alemanha

"I'm a programmer with a Fediverse account. I spend *most* of my programming hours on this OS:"

Please consider boosting for a more statistically significant result.

#poll #programming #operatingsystems

  • Microsoft Windows (14%, 111 votes)
  • MacOS (33%, 249 votes)
  • Linux or Unix (51%, 386 votes)
  • Other (Please comment.) (0%, 3 votes)
749 voters. Poll end: 2 weeks ago

in reply to Scott Francis

> The final thing interesting for the triple ratchet is that it nicely combines the best of both worlds. Between two users, you have a classical DH-based ratchet going on one side, and fully independently, a KEM-based ratchet is going on. Then, whenever you need to encrypt something, you get a key from both, and mix it up to get the actual encryption key. So, even if one ratchet is fully broken, be it because there is now a quantum computer, or because somebody manages to break either elliptic curves or ML-KEM, or because the implementation of one is flawed, or..., the Signal message will still be protected by the second ratchet. In a sense, this update can be seen, of course simplifying, as doubling the security of the ratchet part of Signal, and is a cool thing even for people that don't care about quantum computers.
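To make that mixing step concrete, here's a minimal sketch (illustrative only - not Signal's actual KDF or labels; all names here are made up):

```python
import hmac
import hashlib

def mix_keys(dh_ratchet_secret: bytes, kem_ratchet_secret: bytes) -> bytes:
    """Derive one message key from two independent ratchet outputs.

    As long as *either* input stays secret, the derived key stays secret
    too (assuming HMAC-SHA256 behaves as a pseudorandom function).
    """
    # HKDF-extract style: key an HMAC with a fixed, domain-separating salt
    # and feed it both secrets. The salt/label is a placeholder.
    salt = b"hybrid-ratchet-demo"
    return hmac.new(salt, dh_ratchet_secret + kem_ratchet_secret,
                    hashlib.sha256).digest()

# Even if the ML-KEM side were broken, an attacker would still need the
# classical DH secret to recover the message key (and vice versa).
classical = bytes(32)      # stand-in for the DH ratchet output
post_quantum = bytes(32)   # stand-in for the KEM ratchet output
message_key = mix_keys(classical, post_quantum)
```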

As DJB has pointed out, nobody seems able to fully trust the new PQC crypto yet, which is why we have to do these hybrids. He and others are actively resisting an attempt by the NSA to have non-hybrid PQC standardized.

Which leaves me feeling like this isn't as amazing as they want you to believe. They did a lot of work because they can't trust the new crypto to fully replace the classical.

Periodic reminder: just because a method is marked as deprecated doesn't mean it suddenly stops working. It just means the method is scheduled for removal in a future release. Your code is not broken; it is guaranteed to work until that future release, which gives you time to plan the migration to the new method or workflow. Please don't go into panic mode over a deprecation. #PHP #OSS #maintenance #deprecation
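The same idea in Python, since the principle is language-agnostic (the function names below are made up purely for illustration):

```python
import warnings

def old_parse(text):
    """Deprecated, but still fully functional until it is actually removed."""
    warnings.warn("old_parse() is deprecated; use new_parse() instead",
                  DeprecationWarning, stacklevel=2)
    return new_parse(text)  # still works, it just nags you

def new_parse(text):
    return text.strip().split(",")

# Calling the deprecated function emits a warning but still returns a result.
print(old_parse(" a,b,c "))  # ['a', 'b', 'c']
```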

Also, it's totally not legal to upload it, which is quite unfortunate. That's why I'm doing the 100-download / 24-hour limit for this one. Darn. I thought you could, but nope, it's considered redistribution, which is, like, a big no-no. Of course, I have no idea how that applies once it's out of support, but will I take that risk? Probably not; at least for now I won't host a full Win10 ISO on my own personal site that's super traceable back to me.
I expect this upload to take hours. Probably 2 or 3? At only 5-10 Mbps upload? Yeah, it's not good.

Wow, a file-sharing service that lets people download while I'm uploading? What a weird concept! I wonder how this works. The downloader sees the full file size right at the start, so does your download just slow to the speed of my upload? Because on Xfinity, let me tell you, that's crap. But if someone wants to download as I upload? Be my guest, because this link expires in 24 hours or after 100 downloads, ha. wormhole.app/PpboXr#Qee8hu0BID…

October of 1987 was when the Braille 'n Speak was first released. This device had a major impact on the community and on my life on several levels, as it helped to launch my career in this field. I'm working on a blog post celebrating this which will hopefully get posted to the Blazie Technologies blog in the next day or so. @BlazieTech #blind

CEO of Signal talking about sexualizing Pikachu was not on my bingo card
RT: mastodon.world/users/Mer__edit…

Help Steph, a trans woman, & her girlfriend move to a safer, trans-inclusive, protective state - away from where they currently are (Ripon). Safety for trans people there has become increasingly questionable and uncertain, and they need as much help as possible from us as they find temporary jobs and plan the move. Steph has a YouTube channel as well, which I'll link below.

The donations will be used for: rent, tickets, transportation, HRT etc.

Goal: 11424/14000

GoFundMe:

gofundme.com/f/support-stephs-…

Youtube:

youtube.com/@teletraaniv?si=bF…

Tags: #GTFOmyState #crowdfunding #crowdfund #fundraiser #fundraising #MutualAid #TransCrowdFund

> A Milwaukee Brewers 'Karen' has been fired after making a racist comment to a Dodgers fan

> Her employer, Milwaukee-based staffing company ManpowerGroup, confirmed to reporters Wednesday that she was no longer with the company. Additionally, according to reports, Make-A-Wish Wisconsin says Kobylarczyk also resigned from her role on its board of directors.

You just know those brown kids weren't getting their wishes

I was wondering which social media app would cave first. While OpenAI has built AI Slop TikTok (Sora), Pinterest will now add settings that allow people to limit how much AI generated content they see in their feed.

techcrunch.com/2025/10/16/pint…

I'm going to publish this ISO, I think. It's clean: no driver integrations, just 3 updates that get it to build 19041.6456. That's the last security patch for Windows 10 before extended security updates come out. Most likely a similar LTSC ISO will make it to archive.org, too. The big difference is that I include a downgrade.bat for you: if you're on Windows 11, it will install Windows 10 on it as if nothing had happened (thanks to the Up-down project for the idea on how this works; without them I couldn't have done it). So, yes. This ISO will be special because if you want to "downgrade" or even "upgrade" your Win10 or Win11 install, you can with it, and it will fix any corrupted system files along the way too. I'm doing this as my last homage to Windows 10. It needs to be laid to rest with honor.
Oh, if anyone's willing to mirror the file, that would be much appreciated, as my bandwidth is limited. I might choose a file uploading service, but I wanted folks to have direct links without me paying for a plan on one where file traffic limits can be an issue. Ultimately, anyone is also encouraged to re-upload it to archive.org (I just don't have the time to) if they wish.
in reply to Aryan

@Aryan @alexchapman OK. I'm using filemail now. Ugh. What I like: they let you pause or resume the transfer. filemail.com/ - I had to get a paid plan for 7 days and will probably cancel after, but it's enough for this to at least be uploaded somewhere anonymous that way. Sorry for all the trouble! No idea why the WebRTC thing closed, as the window was right there in my Alt+Tab and I hadn't suspended Firefox - but you know, Firefox might suspend the tab; I wouldn't be surprised if they do that now too, like Chrome.

Well well. While I do some QA work, my PC is busy building me a new install.ESD file with all Win10 security updates to date integrated into the Windows 10 LTSC 2021 ISO. This will be useful for any re-imaging or installs when I want to do a clean one, considering the longest part of Windows Update is downloading the updates themselves.
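For the curious, this kind of offline update integration is commonly done with DISM: mount the image, add the .msu packages, commit, then export to a compressed ESD. Here's a rough sketch of that generic flow, driven from Python just for convenience - not necessarily my exact workflow, and all paths, indexes, and update file names are placeholders:

```python
# Generic DISM-based servicing flow; paths and update names are placeholders.
import subprocess

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

mount_dir = r"C:\mount"
wim = r"C:\iso\sources\install.wim"
esd = r"C:\iso\sources\install.esd"

# 1. Mount the Windows image (index 1 here; pick the edition you need).
run(["dism", "/Mount-Image", f"/ImageFile:{wim}", "/Index:1",
     f"/MountDir:{mount_dir}"])

# 2. Inject each servicing stack / cumulative update (.msu) offline.
for msu in [r"C:\updates\ssu.msu", r"C:\updates\lcu.msu"]:
    run(["dism", f"/Image:{mount_dir}", "/Add-Package", f"/PackagePath:{msu}"])

# 3. Commit the changes and unmount.
run(["dism", "/Unmount-Image", f"/MountDir:{mount_dir}", "/Commit"])

# 4. Export to a recovery-compressed ESD (the install.ESD that setup uses).
run(["dism", "/Export-Image", f"/SourceImageFile:{wim}", "/SourceIndex:1",
     f"/DestinationImageFile:{esd}", "/Compress:recovery"])
```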

EchoBox Player | AppleVis applevis.com/apps/ios/utilitie…

Well, great.* #Netflix wants its own video podcast library, so it’s partnering with... Spotify.

tubefilter.com/2025/10/15/netf…

*This is sarcasm. Spotify is awful. But you know that already, right?

Many exciting projects made it into the latest @NGIZero funding round.

My personal favorites are OMEMO v2 (#TWOMEMO 😜) for Converse.js (by @jcbrand) and GTK4 support for @phosh.

nlnet.nl/news/2025/20251016-se…

#XMPP #Jabber #Converse #OMEMO

TIL: there's a W3C Candidate Recommendation draft for CSS markup that conveys different properties of text and controls on the web via audio cues and changes to TTS volume, speech rate, tone, prosody and pronunciation - kind of like attributed strings in iOS apps. It's called CSS Speech. w3.org/TR/css-speech-1/ #Accessibility #A11y #Blind
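To give a flavour of what the draft defines, here's a small illustrative snippet. The property names come from the CSS Speech Module Level 1 draft; the selectors are made up, and as far as I know no mainstream browser/screen reader combination actually honours these yet:

```css
/* Illustrative only: class names are hypothetical; properties are from
   the CSS Speech Module Level 1 draft. */
.error-message {
  voice-stress: strong;        /* spoken with extra emphasis */
  voice-rate: slow;
  cue-before: url(alert.wav);  /* earcon played before the text is read */
}

.account-number {
  speak-as: digits;            /* "1 2 3 4" rather than "one thousand..." */
  pause-after: 500ms;          /* short silence after the value */
}
```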

in reply to Paweł Masarczyk

There are people who seem to feel really strongly about this being a good thing for screen reader users, and I must admit to being bewildered about why. Websites changing aspects of screen reader output may be equitable, if we compare it with the way webpages can alter visual presentation through fonts and other aspects. But to me it feels entirely inappropriate to cross that boundary between the browser as the user agent and accessibility software in order to interfere with very personal settings.

Meanwhile on iOS, the related accessibility attributes are being used to achieve outcomes nobody wants or needs, like spaces between all the digits of a credit card number. @miki @prism

in reply to James Scholes

I can see the point for e.g. text-to-speech APIs built into the browser, maybe even read-aloud features. But the case for screen reader compatibility seems to be built on the foundational assertion that SR output is monotonous and can't be "livened up" by brands.

As assertions go, I think that is both true and exactly how it should be. I don't use a screen reader for entertainment. I can think of few things more obnoxious than a marketing person thinking that my screen reader should "shout this bit."

Many web authors can't even label stuff correctly. Why on earth would we expect them to treat this sort of feature with informed respect? @miki @prism

in reply to Drew Mochak

@prism I think without ARIA or an equivalent (like more things built into the web platform), the web would've continued galloping forward with all the same UI widgets and design patterns but with no way to make them even halfway accessible, and we'd be left even more behind than we are now.

By contrast, I don't think the inability for a website to change the pitch of NVDA is a legitimate blocker to anything worthwhile. @Piciok @miki

in reply to James Scholes

@jscholes I have felt for a while that only having TTS for everything is pretty limiting. So, you know, I use Unspoken. Problem solved. I haven't really thought to myself, self, it would be great if the website author could script some nonverbal feedback for me instead of what I am currently hearing, or anything like that. So this may well be a solution in search of a problem.
@Piciok @miki
in reply to Drew Mochak

@prism @jscholes @miki I don't see the point because everyone has different ways they like to hear things. People choose the verbosity and speech options that work for them and to have something override that would be irritating. I also feel that this is part of a larger conversation about the perceived need for sighted people to feel like our experience of the web is vastly different. This is why we have a lot of unnecessary context already and here is another example.
in reply to Mikołaj Hołysz

@silverleaf57 @prism @jscholes I, for one, would certainly appreciate it if I could hear exactly which parts of a line of code have "red squiggles" under them, preferably with different styles for errors and warnings. This is something sighted people have. Visual Studio Code solves this with audio cues, but those are per line, not per character range.
in reply to Mikołaj Hołysz

@miki I think it's a trap to suggest that such problems should currently be solved only through speech properties and auditory cues within individual apps. Expressive semantics on the web have only been explored at a surface level so far, and it's a complete stretch to go from "We don't have the ARIA properties to convey complex information," to "Let's have every application implement its own beeps and boops."

Imagine having to learn the sound scheme for Gmail, then Outlook, then Thunderbird. Then going over to Slack, which also has unread state (albeit for chat messages rather than emails) but uses an entirely different approach again.

All the while, braille users are getting nothing, and people who struggle to process sounds alongside speech are becoming more and more frustrated. Even if we assume that this is being worked on in conjunction with improvements to ARIA and the like, how many teams have the bandwidth and willingness to implement more than one affordance?

We've already seen this in practice: ARIA has braille properties, but how many web apps use them? Practically none, because getting speech half right and giving braille users an even more subpar experience is easier. Your own example highlights how few apps currently let you control things like verbosity and ordering of information.

CSS Speech could turn out even worse. A product team might opt to implement it instead of semantics because the two blind people they spoke to said it would work for them, and never mind the other few million for whom it doesn't. They'll be the people complaining that there's no alternative to the accessibility feature a team spent a month on and thought was the bee's knees.

@silverleaf57 @prism @Piciok

in reply to Mikołaj Hołysz

@miki There is much shared (or adjacent) iconography in the world, with a lot more power and opinion behind it than the sounds for a web app are going to get. Despite that, icon fatigue is a real and common user complaint; it seems bizarre to be leaning into such an issue purely in the name of equity. @silverleaf57 @prism @Piciok
in reply to James Scholes

@jscholes @silverleaf57 @prism Efficiency, not equity.

Words are a precious resource, far more precious than even screen real estate. After all, you can only get a fairly limited number of them through a speaker in a second. We should conserve this resource as much as we can. That means as many other "side channels" as we can get: sounds, pitch changes, audio effects, stereo panning (when available) and much more.

Icon fatigue is real. "Me English bad, me no know what delete is to mean" is also real, and icons, pictograms and other kinds of pictures are how you solve that problem in sighted land.

Obviously, removing all labels and replacing them with pictograms is a bad idea. Removing all icons and replacing them with text... is how you get glorified DOS UIs with mouse support, and nobody uses those.

in reply to Mikołaj Hołysz

@jscholes @silverleaf57 @prism Everything said above also applies to braille; braille cells are even more precious than words from a speaker. It's a shame that we can abbreviate "main landmark heading level 2" to something more sensible, but we can't abbreviate "unread pinned has attachment overdue" if those labels are not "blessed" by some OS accessibility API.
in reply to James Scholes

@miki Note that I'm specifically responding to your proposed use case here. You want beeps and boops, and I think you should have them. But:

1. I think you should have them in a centralised place that you control, made possible via relevant semantics.

2. I don't think the fact that some people like beeps and boops is a good reason to prioritise incorporating beeps and boops into the web stack in a way that can't be represented via any other modality.

@silverleaf57 @prism @Piciok

in reply to James Scholes

@jscholes @silverleaf57 @prism Centralized beeps and boops don't make much sense to me. Each app needs a different set, let's just consider important items on a list. That can mean "overdue", "signature required", "has unresolved complaints", "student not present", "compliance certification not granted" or something entirely different. We can't expect screen readers to have styles for all of these, just as we can't expect browsers to ship icons for all of these.
in reply to Mikołaj Hołysz

@miki Sure. Or it can just mean "important" in a domain-specific way that's shared across apps in that domain. We should be taking advantage of that to make information presentation and processing more streamlined, before inventing an entirely new layer and interaction paradigm that hasn't been user tested and will require text alternatives anyway. @silverleaf57 @prism @Piciok
in reply to James Scholes

@miki As noted, I think people who can process a more efficient stream of information should have it available to them. That could be through a combination of normalised/centralised semantics, support for specialised custom cases, and multi-modal output.

My main concern remains CSS Speech being positioned as the only solution to information processing bottlenecks, which I think is a particularly narrow view and will make things less accessible for many users rather than more.

Good discussion, thanks for chatting through it. @silverleaf57 @prism @Piciok

in reply to James Scholes

@jscholes At the same time, I think the chances that CSS Speech completely takes over the industry and we all stop doing text role assignments are quite low.
explainxkcd.com/wiki/index.php…

So I am decidedly meh about this. It could help but probably won't.
@miki @silverleaf57 @Piciok

in reply to James Scholes

@jscholes @prism @miki @silverleaf57 I found the concept intriguing and am myself in two minds about it. On one hand, I wouldn't mind having the speech experience augmented by things that aren't words. I could imagine browsing a product's details page and reading about all of its features with tiny earcons indicating whether a certain feature is supported or not, rather than hearing "Yes" and "No" every time. This could even be played at the same time as the readout begins. To be fair, I also wouldn't mind having the pronunciation of tricky words that are important for proper understanding and functioning in a domain predefined, just so I could learn it. Character and number processing might come in handy too - recently an issue was opened on the NVDA GitHub about a feature to read combinations of capital letters and digits as separate entities for the benefit of ham radio operators and their call signs. Some kinds of numbers I also find easier to remember when they come digit by digit, etc. The ability to define the spatial location of a voice on the stereo spectrum could be useful for presenting spatial relationships in some advanced web apps (thinking of scientific contexts, design, web text and code editors, etc.).

As you say, however, I wouldn't expect this to be widely adopted by web devs who already struggle with the proper use of ARIA. The trade-offs could also be significant, especially if this becomes the sole way of conveying information: blind users with a profound hearing impairment missing crucial information because it was read out too quietly, too fast, or at a pitch that takes away some of the frequencies they can no longer discern; neurodivergent people confused by sudden changes and unfamiliar sounds on top of the exotic keyboard shortcut choices they already have to remember; and so on. This could create a situation similar to the one WCAG SC 1.4.1 addresses, where colour is used as the only way of conveying information.
in reply to Paweł Masarczyk

This already exists though, as a screen reader feature. Kind of. NVDA has an add-on called Unspoken that replaces the announcement of roles with various sounds - there's a different one for checked vs. unchecked boxes, for instance. JAWS did (does?) something similar with the shareable schemes in its Speech and Sounds Manager. Granted, not a lot of people do this, but the ability is there if people want it. VO, TB and cvox also have earcons - they're not used for this purpose, but they could be. Having this under the user's control rather than the author's control does seem better. It prevents, for instance, a developer deciding to be super obtrusive with ads. I do see the potential for it to be good - being the author of the content, the author would be able to convey more nuanced concepts... it just feels like a thing most people wouldn't use, and most of the people who'd try would end up being obnoxious about it.

@jscholes @miki @silverleaf57

in reply to Drew Mochak

@prism @jscholes @miki @silverleaf57 Yes, this is what I'm thinking too. Also, the add-ons are great - I experiment with Earcons and Speech Rules, which is another add-on with tons of customization. Bringing it in as a core feature would signal it as an industry standard though, and from there it would be possible to explore whether any external APIs could augment it in any way.
in reply to James Scholes

@jscholes @prism @miki @silverleaf57 As for this being widely adopted, I expect some CSS properties could be mapped to aural cues at the browser level, just like some HTML elements carry implicit ARIA properties by default. This would have to be carefully considered. Regarding sound cues: this would have to be based on some kind of familiarity principle, where the sounds are ones most users already know or resemble the action they are supposed to represent - think emptying the Recycle Bin on Windows. I really like the approach of JAWS representing heading levels through piano notes in C major; it sounds logical, but on the other hand not everyone is able to recognize musical notes at random.

I'm not convinced about the marketing value of this - I mean creating brand voices etc. It sounds fun but no more than that, at least in the screen reader context. I guess inclusion in advertising is another can of worms that might derail the discussion. I'm looking forward to when NVDA finally incorporates some kind of sound scheme system, because we will then be able to talk about some kind of standard, given that JAWS and, to some extent, VoiceOver and TalkBack make use of that already. I guess then the discussion could revolve around this being complementary to something like aria-roledescription or aria-brailleroledescription, assigning familiar sounds and speech patterns to custom-built controls.
in reply to James Scholes

@jscholes @prism @miki @silverleaf57 I think inviting @tink and @pixelate into the discussion is a great idea as they might have valuable insights on this. On a related note: something that's been running around my head is how many Emojis could be faithfully represented by sounds.
in reply to Paweł Masarczyk

@jscholes @prism @miki @silverleaf57 @tink So, I generally like beeps and boops. All shiny and stuff. But the web is made by sighted people, and they will get things wrong. I'd rather we have our own tools, like NVDA's Unspoken add-on, and maybe have earcon packs for it to, for example, add aural highlighting for VS Code, or make Gmail shiny - stuff like that.
in reply to André Polykanine

I am a semi-accomplished musician. I can play 70 percent of the things I want to. I have written thousands of MIDI files, but there are ideas I couldn't get out of my fingers, because I can't go back in time to 1967 and have an ensemble play the piece. It wouldn't sound saturated and vintage if I did it modern either; it'd sound halfway there, because digital can only do analog so well. It'll get close, but I'm also not a virtuoso on the oboe or the flute. I can just play keys. There aren't samples good enough to sound like an old production-music orchestra.
in reply to André Polykanine

Yeah, I'd love to be able to direct it, like, really musically. It obeys key and BPM, but not always chord changes or note-level things. Also, whether it will do what you ask depends on the style; some genres have limits. Like, say you ask for complex polyrhythmic, 2005, 80s-influenced funk. It'll get confused, because it doesn't know if you want 80s drums with 2005 production, or 2005 chords with 1980s production...
in reply to In Volo 🐦‍⬛ Vagabunden Musik

By default, you have to pick a single sound device for JACK and can then only use that one - unless you venture into the adventure of config files or PulseAudio bridges. As a rule, that's not something an ordinary user can set up on the fly.
With PipeWire, all hardware devices can simply be linked together.

“This echoes the testimony of Keith Siegel, who told an audience during an event with the UK’s Chief Rabbi this July: ‘The more they tried to convert me to Islam, the more my Jewish identity became stronger, and my belief became stronger.’” 🎗️

thejc.com/news/israel/hostages…