This hearing test app is awesome. If you connect it to Apple Health, you can use the audiogram it generates with AirPods. So cool. apps.apple.com/ca/app/mimi-hea…

Sounds like Reddit wants to follow in Twitter's footsteps and make API access paid. It sounds like they're specifically targeting companies scraping the site to train AI language models, while academics and developers of bots or apps that help people use the site would still get free access. We'll have to see how all of this turns out in practice techcrunch.com/2023/04/18/redd…

Brandon reshared this.

in reply to Pitermach

Reddit followup: The developer of Apollo (by far the most popular third-party iOS client) had a meeting with them about the API changes. Full details are in the post below, but the TL;DR is that Reddit doesn't want to get rid of third-party clients; rather, they want to charge those clients for their API usage, i.e. to offset the cost of the ads their users wouldn't be seeing. So in the case of Apollo, it couldn't have a free version anymore. What this means for apps like Dystopia or Nthantech's client remains to be seen. reddit.com/r/apolloapp/comment…

I would love some support on this #CKEditor5 issue github.com/ckeditor/ckeditor5/…

So many accessibility errors can be fixed with #ATAG. #A11YFirst does a great job of making a #WYSIWYG editor better.

There are times you just don't want to update your favorite app because you've heard from your friends that the new version broke accessibility. It's possible to turn off automatic updates for specific apps in the Google Play Store. Here's how: erisilebilirandroid.com/turn-o…

🚨 Only 2 days left to apply for the Responsible AI Challenge!

Don't miss the chance to showcase your innovative AI project. Total prize pool is now $100,000 — including mentorship from industry leaders to develop your idea. future.mozilla.org/builders-ch…

Did you ever wonder how Tutanota's encryption is able to protect all your data? Check out our new encryption page with lots of interesting facts! 🔒😍

We ♥️ #encryption!

👉 tutanota.com/encryption

#security #privacy #email #data

One hundred percent agree! In my 65 years on this planet, I have never called for, or taken in, a thing I own for repair, with the exception of front-end alignments on my vehicles, because I don't have the equipment to do those. One small correction to this flier, however: it should read, "If you aren't allowed to fix it, you don't own it." Not everyone has the knowledge or interest to fix things. And yo, #Sony #Photography, this flier applies to you, big time.

Good morning everyone. I hope you're all having a good day/night.
Please remember, you're all very much loved and amazing. Always be kind to one another. We need to look out for each other.
Today we are gonna be talking about the parsec.

A parsec is a unit of length used to measure vast distances on a cosmic scale. It's an important unit that professional astronomers use to calculate the distance between stars and galaxies, and its definition is rooted in the concept of parallax. Essentially, a star located one parsec away from us will appear to shift by precisely one arcsecond (1/3600th of a degree) when viewed from two different vantage points along the Earth's orbit. By measuring this parallax angle, we can use trigonometry to determine the distance to the star. The word "parsec" is derived from the terms "parallax" and "second", reflecting the scientific principles on which the unit is based.
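The neat thing about the definition above is how simple the math comes out: for small angles, the distance in parsecs is just the reciprocal of the parallax angle in arcseconds. Here's a tiny Python sketch of that relationship (the function name and the 3.26 light-years-per-parsec constant are my own additions, not from the post):

```python
def distance_parsecs(parallax_arcsec: float) -> float:
    """Distance in parsecs from a parallax angle in arcseconds (d = 1/p)."""
    return 1.0 / parallax_arcsec

# A star showing a parallax of 0.5 arcseconds is 2 parsecs away.
print(distance_parsecs(0.5))  # 2.0

# One parsec is roughly 3.26 light-years, so that star is about
# 6.5 light-years from us.
print(distance_parsecs(0.5) * 3.26)
```

This is also why a star one parsec away shows exactly one arcsecond of parallax: plug p = 1 into d = 1/p and you get d = 1.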


This week, the Texas Senate is expected to debate and pass two bills that would take effect at all public colleges and universities in Texas on Sept. 1, 2023:

• Senate Bill 18, to eliminate the granting of tenure for all faculty members, and
• Senate Bill 17, to ban DEI offices, officers, programs, and practices.

And the Senate has already passed:

• Senate Bill 16, to restrict academic freedom and critical thinking in courses and research.

aauputaustin.files.wordpress.c…

Tune in today at 16:00 CET for the hands-on demonstration of Interfacer, the new digital infrastructure for #fabCities.

🔗 interfacerproject.dyne.org/eve…

Video feed will be relayed via @peertube in the Lounge channel of the Dyne @matrix space: matrix.to/#/#dyne:matrix.org 👾

Recently I have been playing with various GUIs for the Whisper transcription software. Buzz has definitely won the showdown. It's almost completely keyboard accessible, give or take the toolbar, which needs exploring through object navigation in NVDA or the equivalent in your screen reader of choice. It handles downloading models, FFmpeg conversion, and everything else that would otherwise require working on the command line; it works with Whisper.cpp as far as I can tell; and it can be localized to other languages.
Now I can finally listen to podcasts in all the languages I can't speak. I love it when technology enhances my access to knowledge and helps me do my work even better for those who benefit from it.
github.com/chidiwilliams/buzz
#Accessibility #Audio #Languages #OpenSource


in reply to Steffen

@radiorobbe @Radiojens Yes, I wanted to suggest object navigation in NVDA as well. I usually navigate to the toolbar, which is one object above and to the left of the table with the loaded file, and find the "Open Transcript" button there. I also hope that either the software will receive the needed improvements or that somebody writes an NVDA addon for it. Apart from the toolbar, the edit box with the transcript is the other inaccessible part, but I just export the result to a txt file and work with a regular text editor from there.
in reply to Paweł Masarczyk

@radiorobbe I tried to navigate to the toolbar and to "Open Transcript File", then I clicked with NVDA+numpad Enter, but nothing happened. I also simulated a left mouse click on a completed transcription. Both had no result; no window opened. So how do you export exactly, Pawel? I found the task files in AppData/Local/Buzz/Buzz/cache, and they seem to be a raw file of the transcript, but it's almost unreadable, with lots of garbled characters; I don't know its real format.
in reply to Jens Bertrams

@Radiojens @radiorobbe I use the regular Whisper, or actually, I think, the Whisper.cpp implementation, with the large model. Here are the steps:
1. I import the file using ctrl+o
2. I set up the options for the transcription job as I like them: the mechanism is Whisper, the model is large, the language is set to automatic detection, and all the rest is left at defaults;
3. I click Run and wait. I will eventually be moved to the table where the progress on the task is reported.
4. I wait for it to finish i.e. to say "Completed" in the second column.
5. I navigate to the toolbar. I use the laptop layout of NVDA so I'll try to explain it using that keymap:
A. I call the navigator focus to my system focus by pressing NVDA+Backspace;
B. I navigate out of the table object - NVDA+Shift+Up arrow;
C. I navigate then two objects to the left - NVDA+shift+left arrow twice, so that I find the toolbar;
D. I expand that object with NVDA+shift+down;
E. I navigate to the right using NVDA+Shift+right arrow until I find the "Open Transcript" button;
F. I call the focus to my navigator object - that's NVDA+Shift+M;
G. I activate the button by pressing NVDA+Enter;
6. A new window opens where the text of the transcript is presented in an inaccessible edit field that you can't handle with a keyboard. The "Export" button can be found by pressing Tab. You can pick the format you need from the context menu that pops up and save the file anywhere you choose.

I hope this helped. If not, and you find it a good idea, we could try to communicate somewhere else and coordinate a remote session so that I could try and see what the problem might be on your end.

in reply to Paweł Masarczyk

@radiorobbe Hey you two, I've got one more question: does anyone know what "word_level_timing" and "initial prompt" mean? There seems to be no README for Buzz. And: is it possible for the program to recognize when a new person is speaking? That would be great for a podcast with three people talking, so I could offer a text version for people with hearing difficulties.
in reply to Jens Bertrams

@Radiojens @radiorobbe Hello! Word-level timing allows you to generate timestamps for each word, so that you get per-word subtitles, which apparently looks cool in some social media contexts. The initial prompt, if I'm correct, lets you give Whisper some context about the recording so that it can better adapt the recognition. As far as I know, Whisper by itself can't do diarization, i.e. identification of individual speakers. I'm afraid the much more trivial consequence of this is that everything is output as one huge block of text, regardless of the number of voices.
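To make the word-level timing idea concrete, here's a minimal pure-Python sketch that turns per-word timestamps into per-word caption lines. The data below is made up and only loosely shaped like Whisper's word-level output; the field names and the helper are my assumptions, not Buzz's actual export format:

```python
# Hypothetical per-word timestamps, loosely shaped like Whisper's
# word-level output; the exact field names are an assumption.
words = [
    {"word": "Hello",   "start": 0.00, "end": 0.42},
    {"word": "and",     "start": 0.42, "end": 0.60},
    {"word": "welcome", "start": 0.60, "end": 1.10},
]

def per_word_captions(word_list):
    """Render one caption line per word: 'start-end: word'."""
    return [
        f'{w["start"]:.2f}-{w["end"]:.2f}: {w["word"]}'
        for w in word_list
    ]

for line in per_word_captions(words):
    print(line)
# 0.00-0.42: Hello
# 0.42-0.60: and
# 0.60-1.10: welcome
```

Without word-level timing you'd typically get one timestamp per segment instead, which is why the plain output reads as longer blocks of text.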

INVESTIGATION | Cuca Gamarra mediated to get the PP government in La Rioja to place party loyalists in companies eldiario.es/politica/cuca-gama…

Yes, twitter really did mockingly label the Canadian national broadcaster CBC as "69% Government-funded Media".

Similar broadcasters like the BBC and Australia's ABC surely have to follow the CBC and stop using twitter, now that its toxic clown of an owner has shown what contempt he has for them.

If he is prepared to use his platform to paint childish graffiti on their accounts, I can't see why any serious entity would want to have anything to do with twitter.

#CBC #twitter #twittermigration #BBC #ABC


He received care that any other drunk could only dream of, so why not take revenge on them. Best to forget that wreck as quickly as possible!
irozhlas.cz/zpravy-domov/milos…

The History of Ancient Japan: The Story of How Japan Began, Told by Those Who Witnessed It (297-1274)

openculture.com/2023/04/the-hi…

Getting used to using the OpenAI API. I want access to the different GPT-3.5 models, and the best client I've found for that is github.com/Bin-Huang/chatbox
Anything better? The accessibility of this one is iffy, but it can be worked around.
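For anyone going the same route, here's a minimal sketch of what a request to the chat completions endpoint looks like, using only the standard library. The payload shape follows OpenAI's chat API; the helper function name is my own, and the actual network call is commented out since it needs an API key:

```python
def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON payload for OpenAI's chat completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }

# Pick whichever 3.5 variant you want by name, e.g. "gpt-3.5-turbo".
payload = build_chat_request("gpt-3.5-turbo", "Hello there!")
print(payload["model"])  # gpt-3.5-turbo

# To actually send it, POST the JSON to
# https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <your API key>" header.
```

Switching models is then just a matter of changing the `model` string, which is exactly the flexibility a client like chatbox exposes through its UI.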

We’re part of an open letter asking the UK government to rethink the Online Safety Bill to protect end-to-end encryption and respect privacy.

Read the letter here: element.io/blog/the-uks-online…
