Items tagged with: accessibility
Facebook's useless #AI based photo descriptions have made it to Threads; and, because they have, #blind people have to suffer for it. People think that posting photos on Threads without alt text is ok because of the utterly useless auto-generated descriptions. I thought I was rid of it when I abandoned Facebook in 2017. Please ask your friends, followers, and acquaintances to describe #photos on Threads.
So like I just have to ask. For people that are super critical of AI for accessibility, what do you expect instead? Do you want blind people to have human ... guides or whatever that will narrate the world around you? Do you want humans to describe all your pictures? Videos? Porn? Because that's about the only other option. And you may return with "Well audio description." And I return with "You think people are going to describe every YouTube video out there? Or old TV shows like Dark Shadows?" Because honestly that's what it'd take. If AI were *not* around, if we want *that* kind of access, that's what we'd have to ask, of darn near every sighted human in the world. And I just don't feel comfortable with demanding that of them.
Now, we'll see what Apple does to give us what will hopefully be even better image descriptions. Imagine a 3B model that is made with **high quality** image and description pairs, trained to do nothing but describe images. Apple has done pretty darn good without LLMs so far, so maybe they'll surprise us further. But my goodness, I'd much rather have something that, yes, makes me *feel* included, maybe a tad bit more than it actually *does* include me. And it's for each and every blind person to decide for themselves if they want to use AI for image, and probably soon, video, descriptions, and what they're willing to trust it with. But for us to get this much real, human access, I just hope people who are detracting from AI understand that we who use AI are now used to having images described, and well, soon videos. It's just something that I don't think people should deny quickly.
#AI #accessibility #blind #description #video #LLM
The release of #GNOME47 is imminent and our #translators have done their usual amazing work! There was a lot of #Accessibility work this cycle and #translations are a big part of that.
With a few days left, this is a great cycle to improve the user experience for non-English #ScreenReader and #Braille users.
In just a few minutes, Inclusive Design 24 will be getting underway... 24 hours of #free talks about everything related to inclusive design:
inclusivedesign24.org/2024/
#accessibility #design #code #UX #XR #gaming #disability #a11y #id24
#Accessibility #Android #Google
#id24 starts in a few hours; are you ready?
Join a global community of accessibility and design experts for 24 hours packed with live talks, advice and strategies.
We're proud to be a Gold supporter of the conference and have several of our team involved including @tink, @IanPouncey, @patrick_h_lauke and @JoeLamyman.
Explore the full line-up: inclusivedesign24.org/2024/sch…
#id24 #Accessibility #InclusiveDesign #AccessibilityConference
For now this "#deltachat_web" project is merely meant as proof that the deltachat desktop frontend/UI has no dependency on electron anymore.
With a few nice side-benefits:
It allows us developers to use other browsers' dev tools, for example to improve #accessibility, and to bring back automated UI #testing.
Last but not least, this serves as a stepping stone to making a #tauri version of #deltachat_desktop.
Stay tuned for the #nlnet funded #delta_tauri project.
Speech to Text on Linux?!?!
I just found "Speech Note" when searching the Software app.
Installed it via flatpak.
flathub.org/apps/net.mkiol.Spe…
Used the "English (Vosk Large)" and "English (Vosk Small)" language model with very decent results. There are loads of models to choose from.
All processed locally. No network needed!
This is great!
#accessibility #SpeechToText #Linux #debian #flatpak #flathub #SpeechNote
#Chrome updated, and now:
* I have to restart #NVDA every single time I launch the browser, otherwise it refuses to read anything in the Chrome window.
* Someone wrapped a bunch of the UI in a landmark region, the name of which includes: the name of the current page, the name of the browser, and the name of the current Chrome profile. NVDA speaks this region's name and role whenever I move focus from the page document to the address bar. E.g. "Some Page Title - Google Chrome - James (<my email address>)".
* When Alt+Tabbing into Chrome, the word "window" is now spoken by NVDA after the window title.
#Google #accessibility strikes again.
I just submitted the following #Chrome #accessibility bugs, relating to using the latest update with #NVDA, to #Google:
1. I have to restart NVDA every single time I launch the browser, otherwise it refuses to read anything in the Chrome window. I.e. Chrome is not running. I launch it, NVDA doesn't read anything. I restart NVDA, and it works again, until I next close and restart Chrome.
2. A bunch of the UI has been wrapped in a landmark region, the name of which includes: the name of the current page, the name of the browser, and the name of the current Chrome profile. NVDA speaks this region's name and role whenever I move focus from the page document to the address bar, or to other parts of the UI like the menu button by pressing Alt. E.g. "Some Page Title - Google Chrome - James (<my email address>) region".
3. When Alt+Tabbing into Chrome, the word "window" is now spoken by NVDA after the window title, where it wasn't before.
4. While focused on the "Chrome" menu button after pressing Alt, the first press of Up or Down Arrow expands the menu as expected, but doesn't move focus onto the last or first item respectively. The same applies when pressing Space or Enter to expand the menu while focused on the button.
5. Within the Chrome menu, items that have an attached sub-menu do not convey this fact to the screen reader. E.g. NVDA only announces "Help e 19 of 22".
6. The Chrome menu can no longer be opened with Alt+F.
7. When using first-letter navigation to move to a sub-menu within the Chrome menu, where there is only one match for the letter typed, focus doesn't move into the sub-menu when it is expanded. E.g. open the Chrome menu, and press L. NVDA announces, "More tools l 18 of 22", but when I press Down Arrow, it says: "Name window… 1 of 5".
Most of the #Chrome #accessibility issues in my previous post are caused by Microsoft's UI Automation (UIA) API. If you want an #NVDA add-on to completely disable UIA in Chrome windows, here you go:
The twelfth edition of @inclusivedesign24 #id24 is just over 2 days away.
Free, online, no registration, no product pitches. Just show up (on time or not).
Countdown timer is on the schedule:
inclusivedesign24.org/2024/sch…
YouTube playlist waiting for you:
youtube.com/playlist?list=PLn7…
An important nuance I haven't seen so far though is that even within a marginalized group, opinions can vary wildly about the degree to which something is considered "a problem". The fact #twitch tried to get rid of "blind playthrough" in 2020 because it was considered ableist language is a great example of this: nobody in my circle thought this was problematic; we all had a good laugh and basically said they probably had bigger problems to worry about.
Now however, only a few years later, I see more and more sentiments shifting where that is concerned, asking writers not to use blind as a synonym for ignorant, stupid, or incompetent. Same with terms like crippling debt. And as opposed to the Twitch example, this time it's actually #PWD who are making these points.
The question now becomes: Did times change, and did people get more offended by this / more hurt by this? Or is this simply yet another example of people finally coming forward about something that's irked them for decades?
I myself know where I stand on this, but I'd be a hypocrite if I decided to, in this case, decide everybody thinks as I do, where I normally always preach caution about homogenizing #accessibility.
This post doesn't really go anywhere, I just thought it was an interesting bit of contemplating :))
From a selfish #accessibility viewpoint, things I would like to stop existing:
Discord.
PDFs.
statista.com
Powerpoint/Impress.
What are your "my life would be better if abled users didn't inexplicably choose X" preferences?
Boosts welcome, replies more so.
#tech #technology #Linux #accessibility @mastoblind @main
Okay, so I just have to rant about something that I think is going to hold the Linux Desktop back a bit, especially for blind users. That is config files. Specifically, editing some file that's got no kind of linter, or checker, to make sure that whatever the user put in it works and doesn't leave one without speech.
So, this is specifically about Speech Dispatcher, which is a package on Linux a bit like Microsoft's SAPI. Of course, there's that new one, Spiel, that's being worked on, but I've not tried it yet. So, you can run spd-conf to set up a config file, but a lot of folks tell me that, in order to increase the default speech rate for things like games that use speech-dispatcher, web apps that speak through speech-dispatcher, and self-voicing apps, I'd have to edit the config file.
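For anyone hunting for the same thing, the edit folks usually mean is a couple of lines in the user config that spd-conf generates. A minimal sketch, assuming the stock speechd.conf option names and the usual per-user path (double-check against your distro's commented template):

```
# ~/.config/speech-dispatcher/speechd.conf
# Rate runs from -100 (slowest) to 100 (fastest); the default is 0.
DefaultRate  50

# Optionally pin the synthesizer module, e.g. espeak-ng.
DefaultModule espeak-ng
```

After saving, something like spd-say "test" is a quick way to confirm the daemon picked up the change without having to launch a whole self-voicing app.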
Now, that's not awful for me, I can do it. But there really, really needs to be a GUI for it. And if you ask *me* why I don't make it myself, I'll tell you that if it's so easy, you should have made it with the snap of *your* talented fingers, instead of spending all that time being a reply-guy. Windows has a speech properties dialog. Mac has Spoken Content in System Settings. Linux should have something similar. And, if it turns out that there is and I just don't have the proper package installed, I'll gladly change my tune to "distros should have this installed by default."
I've also seen this in Emacs packages like Ellama, where you have to set up Emacs Lisp calls to the right function with the right parameters inside your use-package block, and it's hard for me to figure out how to add the OpenAI-compatible service that I have to Ellama. And AI hasn't helped yet.
So, having things being customizable is amazing. It really is. There are so many keyboard configuration settings, for example, that I think blind people are going to love playing with. But goodness, there needs to be something to change your system TTS stuff, and hopefully one day we can have a command to turn on and off the screen, like the iOS screen curtain so we can maintain privacy.
#Linux #accessibility #blind #LinuxDesktop #foss #tts
Only now learning about Susan Banks, a prominent #accessibility advocate who "regularly interacted with [large game development studios], pushing for better options and designs", "helped to revolutionize games journalism", and even had an award named after her.
Apparently, she never actually existed.
ign.com/articles/a-prominent-a…
#news
A much more detailed post about TalkBack's new Image description feature using #AI #Gemini
Okay just, like, right off the cuff here. But why does Vispero, a company making blindness software/hardware, want videos for their Big Thing? Why not just text? Emails? Like, what? Why? Meh, whatever. I'll submit mine, I guess. I'm sure some of you know what I'm gonna suggest. Feel free to submit your own ideas; goodness knows we need more grand ideas for screen readers.
#accessibility #blind #Braille #JAWS #FreedomScientific #ScreenReader
Join the State of the Browser #SOTB 2024 conference on Saturday, 14 September, at the Barbican Centre in London.
This one-day, single-track conference, organized by London Web Standards, covers the modern web, accessibility, web standards, and more.
TetraLogical's director, @SteveFaulkner, will be speaking on "No Industry for Old Men," reflecting on key developments in the web accessibility space over the years.
Because I can’t be arsed to re-up my Dragon license, kudos to @siblingpastry for the most recent test of voice control support for wrapped labels:
tpgi.com/should-form-labels-be…
The Promise and Pitfalls of Web Accessibility Overlays for Blind and Low Vision Users
#accessibility #overlays #research
research paper (PDF)
researchgate.net/profile/Garre…
Oh, thank goodness not having a fully functional screen reader isn’t a blocker for Fedora 41. We wouldn’t want them to miss their 10th anniversary of shipping an operating system without a fully functional screen reader, after all.
ar.al/2024/06/23/fedora-has-be…
#Fedora #RedHat #IBM #a11y #accessibility #screenReader #orca #linux #openSource #FOSS #ableism masto.ai/@phoronix/11307885166…
I wrote a @TPGi blog post last year on 20 #DigitalAccessibility books. Since then, quite a few more have been published, so I've blogged about 19 more.
So, I'm genuinely curious. NVDA now has the ability to show text formatting in Braille. I don't mean through dots 7 and 8, or HTML-looking tags (although it can do those too now), but through the formatting symbols created for UEB, or whatever Braille table you use. It also can now show paragraphs, either using spaces or a pilcrow sign.
So, can JAWS do either of these? I'm seriously wondering, because people are *always* saying that JAWS' Braille support is the best in the industry. And I just want to make sure I'm using it the best I can. I also have Leasey, so if Leasey has features to help with that, @hartgenconsult, I'd love to hear about them. I know there's the BrailleEase thing that I've not taken a look at yet. There's a ton of things I need to learn about Leasey.
For now, I know that iOS can show Braille formatting in the Books and Notes app, and the Mac can show it... in Safari as well I think. Linux... Well, Linux can't, besides the dot 7 and 8 stuff.
Braille formatting has really come far from the post I did like years ago, and I'm super happy about that. Hopefully it keeps going, especially in the age of multi-line displays.
There are two basic approaches to making novel/custom widgets accessible to assistive tech, and one of them is usually wrong:
1. accessibly describe the mechanics and visual presentation;
2. accessibly convey the functional interactions and purpose.
If you find yourself thinking "I need an aria-live region to explain this", then chances are you don't; you need to conceptually shift from 1 to 2. Live regions are the last resort.
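To make the contrast concrete, here is a sketch of both approaches for a simple disclosure widget (element names, labels, and ids are invented for illustration):

```html
<!-- Approach 1 (usually wrong): narrates mechanics and visuals -->
<div tabindex="0" aria-label="Chevron icon, rotates when clicked to reveal the text below">
  Shipping details
</div>

<!-- Approach 2: conveys purpose and state through semantics -->
<button aria-expanded="false" aria-controls="shipping-details">
  Shipping details
</button>
<div id="shipping-details" hidden>…</div>
```

With approach 2, most screen readers announce something like "Shipping details, button, collapsed" and report the state change when aria-expanded is toggled, so no live region is needed to narrate what just happened.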
Sharing videos of Paralympic athletes is great, but please don't forget them after the event. Next time you visit a coffee shop, use public transport, or use a public restroom, think about whether that Paralympic athlete would be able to access it, and take the necessary action to make it accessible; even just telling the owner or authority to make it accessible will be a great help.
#Vmware #Broadcom #Accessibility #Apache #NVDA
I think that sums up FB’s attitude towards accessibility quite neatly, and they are far from alone in this.
#SocialMedia #Accessibility
Facebook says its new app icon was just a glitch after all 9to5mac.com/2024/08/30/faceboo…
Check Out my Latest Guide I wrote for @iaccessibility on Ray-Ban Meta Smart Glasses!
Discover how these smart glasses are redefining accessibility for blind and visually impaired users with AI-powered features.
iaccessibility.net/ray-ban-met…
#Accessibility #RayBanMeta #SmartGlasses #AssistiveTechnology #VisuallyImpaired #Innovation #TechForGood #Inclusion #WearableTech
Anyone running macOS 15 and want to test to see if this `aria-activedescendant` bug is really fixed?
bugs.webkit.org/show_bug.cgi?i…
It’s not that I don’t trust Apple folks, it’s just that I have learned never to take the “Fixed!” assertion at face value.
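For anyone who wants to poke at it without digging through the report, a minimal `aria-activedescendant` listbox to smoke-test with VoiceOver (ids invented for illustration) looks something like:

```html
<ul role="listbox" tabindex="0" aria-activedescendant="opt-2">
  <li role="option" id="opt-1">First option</li>
  <li role="option" id="opt-2" aria-selected="true">Second option</li>
</ul>
```

If VoiceOver reports the second option when the list receives focus, and follows along as a script moves `aria-activedescendant` between the ids, the basic plumbing is working.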
#a11y #accessibility #Safari #ARIA