Items tagged with: screenReader


Are there already #Screenreader tools that also describe images, i.e. using AI?


In-Process is out, featuring news on:

- NVDA 2024.4.1, which has recently been released
- NVDA Contributor Documentation Survey
- We head out and about with NV Access
- NV Access board member Emma is a Change Making Finalist!
- And a deep dive into the Add-on Compatibility Process

All this and more, now available: nvaccess.org/post/in-process-1…

#NVDA #NVDAsr #ScreenReader #Blog #News #Newsletter #Survey #CurrentEvents


Calling all #blind #Windows users! I'm looking for recommendations on #accessible SSH clients that work well with #1Password. I've been using Windows Terminal, but I'm not entirely satisfied. What do you use for SSH on Windows? #Accessibility #AssistiveTech #ScreenReader #BlindTech #tech
@1password @mastoblind @main


Did you catch up on our last In-Process blog at the end of October? If you haven't yet read it (or even if you have), you'll be pleased to know I've now FIXED the table of contents links so you can easily get to:
- the 2024.4 release info
- Info about add-ons
- Emphasising stability
- Benefits for users and developers in our add-on approach
- And all about reporting font attributes

nvaccess.org/post/in-process-3…

#NVDA #NVDAsr #ScreenReader #News #Blog #Info


NVDA 2024.4.1 has been released! This is a patch release to fix a bug when saving speech symbol dictionaries. Pressing the Close button in the Symbol Pronunciation dialog will now save the symbols dictionary and close the dialog.

Full info and download at: nvaccess.org/post/nvda-2024-4-…

#NVDA #NVDAsr #ScreenReader #NewVersion #News #Update


If you've been looking for that next semi-cheap music-making fix but haven't decided what to get, consider Ableton Move.
If you look at the official videos, literature and documentation, you'd be fooled into thinking that it's not actually accessible. This is incorrect.
It runs a web-server for helping you manage samples, recordings and sets, but it also has an undocumented screen-reader which I demonstrate in this video.
Next to Komplete Kontrol, I can say that this has been the most innovative, fun and game-changing piece of hardware I've owned and I thoroughly enjoy working with it.

#InspiredBySound - Let's Move! (Ableton Move Accessibility Overview) youtu.be/p8IbinbOhY4
Excerpt from Peter Kirn's blog about Move:
‘How do I access Move with a screen reader?
Andre Louis has a walkthrough for you, as pointed out here in comments! And as always, it’s terrific. This is honestly worth a watch for sighted users, too, to understand how these interactions work – and it’s required viewing if you work in instrument design.’
cdm.link/ableton-move-guide/
#Ableton #AbletonMove #Accessibility
#ScreenReader #A11y #Blind


If you're in Adelaide today, come over to U-City for See Differently Tech Fest! James and Quentin (both camera shy) are here showing off NVDA as well as exhibits from See Differently (formerly the Royal Society for the Blind) and many other organizations.

#NVDA #NVDAsr #ScreenReader #TechFest #TechFest2024


A Day with JAWS 2035: When Your Screen Reader Scripts Itself

The morning light filters through your smart windows, casting a warm glow across the room. Your ambient AI assistant hums gently, “Good morning, Lottie. Would you like to prepare your workspace for the day?”

“Yes, please,” you say, stretching as the AI readies your home office. The blinds adjust automatically, leaving just enough sunlight to boost your energy without causing glare on your neuro-linked glasses. You smile, reflecting on the advances in technology since the days of fiddling with manual screen reader settings and customized scripts. Those days feel like a distant memory, thanks to JAWS’ AI-powered self-scripting feature—your personal assistant that knows exactly how to handle your work routine.

“Let’s get started,” you say, and JAWS springs to life, adjusting the audio tone to your preferred voice—smooth, confident, efficient. As your desktop computer powers on, JAWS begins analysing the applications you’ve opened, sensing your usual email, project management software, and a new program you’ve recently started exploring.

JAWS’ Real-Time Autonomous Scripting: A Custom Fit

“Good morning, Lottie. I’ve detected a new application in use: ResearchHub. Would you like me to generate an initial script for it?” JAWS asks in a gentle tone, its voice coming through the bone conduction implant in your ear.

You nod. “Yes, go ahead and script it.” This isn’t just any regular software; ResearchHub is dense, designed for researchers and developers with an intricate layout. In the past, navigating such software would have required hours of manually creating scripts or waiting for accessibility support. But today, JAWS’ AI-driven self-scripting feature allows it to analyse this program’s unique design and build custom commands as you go.

“Noted. I’ll adapt based on your usage patterns,” JAWS replies, instantly highlighting an unlabelled menu item. “I’ve labelled this as ‘Data Analysis.’ Would you like a shortcut assigned for quick access?”

“Absolutely,” you reply. Moments later, JAWS has created a keystroke, Control-Shift-D, which will take you directly to the Data Analysis section.

As you dive into your tasks, JAWS continues observing your interactions, quietly scripting shortcuts and macros that save you time with each click. You switch over to an email thread about your latest project, and JAWS dynamically adjusts, making sure to read each new message aloud with just the right level of detail. It’s responsive, intuitive, and seems to understand the flow of your work better than ever.

Adaptive Behaviour Learning: Anticipating Your Needs

JAWS has learned over time what works best for you—like knowing when you prefer concise summaries over detailed descriptions or when to read full email threads aloud. Today, though, as you work through complex calculations in ResearchHub, JAWS picks up on repeated actions, noting your frequent need to access specific data fields.

Without you having to prompt it, JAWS speaks up, “Lottie, I’ve noticed you’re navigating back and forth to the Analysis Settings panel. Would you like me to create a macro for this?”

“Yes, that’d be great,” you reply, surprised at how quickly JAWS anticipates these needs. It assigns a simple command, Control-Alt-S, making it even easier for you to access the settings. With each task, JAWS quietly observes, creating personalized shortcuts and learning how to refine your workflow without interrupting your focus.

Your screen reader feels less like a tool and more like an assistant that adapts to your habits, reducing unnecessary actions and helping you move seamlessly between applications. You take a moment to appreciate the leap from manually scripting these shortcuts to having them generated in real-time, tailored perfectly to your unique style.

Dynamic Accessibility Adjustment: Visual Recognition on the Fly

Halfway through the day, you open a report in a new format. The document is packed with complex graphics, diagrams, and untagged elements—historically a nightmare for accessibility. But JAWS, equipped with advanced AI-powered visual recognition capabilities, is ready.

“Diagram detected: This appears to be a bar graph comparing quarterly performance,” JAWS announces, automatically analysing the content. “Would you like a detailed audio description, or should I just provide the key values?”

“Let’s go with the key values,” you respond, eager to save time. In seconds, JAWS summarizes the data, translating it into accessible content without needing additional third-party support. When you encounter unlabelled buttons in another application, JAWS instantly identifies them and provides real-time labels, adjusting the accessibility on the fly.

The thought crosses your mind how revolutionary this is. You’ve moved past needing someone else to make documents or software accessible for you. Instead, your screen reader adapts and scripts the solution independently, as if it’s actively learning how best to support you.

A Collaborative Community of Scripts

As the day wraps up, JAWS asks, “Lottie, would you like to share the custom scripts I created for ResearchHub with the community repository? Other users might find them useful.”

“Yes, please,” you reply. Knowing that the scripts you and JAWS have tailored today could now benefit others brings a sense of community to your day. In the past, each user’s customization stayed personal, but today, JAWS’ community sharing feature allows anonymized scripts to be uploaded to a shared repository, where other users can download them for similar applications. This feature isn’t just a convenience—it’s a small way to contribute to something larger than yourself.

You smile, thinking about the ripple effect of this community effort. As JAWS users across industries contribute their self-generated scripts, the database grows, improving access for everyone.

Reflecting on Progress: A New Kind of Independence

As you finish your work, JAWS reads aloud your notifications, wrapping up your day with a recap. You reflect on how far technology has come since those early days of assistive devices. Back then, using a screen reader required you to work around its limitations, painstakingly scripting or finding ways to access inaccessible software. Today, your screen reader does the heavy lifting, allowing you to focus on your work without the constant barrier of inaccessible content.

Looking back, you remember those initial frustrations, the hours spent tinkering with manual scripts, and the reliance on tech support for inaccessible programs. Now, JAWS’ AI-powered self-scripting has not only given you more control but also reinforced your independence. It’s not just a tool—it’s a partner in productivity.

As you power down, you realize that technology has not replaced your determination; it has amplified it. JAWS has become a proactive assistant, predicting your needs, adjusting to your habits, and making the inaccessible accessible. With the day’s tasks complete, you feel a renewed sense of autonomy—knowing that the tools at your fingertips truly work for you, enhancing not just your productivity but your entire work experience.

The screen fades to black, and the AI’s voice recedes, leaving you with a quiet appreciation for a world where technology supports your strengths, not your limitations.

#Accessibility #AccessAssistive #AI #AssistiveTechnology #Blind #Disability #JAWS #ScreenReader


@pixelate
@FreakyFwoof

asking my #blind friends using a #screenreader to help with some input on this, or a boost, please

#a11y


Ever find it hard to keep track of links, buttons, and form fields on a cluttered webpage? NVDA’s Screen Layout feature helps! When enabled, items like links and buttons stay in their visual flow, appearing on the same line. When disabled, NVDA separates each link and control onto its own line, making navigation smoother for some users. Toggle Screen Layout on and off with NVDA+V to customize your browsing experience! #NVDA #Accessibility #ScreenReader #TechTips


Code Factory and Acapela: alternative speech synthesizers for NVDA
Anyone who installs the free screen reader NVDA (Non Visual Desktop Access) has to live with the free eSpeak speech synthesizer or the Windows voices. These are usable, but rather less suited to longer working sessions. They are deeply integrated into Windows, though...
merkst.de/code-factory-acapela…
#Android #Apple #Computer #Google #JAWS #Lautsprecher #Microsoft #NVDA #Screenreader #Test #VoiceOver #Windows


Our not so spooky Halloween In-Process blog post is out - No tricks, but plenty of treats - particularly from NVDA 2024.4, including all the highlights, and the new options for reporting font attributes. Plus a big walkthrough of NVDA's stability in the leadup to 2025.1. All that and more, here: nvaccess.org/post/in-process-3…

@mastoblind

#NVDA #NVDAsr #ScreenReader #Blog #News


My first thought
man, we're so spoiled on Mastodon. There are like four #accessible web apps, multiple accessible iOS and Android apps, at least two accessible Windows apps (even if they both have some issues), and a couple accessible Mac apps. As #screenreader users we can actually choose the one we like, rather than picking the one where the largest number of features actually work with a screen reader.

My Second Thought
Man, this should just be the default state of existence. #Blind folks need to demand more. But I get it, it's so easy to just accept the current state of accessibility as normal, and then be surprised by things that actually work.


Question for #screenreader users: do text emotes like kaomoji generally cause your tools to read out noise or annoying nonsense, or does it just not pronounce it? I am wondering whether it's okay to use them or whether I should go back to good old emoji (that, to my knowledge, get properly read out).

Like this one:
˚‧º·(˚ ˃̣̣̥᷄⌓˂̣̣̥᷅ )‧º·˚

#accessibility #totallyblind


Hey fellow #BlindMastodon users! I'm looking for recommendations on two types of writing tools for Windows:

1. Accessible text expanders
2. Word prediction software

I use both #JAWS and #NVDA screen readers. I tried Espanso (an open-source text expander), but it didn't work well with NVDA. I also tested Lightkey for word prediction, but it didn't seem accessible.

As an #ActuallyAutistic person, word prediction would really help with my autism-related communication challenges.

Have you found any text expanders or word prediction tools that work well with screen readers? What has your experience been like? I'd love to hear your recommendations and thoughts!

#Accessibility #AssistiveTechnology #TextExpander #WordPrediction #ScreenReader #Windows #AutismAccommodations #autism #blind @mastoblind @main


NVDA 2024.4 is now available, featuring many improvements in Microsoft Office, braille & document formatting. We encourage all users to update. There's more than will fit here, so please check out the full details & download from: nvaccess.org/post/nvda-2024-4/

Please note, after updating any software, it is a good idea to restart the computer. Restart by going to the Shutdown dialog, selecting “restart” and pressing ENTER.

#NVDA #NVDAsr #ScreenReader #Release #News #NewVersion #Update


It’s no secret that many of us in the blind community have embraced the rapid advances in Artificial Intelligence over the past two years. We've witnessed firsthand how these technologies can be a powerful force for good, especially within our community. AI-generated image descriptions have revolutionized how we navigate the online world, offering a perspective previously unimaginable. This impact is now undeniable, transforming how we interact with the world.

I’ve declared the kingdom of the blind a republic—perhaps prematurely, but only by a small margin. With AI empowering us to perceive the digital world in new ways, we are no longer ruled by limitations, but actively shaping our future. Anthropic’s recent launch of ‘computer use’ marks the first steps into a new phase of AI evolution—one where AI agents begin to act independently on our behalf, initiating a shift in how we interact with technology.

As AI continues to evolve, so too will the Assistive Technology that many of us depend on. I envision a future where this intelligence becomes a true companion, guiding us seamlessly through both digital landscapes and real-world challenges. We may be just two years away from seeing JAWS, NVDA, or SuperNova transform into true Assistive Intelligence 1.0—or perhaps it will take a little longer. If AI has taught us anything, it’s that progress comes both more slowly than we expect and faster than we can possibly imagine.

What follows is my first attempt at describing how a screen reader of today could take the first steps towards becoming an Assistive Intelligence. If anyone wants to build it, I’d love to help if I can. Whatever you think, let me know:

“Proposed AI-Powered Self-Scripting Feature for JAWS Screen Reader

Objective
The suggested feature seeks to integrate advanced AI-driven "computer use" capabilities, like those developed by Claude (Anthropic), into the JAWS screen reader. This functionality would enable JAWS to autonomously create and refine custom scripts in response to real-time user interactions and application environments. The aim is to enhance accessibility and productivity for visually impaired users, especially when navigating non-standard or otherwise inaccessible software interfaces.

Feature Description
The self-scripting capability would empower JAWS to analyse user interactions with applications, identify recurring actions or inaccessible elements, and generate scripts that optimize these processes. By enabling JAWS to perform this autonomously, users gain seamless and personalized access to applications without manual intervention, allowing for an enhanced, efficient experience.

The self-scripting feature will be powered by the following core functions:

1. Real-Time Autonomous Scripting: JAWS would use AI to observe user interactions with applications, especially non-accessible ones, and automatically generate scripts that improve navigation, label untagged elements, and streamline frequent tasks. For example, if a user frequently navigates to a particular form field, JAWS could create a shortcut to this area.

2. Adaptive Behaviour Learning: This feature would allow JAWS to recognize patterns in a user’s interactions, such as repeated actions or commonly accessed elements. JAWS would adapt its behaviour by creating custom macros, enabling faster navigation and interaction with complex workflows (a minimal sketch of this idea follows this list).

3. Dynamic Accessibility Adjustment: Leveraging Claude’s approach to visual recognition, JAWS could interpret visual elements (like buttons or icons) and provide instant labelling or feedback. This would be valuable in software with minimal accessibility features, as it enables JAWS to make live adjustments and effectively “teach itself” how to navigate new environments.

4. Community Script Sharing: Self-generated scripts, once verified, could be anonymized and made available to other users via a shared repository. This would foster a collaborative environment, empowering users to contribute to a broader database of accessibility scripts for applications across various industries.
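
To make functions 1 and 2 more concrete, here is a minimal sketch of the observe-then-propose loop, written in Python purely for illustration. Real JAWS scripts are written in the JAWS scripting language, and every name below (AdaptiveScripter, observe, create_macro) is invented for this example:

```python
from collections import Counter

class AdaptiveScripter:
    """Hypothetical watcher: counts focus events per UI element and
    proposes a macro once an element is visited often enough."""

    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.visits = Counter()   # element name -> focus count
        self.macros = {}          # element name -> bound keystroke

    def observe(self, element: str):
        """Record one focus event; return a proposal string when a
        repeated pattern emerges, otherwise None."""
        self.visits[element] += 1
        if self.visits[element] >= self.threshold and element not in self.macros:
            return f"You keep returning to '{element}'. Create a macro for it?"
        return None

    def create_macro(self, element: str, keystroke: str):
        """Bind a keystroke so the user can jump straight to the element."""
        self.macros[element] = keystroke

# The user navigates to the same panel three times; on the third
# visit the scripter speaks up, much as in the story above.
scripter = AdaptiveScripter()
for _ in range(3):
    proposal = scripter.observe("Analysis Settings panel")
if proposal:
    scripter.create_macro("Analysis Settings panel", "Control-Alt-S")
    print(proposal, "-> bound to Control-Alt-S")
```

The real feature would of course sit on top of UI Automation events and an AI model rather than a simple counter, but the observe, propose, confirm, bind loop is the core of it.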

Value Proposition
This feature will address key challenges for visually impaired users, including the complexity of navigating inaccessible interfaces and the time-consuming nature of repetitive tasks. The ability for JAWS to generate its own scripts autonomously would mean:
1. Increased Accessibility: Improved interaction with non-accessible software interfaces.
2. Higher Productivity: Reduced need for external support or manual scripting, allowing users to accomplish tasks more independently.
3. Enhanced User Experience: Scripting and macro creation based on personal usage patterns leads to a more intuitive and personalized experience.

Technical Considerations
1. Performance: Processing real-time visual and user interaction data requires substantial computing power. A cloud-based model may be optimal, offloading some processing requirements and ensuring smooth, responsive performance.
2. Safety: Automated scripting must be closely monitored to prevent unintended interactions or conflicts within applications. Integration of safeguard protocols and user settings to enable/disable autonomous scripting will be essential.
3. Privacy: To ensure user data is protected, anonymization protocols and data privacy standards will be implemented. Data collected from user interactions would be handled in compliance with rigorous privacy standards, safeguarding user preferences and behaviour (a small sketch of such anonymization follows).
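
As one illustration of that anonymization step, here is a hypothetical Python sketch (all field names invented) of what stripping identifying data from a script before a community upload might look like:

```python
import hashlib

def anonymize_script(script: dict) -> dict:
    """Drop user-identifying fields and replace the author with an
    opaque hash before upload (illustrative field names only)."""
    identifying = {"username", "machine_name", "file_paths"}
    cleaned = {k: v for k, v in script.items() if k not in identifying}
    if "author" in script:
        digest = hashlib.sha256(script["author"].encode("utf-8")).hexdigest()
        cleaned["author"] = digest[:12]  # a real system would salt this or omit it entirely
    return cleaned

uploaded = anonymize_script({
    "application": "ResearchHub",
    "macro": "Control-Alt-S -> Analysis Settings panel",
    "author": "lottie",
    "username": "lottie",
    "machine_name": "LOTTIE-PC",
})
print(uploaded)  # no username or machine name; author reduced to a hash
```

A production system would go further (salting, human review before publication), but the shape of the transformation is the same.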

Conclusion
Integrating AI-powered self-scripting capabilities into JAWS would represent a significant leap in screen reader technology. By allowing JAWS to, when requested, autonomously learn, adapt, and script in response to user needs, this feature could provide visually impaired users with unprecedented control and flexibility in navigating digital environments, fostering both independence and productivity. The anticipated benefits underscore the feature’s potential to redefine accessible technology, turning the screen reader into an Assistive Intelligence.”

About the Author:

Lottie is a passionate advocate for the transformative potential of AI, especially within the blind and visually impaired community. She blends technical insights with a keen awareness of lived experiences, envisioning a future where AI doesn’t just assist but truly empowers. Her thoughtful reflections explore the shift from a "kingdom of the blind" to a republic, where emerging technologies like AI create new opportunities for autonomy and inclusion.

With a balance of optimism and critical realism, Lottie acknowledges the game-changing impact of AI tools like image descriptions while recognizing that more progress is needed. Her vision extends to the idea of "Assistive Intelligence," where screen readers like JAWS evolve into proactive companions, adapting to users' needs in real-time.

Known for turning complex ideas into actionable blueprints, Lottie is not just an observer of technological trends but a catalyst for innovation. Her proposals reflect a desire to elevate independence and productivity for blind users, pushing the boundaries of what's possible in assistive technology. Her insights continue to inspire conversations and shape the future of accessible tech.

I am the Blind AI, relying on AI every day to enrich my life. While my posts may occasionally benefit from AI assistance, the thoughts, perspectives, and final edits are entirely my own. AI is my tool, much like a calculator or spell-check, refining my expression but never replacing my voice.

#Accessibility #AI #AIsoftheBlind #Blind #ComputerVision #Disability #Innovation #JAWS #NVDA #ScreenReader #SuperNova


#accessibility question:

Does anyone know if HTML support tests for the Orca screen reader exist?
(like @SteveFaulkner's tests on github.com/stevefaulkner/scree…)

#Orca #screenReader #a11y #Linux




Just found this article stating that using the #language #attribute for individual words within a text (e.g. wrapping a single foreign word as <span lang="fr">déjà vu</span> inside an English sentence) is not a good idea when you want #ScreenReader users to have a good #UX. It's just overengineered #a11y.

I'm a bit surprised as you always read otherwise (as the article also mentions).

Are some screen reader users here that can share their experiences? I'm really curious now 🤔

netz-barrierefrei.de/en/lang-a…


As it has been a while and I could have done this better last time, here’s my #introduction. I’m a #blind #parent, #braille user, and #musician. I have been blind for coming up on 5 years, learned braille over the last not-quite-3 years, and spend most of my time juggling being a stay-at-home dad and staff for #OurBlind, mainly on our Discord and the r/blind subreddit. I also read a lot, mainly fantasy, mostly on my #kindle with the #voiceview #screenreader, though I also read on a #Brailledisplay.

ourblind.com/


In-Process is out, featuring the results of our braille survey, Elston Changemakers event, the 2024.4 Release Candidate, an NV Access All-Hands, and using Data Validation in Excel.

Read now at: nvaccess.org/post/in-process-2…

#NVDA #NVDAsr #ScreenReader #Blog #News #Software #FOSS #Elston #Excel


Ever find that sitting in front of your unnecessarily complex music rig is hard sometimes? Creative block hits and there's nothing you can do about it?
I certainly find that lately, more often than not.
One device has come into my life and changed a lot of that however. Ableton Move.

In this world-first video, I take you through making a beat without sight, just using the undocumented screen-reader function within the web-based Move Manager.

It's incredibly freeing to be able to just load a fresh set, be presented with four random sounds and perhaps one of them will inspire you so you just begin doing a thing that you had absolutely no plan to do before you started.
#InspiredBySound - Let's Move! (Ableton Move Accessibility Overview) youtu.be/p8IbinbOhY4
#Accessibility #Ableton #ScreenReader #Blind #Music #Composition


Woohoo! It's a pity I didn't try the #tuba #fedi client built with #gnome technologies earlier. It's very accessible with a #screenreader. I am running it on the desktop, using the keyboard to navigate.


The last time I made a video about #Ableton, it was to do with Note, their iOS music-making app.
This video is an Ableton-first, in which I bring you their newest piece of hardware, #AbletonMove.
It ships with a web-based screen-reader and I've been enjoying it for many months.
It uses sounds from Note, but in a hardware form.
32 poly-aftertouch pads, four tracks of MIDI (or samples), 8 knobs, USB-C for power and controlling Ableton Live, and a USB-A port for connecting class-compliant MIDI devices, should you wish to trigger it from a keyboard.

Please be advised that screen-reader support is currently an experimental feature and is not fully fleshed out.
Not all aspects of the experience are as desired and there are a few kinks, but it is very much better than nothing whatsoever, and I am extremely thankful to the team that made this possible.

Ableton themselves are not talking about this screen-reader function in any of their literature, but I think it's important enough that it deserves recognition for bringing an accessible groove-box to blind people in this way.

#InspiredBySound - Let's Move! (Ableton Move Accessibility Overview) youtu.be/p8IbinbOhY4
#Accessibility #ScreenReader


Even though I know #HTML inside out and use it virtually every day, I'm still reading #HTMLForPeople by @bw because it's a prime example of how a good guide should be written.

1. The book is simple and easy to follow, with relevant points explained well enough even for non-coders to understand.
2. Images are clearly described for #blind readers through the use of #AltText.
3. The website is easy to navigate with a #ScreenReader.
4. There are no annoying pop-ups or ads on the website.
5. The book is entirely free of charge.

htmlforpeople.com/


#FollowerPower I am trying to install #MateDesktop on a #Raspberrypi5 to set up a system running the latest version of the #Orca #Screenreader, and I am not able to get it working. I've read several step-by-step guides, including installing a dummy display driver, making changes to xorg.conf etc., but no success. I wonder if there's an OS image where Mate is already included. Any advice is much appreciated. #Linux #Accessibility


As we are about to extend our accessibility (AX) tests to the three screen readers JAWS, NVDA and VoiceOver, I am looking for an overview page that lists the differences.
Which elements are announced by which screen reader, and how does the output differ?
I saw such an overview linked on Mastodon a while ago. The website contained a table of elements and their output in the three named screen readers.
Maybe one of you knows where I can find such an overview.
#ScreenReader #jaws #nvda #voiceover #a11y


In-Process is now out, featuring all things braille! What's new, what's coming up and importantly, a reminder of the braille users survey! Plus, get your hands on the NVDA 2024.4 Release Candidate and all of the new features coming to everyone else shortly!

nvaccess.org/post/in-process-8…

#NVDA #NVDAsr #ScreenReader #Blog #News #Newsletter #WhatsNew #Information


The Release Candidate (RC) of NVDA 2024.4 is now available for download and testing. We encourage all users to download this RC and provide feedback. Unless any critical bugs are found, this will be identical to the final 2024.4 release.

Read more and download from: nvaccess.org/post/nvda-2024-4r…

#NVDA #NVDAsr #ScreenReader #Release #ReleaseCandidate #NewVersion


If you, dear #ScreenReader user, are deeply annoyed by the fact that, whenever you press Ctrl twice on a #YouTube page in #MicrosoftEdge, you get a modal popup about zooming in and out, then go to Settings, then Cookies and Site Permissions, then find the Magnify Image button. Press it and uncheck the checkbox about the magnify image shortcut. Huge thanks for this go to Ksenia Blake for rescuing me and all others in need. #Accessibility


Huge props to the #NVDAsr team for recognizing this and taking the steps to make #Braille a priority. Will be filling out their survey and hope other #Windows #ScreenReader users will do the same.

#Blind #LowVision #BlindMasto #BlindMastodon #BlindFedi @mastoblind


One of the themes which came through from the NVDA Satisfaction Survey earlier this year was to improve Braille support. To help us target the most needed improvements, we have created a short survey. If you use NVDA with braille at least some of the time, please consider completing this survey.

docs.google.com/forms/d/e/1FAI…

Please also share with anyone else who may be interested.
#NVDA #NVDAsr #ScreenReader #Braille #Accessibility #A11y #Survey #CommunityInput


If you're reading this, please #boost so I can get back to being federated with the #fediverse. Unfortunately, the data center I've been hosting with for years and years encountered some serious issues. I had to move suddenly, losing almost everything. I took the chance to change the software I use. But I've still lost all of my followers, all of my federation, etc! It's also taken down rblind.com, my passion project to get more #blind folks off #reddit. I'll be bringing that back over the coming days. In the meantime, hi! I'm a blind guy who uses the #NVDA #screenreader, loves #accessibility and works in the field, and reads tons of #fanfic, #litrpg, and #sciencefiction and #fantasy in my spare time. Nice to meet you!


Beta 5 of NVDA 2024.4 is now available for download and testing, for anyone who is interested in trying out what the next version of NVDA has to offer before it is officially released.

In Beta 5 we have fixed an issue where the custom multiple key press timeout was not honoured when repeatedly pressing the NVDA key. There are also updates to translations.

Read more and download from: nvaccess.org/post/nvda-2024-4b…

#NVDA #NVDAsr #ScreenReader #Beta #FOSS #PreRelease #NewVersion


🦈 JAWS (only) NO MORE

"In 2017 I embarked on a journey to improve and open the reporting of issues with JAWS support for Web Standards.

I continued to work on this after leaving TPGi, until now…"

#WebStandards, #Screenreader #accessibility #HTML

html5accessibility.com/stuff/2…


The @thunderbird team just released the first beta of their email client for Android. I've filed two accessibility bugs on GitHub, and within hours one is addressed and will be in the next beta release (with a pleasant thank you note to boot).

If you feel so inclined, please consider downloading the app and reporting accessibility problems. Especially if you're a native TalkBack user.

#a11y #Accessibility #AndroidAccessibility #Android #TalkBack #ScreenReader #Thunderbird


question for people who rely on screen readers: what is a good way of notating "hey the next block of text is very unfriendly to screen readers, and only useful if you need my gpg key" succinctly?

#blind #screenreader


"I say that NVDA changed my life, because only we know how difficult it is to get a job, and companies rarely want to pay for software licenses." - Mykael, Brazil.

Read Mykael's full testimonial at: nvaccess.org/post/mykael-makes…

Helping people achieve independence is our passion at NV Access. Thank you Mykael!

#Independence #Empowerment #Accessibility #NVDA #ScreenReader #NVDAsr