

It’s no secret that many of us in the blind community have embraced the rapid advances in Artificial Intelligence over the past two years. We've witnessed firsthand how these technologies can be a powerful force for good, especially within our community. AI-generated image descriptions have revolutionized how we navigate the online world, offering a perspective previously unimaginable. This impact is now undeniable, transforming how we interact with the world.

I’ve declared the kingdom of the blind a republic—perhaps prematurely, but only by a small margin. With AI empowering us to perceive the digital world in new ways, we are no longer ruled by limitations, but actively shaping our future. Anthropic’s recent launch of ‘computer use’ marks the first steps into a new phase of AI evolution—one where AI agents begin to act independently on our behalf, initiating a shift in how we interact with technology.

As AI continues to evolve, so too will the Assistive Technology that many of us depend on. I envision a future where this intelligence becomes a true companion, guiding us seamlessly through both digital landscapes and real-world challenges. We may be just two years away from seeing JAWS, NVDA, or SuperNova transform into true Assistive Intelligence 1.0—or perhaps it will take a little longer. If AI has taught us anything, it’s that progress comes both more slowly than we expect and faster than we can possibly imagine.

What follows is my first attempt at describing how a screen reader of today could take the first steps towards becoming an Assistive Intelligence. If anyone wants to build it, I’d love to help if I can. Either way, let me know what you think:

“Proposed AI-Powered Self-Scripting Feature for JAWS Screen Reader

Objective
The suggested feature seeks to integrate advanced AI-driven "computer use" capabilities, like those recently demonstrated by Anthropic’s Claude, into the JAWS screen reader. This functionality would enable JAWS to autonomously create and refine custom scripts in response to real-time user interactions and application environments. The aim is to enhance accessibility and productivity for visually impaired users, especially when navigating non-standard or otherwise inaccessible software interfaces.

Feature Description
The self-scripting capability would empower JAWS to analyse user interactions with applications, identify recurring actions or inaccessible elements, and generate scripts that optimize these processes. By enabling JAWS to perform this autonomously, users gain seamless and personalized access to applications without manual intervention, allowing for an enhanced, efficient experience.

The self-scripting feature will be powered by the following core functions:

1. Real-Time Autonomous Scripting: JAWS would use AI to observe user interactions with applications, especially non-accessible ones, and automatically generate scripts that improve navigation, label untagged elements, and streamline frequent tasks. For example, if a user frequently navigates to a particular form field, JAWS could create a shortcut to this area.

2. Adaptive Behaviour Learning: This feature would allow JAWS to recognize patterns in a user’s interactions, such as repeated actions or commonly accessed elements. JAWS would adapt its behaviour by creating custom macros, enabling faster navigation and interaction with complex workflows.

3. Dynamic Accessibility Adjustment: Leveraging Claude’s approach to visual recognition, JAWS could interpret visual elements (like buttons or icons) and provide instant labelling or feedback. This would be valuable in software with minimal accessibility features, as it enables JAWS to make live adjustments and effectively “teach itself” how to navigate new environments.

4. Community Script Sharing: Self-generated scripts, once verified, could be anonymized and made available to other users via a shared repository. This would foster a collaborative environment, empowering users to contribute to a broader database of accessibility scripts for applications across various industries.
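To make point 2 a little more concrete, the pattern recognition behind adaptive behaviour learning could start out as something as simple as frequency counting over a log of user actions. The sketch below is a minimal illustration in Python, not any actual JAWS mechanism; the action names, the `suggest_macro` helper, and the repeat threshold are all assumptions made for the example.

```python
from collections import Counter
from typing import List, Optional, Tuple

def suggest_macro(actions: List[str], seq_len: int = 4,
                  min_repeats: int = 3) -> Optional[Tuple[str, ...]]:
    """Return the most frequent run of `seq_len` consecutive actions
    if it recurs at least `min_repeats` times, else None.
    An assistant could then offer to bind that run to a shortcut."""
    if len(actions) < seq_len:
        return None
    # Count every sliding window of seq_len consecutive actions.
    ngrams = Counter(
        tuple(actions[i:i + seq_len])
        for i in range(len(actions) - seq_len + 1)
    )
    seq, count = ngrams.most_common(1)[0]
    return seq if count >= min_repeats else None

# Example: a user repeatedly tabs to the same form field and activates it.
log = ["open_form", "tab", "tab", "activate",
       "open_form", "tab", "tab", "activate",
       "open_form", "tab", "tab", "activate"]
print(suggest_macro(log))
```

A real implementation would of course need to weigh recency, handle near-misses, and ask the user before creating anything, but the core idea of "spot the repetition, propose a shortcut" is this small.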

Value Proposition
This feature will address key challenges for visually impaired users, including the complexity of navigating inaccessible interfaces and the time-consuming nature of repetitive tasks. The ability for JAWS to generate its own scripts autonomously would mean:
1. Increased Accessibility: Improved interaction with non-accessible software interfaces.
2. Higher Productivity: Reduced need for external support or manual scripting, allowing users to accomplish tasks more independently.
3. Enhanced User Experience: Scripting and macro creation based on personal usage patterns leads to a more intuitive and personalized experience.

Technical Considerations
1. Performance: Processing real-time visual and user interaction data requires substantial computing power. A cloud-based model may be optimal, offloading some processing requirements and ensuring smooth, responsive performance.
2. Safety: Automated scripting must be closely monitored to prevent unintended interactions or conflicts within applications. Integration of safeguard protocols and user settings to enable/disable autonomous scripting will be essential.
3. Privacy: To ensure user data is protected, anonymization protocols and data privacy standards will be implemented. Data collected from user interactions would be handled in compliance with rigorous privacy standards, safeguarding user preferences and behaviour.
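As a sketch of what the anonymization step in points 3 (privacy) and community sharing might involve, the Python below strips user-identifying fields from a script record and replaces the machine identifier with a salted one-way hash before upload. The field names, the `anonymize_script_record` function, and the salting scheme are illustrative assumptions, not any actual Freedom Scientific protocol.

```python
import hashlib
import json

# Fields assumed (for this example) to identify the contributor.
PRIVATE_FIELDS = {"user_name", "machine_id", "file_paths"}

def anonymize_script_record(record: dict, salt: str) -> dict:
    """Drop user-identifying fields and replace the machine ID with a
    salted SHA-256 digest, so contributions stay consistent per machine
    without being linkable back to a person."""
    clean = {k: v for k, v in record.items() if k not in PRIVATE_FIELDS}
    if "machine_id" in record:
        digest = hashlib.sha256(
            (salt + record["machine_id"]).encode("utf-8")
        ).hexdigest()
        clean["contributor_token"] = digest[:16]  # stable, non-reversible ID
    return clean

record = {"app": "LegacyPayroll", "script": "label_buttons.jss",
          "user_name": "lottie", "machine_id": "PC-1234"}
print(json.dumps(anonymize_script_record(record, salt="example-salt"),
                 sort_keys=True))
```

In practice the salt would be a server-side secret, and the repository would also need review and signing of shared scripts before distribution, since scripts execute with the user's privileges.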

Conclusion
Integrating AI-powered self-scripting capabilities into JAWS would represent a significant leap in screen reader technology. By allowing JAWS to, when requested, autonomously learn, adapt, and script in response to user needs, this feature could provide visually impaired users with unprecedented control and flexibility in navigating digital environments, fostering both independence and productivity. The anticipated benefits underscore the feature’s potential to redefine accessible technology, turning the screen reader into an Assistive Intelligence.”

About the Author:

Lottie is a passionate advocate for the transformative potential of AI, especially within the blind and visually impaired community. She blends technical insights with a keen awareness of lived experiences, envisioning a future where AI doesn’t just assist but truly empowers. Her thoughtful reflections explore the shift from a "kingdom of the blind" to a republic, where emerging technologies like AI create new opportunities for autonomy and inclusion.

With a balance of optimism and critical realism, Lottie acknowledges the game-changing impact of AI tools like image descriptions while recognizing that more progress is needed. Her vision extends to the idea of "Assistive Intelligence," where screen readers like JAWS evolve into proactive companions, adapting to users' needs in real-time.

Known for turning complex ideas into actionable blueprints, Lottie is not just an observer of technological trends but a catalyst for innovation. Her proposals reflect a desire to elevate independence and productivity for blind users, pushing the boundaries of what's possible in assistive technology. Her insights continue to inspire conversations and shape the future of accessible tech.

I am the Blind AI, relying on AI every day to enrich my life. While my posts may occasionally benefit from AI assistance, the thoughts, perspectives, and final edits are entirely my own. AI is my tool, much like a calculator or spell-check, refining my expression but never replacing my voice.

#Accessibility #AI #AIsoftheBlind #Blind #ComputerVision #Disability #Innovation #JAWS #NVDA #ScreenReader #SuperNova

in reply to aaron

@fireborn There’s only one way to reject the future: to not travel there!
in reply to Charlotte Joanne

Just because a technology exists, does not mean I have to use it.
in reply to aaron

@fireborn Of course you don’t. And you also don’t have the right to say how I benefit from technology that does exist. Technology that is currently legal.
in reply to Charlotte Joanne

I never said any such thing; how you choose to benefit or not from current and future technology is your choice. All I said is I don’t want this AI first future.
in reply to Charlotte Joanne

A few points:
1, you can't request autonomy, or it's not autonomous. This is either something that happens automatically or it's directed. Autonomous learning is a poor buzzword for a potentially useful feature which should be directed by an end user, who is the one with the need.
2, the most irritating thing about adaptive software is that it's inconsistent. I know exactly where the icons on my phone's home screen are. The worst feature to me would be 'Here's what you use most' as it would subvert my expectations in an attempt to enhance them.
3, you seem to be proposing that JAWS watches how people use inaccessible software to learn to make it accessible. The obvious issue here being that the blind people won't be using it until it is accessible.
And 4, allowing an LLM to write and execute unsupervised code would alarm me to such a degree that I'd not feel confident giving its host process access to some of my data or access to the Internet as my proxy today.
in reply to Sean Randall

@cachondo Those are really good points! And all mistakes I’ve made trying to turn a high-level concept into something that could actually be done. Given that I have absolutely no idea how to do it, that’s probably not a problem at the moment. Thanks for reading and thanks for your feedback. I’ll try and incorporate it into the revolution too.😎
in reply to Charlotte Joanne

I predict we'll next have an AI-powered version of the old steps or macro recorders, where either someone sighted does something in an app we can't and the system makes the screen reader do it, or we describe in words what we want and a subset of image processing and mouse/keyboard manipulation gets those things done. How those steps are stored so they are robust, repeatable and error free - i.e. turned into programs - I'm less sure on.
in reply to Sean Randall

@cachondo And they’ve had the API for two days! Listen to this week’s Today in AI podcast; they were doing exactly that.
in reply to Charlotte Joanne

I'm not surprised.
The concern for the end user at the moment is that the error rate is really high - just look at the confabulations in image descriptions for examples - and how easily failure leads to retries without confidence that there are times things can't be done.
An AI could end up spending hundreds of computing cycles trying to find a way around an inoperative button without once realising there was a modal pop-up in the way, for example.
You end up getting into circles really easily if you converse with these models and people will worry about that.
in reply to Sean Randall

@cachondo They had it ordering from Starbucks after two days! Imagine what it’ll be like after two years.
in reply to Charlotte Joanne

hehe it's a very exciting field.
We've gone from Image may contain: dog, to
The image shows a yellow Labrador Retriever running happily through a grassy pathway with trees and shrubs on either side. The dog appears to be in mid-stride with its mouth open, tongue out, and ears flopping, suggesting motion and enthusiasm.
In just a few short years.
in reply to Sean Randall

@cachondo I know, and I checked a lot of images. They are really quite accurate as well. It has described some of the memes really accurately! There truly has never been a better time to be blind! It’s still shit though.
in reply to Charlotte Joanne

being blind, you mean?
I don't think I'd have chosen it, but I'd rather be here than not.
I can't deny that my blindness has been the driving force in my career, rounding my education with speedy access to literature, and calling out the utter ridiculousness of things like racial discrimination early on (skin colour was a learned thing for me, not something stereotyped into my viewpoints).
For all that it has undeniably put limits on me, I don't think it has done so to any greater extent than my financial circumstances, the limits of my own intelligence or physical capabilities.
in reply to Sean Randall

@cachondo You sound like you’ve always been blind. I’m one of the others; I had blindness thrust upon me! They’re two completely different experiences. Both different, both valid.
in reply to Charlotte Joanne

ah yes, I have.
I can see how much of a blow it would be to have it imposed after you've known sight.
Many of the people I have worked with were fighting to adapt through that. It is sad and, one day, I hope we will be able to stop it happening at all.