Items tagged with: blind
New #introduction!
Hi, I am Stu. I am 24 years old, #blind, #neurodivergent, and live in the Northeast of #England. My online handle is SeedyThreeSixty, and my personal brand is Seediffusion, a mix of Seedy and Rediffusion.
I love to play video games, especially retro games from the 90s and 2000s; my favourite genre is fighting games, especially #StreetFighterII! I basically love anything from the 2000s; it's my favourite decade.
I like many different types of music, including heavy metal, EDM, hip-hop, eurobeat, and house.
I am an all-round nerd who loves technology, maths, science, and virtually all the other things that geeks and dweebs love.
My signature catchphrase is "sweet puberty!"
See profile for links to my stuff. :)
My first thought
Man, we're so spoiled on Mastodon. There are like four #accessible web apps, multiple accessible iOS and Android apps, at least two accessible Windows apps (even if they both have some issues), and a couple of accessible Mac apps. As #screenreader users we can actually choose the one we like, rather than picking the one where the largest number of features actually work with a screen reader.
My Second Thought
Man, this should just be the default state of existence. #Blind folks need to demand more. But I get it, it's so easy to just accept the current state of accessibility as normal, and then be surprised by things that actually work.
What do people think of this response to ‘generate detailed Alt Text’ by Gemini:
Here is some detailed alt text for the image you provided:
Alt Text:
A large, menacing grey cat, reminiscent of a monster, stands on its hind legs with its front paws raised. It has glowing yellow eyes and a wide, toothy grin. The cat is surrounded by a fiery aura as it emits a powerful beam of energy from its mouth. The beam is directed towards the Empire State Building in New York City, which is engulfed in flames. The city skyline is visible in the background, with tall buildings and smoke rising from the destruction. The overall tone of the image is chaotic and destructive, evoking a sense of fear and awe.
Additional Notes:
• You may want to adjust the description of the cat's appearance based on its specific characteristics.
• If the image is part of a larger context (like a movie poster or book cover), you could add a brief reference to that context.
• The alt text should be concise and informative, providing a clear description of the image without being overly detailed.
Let me know if you have any other questions.
#Accessibility #AI #AIsoftheBlind #AltText #Blind #ComputerVision
#nvda #blind #ublockorigin #accessibility #firefox
Hey fellow #BlindMastodon users! I'm looking for recommendations on two types of writing tools for Windows:
1. Accessible text expanders
2. Word prediction software
I use both #JAWS and #NVDA screen readers. I tried Espanso (an open-source text expander), but it didn't work well with NVDA. I also tested Lightkey for word prediction, but it didn't seem accessible.
As an #ActuallyAutistic person, word prediction would really help with my autism-related communication challenges.
Have you found any text expanders or word prediction tools that work well with screen readers? What has your experience been like? I'd love to hear your recommendations and thoughts!
#Accessibility #AssistiveTechnology #TextExpander #WordPrediction #ScreenReader #Windows #AutismAccommodations #autism #blind @mastoblind @main
Do-It-Blind (DIB) Meeting
Learn using BigBlueButton, the trusted open-source web conferencing solution that enables seamless virtual collaboration and online learning experiences.bbb.metalab.at
For me, @joplinapp@mastodon.social is the most important #ToDo and #wiki tool for my efficient daily #work and #organization.
With the new version, there are some improvements for #accessibility, such as the use of #screenreaders.
I hope #blind people can enjoy this #app.
github.com/laurent22/joplin/is…
#OpenSource #a11y
Accessibility: Comply with WCAG 2 · Issue #10795 · laurent22/joplin
Summary [ Progress: Desktop ] | [ Progress: Mobile ] This issue tracks general accessibility issues in Joplin mobile and desktop and its progress on WCAG 2.2 compliance. About the WCAG The WCAG (We...GitHub
It’s no secret that many of us in the blind community have embraced the rapid advances in Artificial Intelligence over the past two years. We've witnessed firsthand how these technologies can be a powerful force for good, especially within our community. AI-generated image descriptions have revolutionized how we navigate the online world, offering a perspective previously unimaginable. This impact is now undeniable, transforming how we interact with the world.
I’ve declared the kingdom of the blind a republic—perhaps prematurely, but only by a small margin. With AI empowering us to perceive the digital world in new ways, we are no longer ruled by limitations, but actively shaping our future. Anthropic’s recent launch of ‘computer use’ marks the first steps into a new phase of AI evolution—one where AI agents begin to act independently on our behalf, initiating a shift in how we interact with technology.
As AI continues to evolve, so too will the Assistive Technology that many of us depend on. I envision a future where this intelligence becomes a true companion, guiding us seamlessly through both digital landscapes and real-world challenges. We may be just two years away from seeing JAWS, NVDA, or SuperNova transform into true Assistive Intelligence 1.0—or perhaps it will take a little longer. If AI has taught us anything, it’s that progress comes both more slowly than we expect and faster than we can possibly imagine.
What follows is my first attempt at describing how a screen reader of today could take the first steps towards becoming an Assistive Intelligence. If anyone wants to build it, I’d love to help if I can. Whatever your view, let me know what you think:
“Proposed AI-Powered Self-Scripting Feature for JAWS Screen Reader
Objective
The suggested feature seeks to integrate advanced AI-driven "computer use" capabilities, like those developed by Claude (Anthropic), into the JAWS screen reader. This functionality would enable JAWS to autonomously create and refine custom scripts in response to real-time user interactions and application environments. The aim is to enhance accessibility and productivity for visually impaired users, especially when navigating non-standard or otherwise inaccessible software interfaces.
Feature Description
The self-scripting capability would empower JAWS to analyse user interactions with applications, identify recurring actions or inaccessible elements, and generate scripts that optimize these processes. By enabling JAWS to perform this autonomously, users gain seamless and personalized access to applications without manual intervention, allowing for an enhanced, efficient experience.
The self-scripting feature will be powered by the following core functions:
1. Real-Time Autonomous Scripting: JAWS would use AI to observe user interactions with applications, especially non-accessible ones, and automatically generate scripts that improve navigation, label untagged elements, and streamline frequent tasks. For example, if a user frequently navigates to a particular form field, JAWS could create a shortcut to this area.
2. Adaptive Behaviour Learning: This feature would allow JAWS to recognize patterns in a user’s interactions, such as repeated actions or commonly accessed elements. JAWS would adapt its behaviour by creating custom macros, enabling faster navigation and interaction with complex workflows.
3. Dynamic Accessibility Adjustment: Leveraging Claude’s approach to visual recognition, JAWS could interpret visual elements (like buttons or icons) and provide instant labelling or feedback. This would be valuable in software with minimal accessibility features, as it enables JAWS to make live adjustments and effectively “teach itself” how to navigate new environments.
4. Community Script Sharing: Self-generated scripts, once verified, could be anonymized and made available to other users via a shared repository. This would foster a collaborative environment, empowering users to contribute to a broader database of accessibility scripts for applications across various industries.
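The adaptive-learning idea in point 2 could be sketched very roughly as frequency analysis over a focus-event log. This is purely illustrative: the function name, the log format, and the repeat threshold are my own assumptions, not any real JAWS scripting API.

```python
from collections import Counter

def suggest_macros(interaction_log, min_repeats=3):
    """Return macro candidates from a log of (app, element) focus events.

    Any element the user focuses at least `min_repeats` times becomes a
    candidate for an auto-generated navigation shortcut.
    """
    counts = Counter(interaction_log)
    return [
        {"app": app, "element": element, "hits": hits}
        for (app, element), hits in counts.most_common()
        if hits >= min_repeats
    ]

# Hypothetical log: the user keeps jumping back to the same form field.
log = [
    ("payroll.exe", "edit:EmployeeID"),
    ("payroll.exe", "edit:EmployeeID"),
    ("payroll.exe", "button:Submit"),
    ("payroll.exe", "edit:EmployeeID"),
]
print(suggest_macros(log))
```

A real implementation would of course hook into the screen reader's event stream and hand the candidates to an AI model for script generation, but even this simple counting step shows where "repeated actions" could be detected.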
Value Proposition
This feature will address key challenges for visually impaired users, including the complexity of navigating inaccessible interfaces and the time-consuming nature of repetitive tasks. The ability for JAWS to generate its own scripts autonomously would mean:
1. Increased Accessibility: Improved interaction with non-accessible software interfaces.
2. Higher Productivity: Reduced need for external support or manual scripting, allowing users to accomplish tasks more independently.
3. Enhanced User Experience: Scripting and macro creation based on personal usage patterns leads to a more intuitive and personalized experience.
Technical Considerations
1. Performance: Processing real-time visual and user interaction data requires substantial computing power. A cloud-based model may be optimal, offloading some processing requirements and ensuring smooth, responsive performance.
2. Safety: Automated scripting must be closely monitored to prevent unintended interactions or conflicts within applications. Integration of safeguard protocols and user settings to enable/disable autonomous scripting will be essential.
3. Privacy: To ensure user data is protected, anonymization protocols and data privacy standards will be implemented. Data collected from user interactions would be handled in compliance with rigorous privacy standards, safeguarding user preferences and behaviour.
Conclusion
Integrating AI-powered self-scripting capabilities into JAWS would represent a significant leap in screen reader technology. By allowing JAWS, when requested, to autonomously learn, adapt, and script in response to user needs, this feature could provide visually impaired users with unprecedented control and flexibility in navigating digital environments, fostering both independence and productivity. The anticipated benefits underscore the feature’s potential to redefine accessible technology, turning the screen reader into Assistive Intelligence.”
About the Author:
Lottie is a passionate advocate for the transformative potential of AI, especially within the blind and visually impaired community. She blends technical insights with a keen awareness of lived experiences, envisioning a future where AI doesn’t just assist but truly empowers. Her thoughtful reflections explore the shift from a "kingdom of the blind" to a republic, where emerging technologies like AI create new opportunities for autonomy and inclusion.
With a balance of optimism and critical realism, Lottie acknowledges the game-changing impact of AI tools like image descriptions while recognizing that more progress is needed. Her vision extends to the idea of "Assistive Intelligence," where screen readers like JAWS evolve into proactive companions, adapting to users' needs in real-time.
Known for turning complex ideas into actionable blueprints, Lottie is not just an observer of technological trends but a catalyst for innovation. Her proposals reflect a desire to elevate independence and productivity for blind users, pushing the boundaries of what's possible in assistive technology. Her insights continue to inspire conversations and shape the future of accessible tech.
I am the Blind AI, relying on AI every day to enrich my life. While my posts may occasionally benefit from AI assistance, the thoughts, perspectives, and final edits are entirely my own. AI is my tool, much like a calculator or spell-check, refining my expression but never replacing my voice.
#Accessibility #AI #AIsoftheBlind #Blind #ComputerVision #Disability #Innovation #JAWS #NVDA #ScreenReader #SuperNova
I suspect this relates to a specific kind of list control, but in certain apps, as I flick left or right through items, VoiceOver just goes silent on certain items. Sometimes, if you flick left to the previous item and then right to get back to the item that wasn’t speaking before, you hear the item, at other times you don’t, and the only way to hear the currently focussed item is to perform a three-finger single tap.
I am seeing this after trying several voices, and making a few other changes that I thought might work around the problem. So far, I haven’t found a fix.
I am seeing this in a list of articles in the Lire RSS Reader, it’s happening a little here in Mona, and it is also reproducible if I go into VoiceOver settings and flick through the items on the What’s New in VoiceOver screen.
It is speech only. If you have a Braille display connected, you can read the item in Braille even when VoiceOver doesn’t speak it.
I’d be curious to know if others are seeing it, because if it is widespread, it could be a reason for those who rely heavily on their iPhones to give the public beta a miss, assuming this first build goes out to public beta.
As it has been a while and I could have done this better last time, here’s my #introduction. I’m a #blind #parent, #braille user, and #musician. I have been blind for coming up on 5 years, learned braille over the last not quite 3 years, and spend most of my time juggling being a stay-at-home dad and staff for #OurBlind, mainly on our Discord and the r/blind subreddit. I also read a lot, mainly fantasy, mostly on my #kindle with the #voiceview #screenreader, though I also read on a #Brailledisplay.
[Proposal]: blind accessibility · Issue #2635 · naev/naev
Suggestion Adding blind accessibility (not colourblind, full blindness) could be a good goal at not only improving the accessibility for blind users, but for all users in general. I've opened an is...GitHub
Amos Miller of Glidance released an amazing new video showing the capability of Glide, the mobility tool that he has created. If you have not seen this new video, which by the way is described, then I highly encourage you to check it out... youtube.com/watch?v=Xq8ofapGxX…
#Blind #Mobility #AmosMiller #Glidance #Glide #Accessible
Glide Demonstration Video #2: Stairs, Roads without Sidewalks, Paths, Overhead Obstacles & More.
In our second scenario video, we demonstrate Glide, the world's first intelligent guide for the blind and low vision community, in a variety of new situation...YouTube
Maybe I Think Differently Because I See Differently
Ever wondered how our senses shape the way we think? It’s a fascinating concept, especially when you consider how different sensory experiences, like those between sighted and blind individuals, create unique cognitive worlds. As someone who navigates the world without sight, I’ve often wondered how this influences my thoughts and perspectives. After all, if we see the world differently, surely we think about it differently, too. Let’s explore how our sensory experiences shape our cognitive processes and perceptions.
Visual Dominance in Sighted People
For sighted people, vision dominates daily information processing. Research suggests that around 80% of their cognitive processing is tied to visual input, shaping not only how they see but also how they think. This visual dominance influences thoughts, understanding, and the focus on details like colours, shapes, and spatial relationships. Sighted individuals are often adept at processing visual patterns and details, like reading body language or interpreting facial expressions. For them, “seeing is believing,” but for those of us who can’t rely on sight, belief comes from a different place altogether.
Cognitive Adaptations in Blind People
Without sight, blind people are forced to turn to other senses to form our mental landscapes. We rely on hearing, touch, and smell to navigate the world. Imagine walking into a room and, instead of scanning visually, you map it out through sound and touch. These senses aren’t just backups; they become finely tuned tools that create vivid mental representations. Over time, we develop exceptional auditory and tactile skills, and our memories sharpen to store all the information we gather through these non-visual cues.
Strengths Born from Different Senses
There are some advantages that come with thinking in a world shaped by sound, touch, and smell. For instance, blind people often excel in auditory tasks, like picking up subtle tones in conversations or identifying people by their footsteps. Memory is another area where blind people shine, as we frequently have to rely on it to navigate and interact. Spatial awareness also becomes second nature, built from a blend of sound, touch, and movement that lets us “see” the world in our own way.
Beyond physical senses, abstract thinking often plays a prominent role in how we understand the world. Without the constant influx of visual distractions, our minds find other ways to stay busy—often diving into deeper, more abstract realms of thought. This can lead to an especially profound connection with concepts that aren’t tied to visual cues.
Neuroplasticity: The Brain’s Remarkable Adaptability
Perhaps one of the most intriguing aspects of sensory difference is neuroplasticity—the brain’s ability to rewire itself. When one sense is absent, the brain often repurposes the areas typically devoted to that sense. For blind individuals, parts of the brain usually reserved for vision get redeployed to boost other senses, supporting our enhanced skills and abilities in non-visual realms.
A Unique Cognitive Landscape
This distinct mix of sensory information creates a rich, varied internal world that offers a unique perspective on life. It’s a powerful reminder that our brains are remarkably adaptable, moulding our thoughts to the sensory input we experience. So yes, maybe I do think differently because I see differently. And maybe, that’s the beauty of it all.
One of my real pain points with the iPhone is composing emails using the built-in Mail app with #voiceover. It’s got slightly better with iOS 18, but the Misspelt Words option in the rotor still does not work. This might not be an issue for some, but it means I cannot send emails on the phone with confidence that they are written accurately and look professional. I know there's the option of writing it in the Notes app, reviewing it and then pasting it into Mail, but come on! That's such a clunky workaround.
Is Mail any better on the iPad at all? I would quite like an in-between device, somewhere between a laptop and a phone, that I could use to quickly write emails, make additions to documents, write quick notes, etc., and in that sense the iPad could be a nice fit. But if iPadOS has the same issues as the iOS version then meh, it might not add much for me. Any thoughts?
#Blind #BlindMastodon #accessibility #a11y #Apple #iPad #iPhone #Productivity
#MozillaFirefox #MicrosoftEdge #incrementalGames #browserBasedGames #gaming #browsers #Windows #Windows11 #WindowsEleven #WindowsElevenPro #tech #technology
Ever find that sitting in front of your unnecessarily complex music rig is hard sometimes? Creative block hits and there's nothing you can do about it?
I certainly find that lately, more often than not.
One device has come into my life and changed a lot of that, however: Ableton Move.
In this world-first video, I take you through making a beat without sight, just using the undocumented screen-reader function within the web-based Move Manager.
It's incredibly freeing to be able to just load a fresh set, be presented with four random sounds and perhaps one of them will inspire you so you just begin doing a thing that you had absolutely no plan to do before you started.
#InspiredBySound - Let's Move! (Ableton Move Accessibility Overview) youtu.be/p8IbinbOhY4
#Accessibility #Ableton #ScreenReader #Blind #Music #Composition
#InspiredBySound - Let's Move! (Ableton Move Accessibility Overview)
Check out the official Ableton Move website: https://www.ableton.com/en/move/Ableton Move is the first piece of hardware I've personally been excited by in y...YouTube
youtube.com/watch?v=J-VVh5ezqG…
Smart Cane Gives the Blind Freedom to Explore
Like this video about WeWalk's smart cane and subscribe here: https://freeth.ink/youtube-subscribe-smartcaneUp next- Reversing Blindness: https://youtu.be/2U...YouTube
Hey folks, finally got my resume into shape and figured I'd try to #getfedihired.
Looking for a tech job doing backend work--I can do full-stack but being #blind means I'm going to struggle more with UI. I'm proficient in #Rust, #Elixir, #Golang, #TypeScript, and a handful of other languages. I have Android experience as well, and have done fun accessibility-related projects like writing an Android screen reader from scratch or adding screen-reader-like functionality to multiple game engines.
I don't mind big tech too much but am looking to switch away to something more co-op/non-profit if possible. It'd be nice to use my tech skills for good, not just for some investor's benefit. Good work/life balance is also a must--I don't live to work, I work to live.
Here's my resume if any of this sounds intriguing. Thanks for reading and/or boosting!
Text Team Onboarding - Campaign for Democracy
Join our Text Team and reach out to supporters and voters using our simple 4-step process.Gavin Newsom's Campaign for Democracy
Is LibreOffice ever going to behave and stop freezing whenever you do NVDA+down arrow? Seriously. I installed it again on my new Windows installation and quite rapidly remembered why I wanted MS Word instead...
I can't even read a document without having to manually do it. Pathetic that it is still like this in 2024.
Even though I know #HTML inside out and use it virtually every day, I'm still reading #HTMLForPeople by @bw because it's a prime example of how a good guide should be written.
1. The book is simple and easy to follow, with relevant points explained well enough even for non-coders to understand.
2. Images are clearly described for #blind readers through the use of #AltText.
3. The website is easy to navigate with a #ScreenReader.
4. There are no annoying pop-ups or ads on the website.
5. The book is entirely free of charge.
HTML for People
HTML isn't only for people working in the tech field. It's for everyone. Learn how to make a website from scratch in this beginner friendly web book.htmlforpeople.com
Found a new way #WebDesigners are blocking #Accessibility.
Because I am legally #Blind, with less than 10 degrees of vision, I cannot visually solve #Captchas
Because I am mostly #Deaf, I cannot solve vocal Captchas.
Therefore, I added block and solve Captchas extensions to my browser.
Three major sites I could no longer access at all, as they had added Captchas that bypassed the blockers!
FaceBook, Amazon, and Submittable do not allow Captcha blockers.
So, I had not been able to access those sites at all for a while.
When I found Submittable blocking me today, for the first time, I decided to turn off the Captcha blockers, as they obviously aren't working.
Instantly, Amazon and Submittable are working.
I haven't tried FaceBook.
#CaptchaBlockers are an #Accessibility need for blind, deaf, #DeafBlind, and people with multiple other disabilities. It is disability discrimination for sites to block accessibility helpers.
I shouldn't need a sighted and hearing person, likely a stranger, to be given my login information, username and password, to log me into every site I need to go to every day!
#Tech people, please respect people with disabilities. Disable and remove Captchas, and stop blocking the extensions that block and solve Captchas.
I wasted a lot of hours trying to figure out why suddenly I could not login to sites.
Hours because some tech people decided to break and block accessibility for a DeafBlind #Author!