Found a new way #WebDesigners are blocking #Accessibility.
Because I am legally #Blind, with less than 10 degrees of vision, I cannot visually solve #Captchas.
Because I am mostly #Deaf, I cannot solve vocal Captchas.
Therefore, I added block and solve Captchas extensions to my browser.
There were three major sites I could no longer access, as they had added Captchas that bypassed the blockers!
FaceBook, Amazon, and Submittable do not allow Captcha blockers.
So, I had not been able to access those sites at all for a while.
When I found Submittable blocking me today, for the first time, I decided to turn off the Captcha blockers, as they obviously aren't working.
Instantly, Amazon and Submittable are working.
I haven't tried FaceBook.
#CaptchaBlockers are an #Accessibility need for blind, deaf, #DeafBlind, and multiple other disabilities. It is Disability Discrimination for sites to block Accessibility helpers.
I shouldn't need a sighted and hearing person, likely a stranger, to be given my login information, username and password, to log me into every site I need to go to every day!
#Tech people, please respect people with disabilities. Disable and remove Captchas. Disable and remove the scripts that block Captcha-blocker extensions.
I wasted a lot of hours trying to figure out why suddenly I could not log in to sites.
Hours because some tech people decided to break and block accessibility for a DeafBlind #Author!
I think #Anki holds a great deal of potential for #blind learners of #languages and other subjects:
The desktop app uses Qt 6, so isn't entirely unusable. It's also fully open source.
The iOS app is extremely usable with VoiceOver, albeit relatively expensive for a mobile app at £24.99, and not open source.
The web interface is usable, but would currently cause people to think more about how to use it than the actual subject they were studying. Still, with some user scripting, it could be workable.
And finally, all Anki functionality is available via their Python library, which could be used in a command line app or more #accessible frontend.
Of course, the issue is always the time needed to take things from where they are now to where they need to be. And I suspect a significant challenge would be having screen readers speak/braille things in the correct language as hopefully declared by each flashcard.
Reading WITHOUT Sight: Challenging the Ableist Assumptions of Non-Visual Literacy
In today's world, where accessibility is supposedly ever-expanding, comments on how blind people read (or rather, whether we "really" read) reveal a significant amount of latent ableism. When someone remarks, "You're not really reading because you have to listen to it," they are unwittingly touching on deep-seated biases that marginalize blind people and our experience. For me, as a blind person, these comments feel aggressive, like a slur that undermines not only my intellect but my very existence within a literate society. The underlying suggestion that my method of consuming literature is somehow less legitimate than traditional reading reflects a lack of understanding and a failure to appreciate the richness of alternative literacy.
At its core, this statement implies that visual reading is the only valid form of reading, an attitude deeply rooted in ableist assumptions. Just as the sighted world learns and adapts to new ways of accessing information, blind people, too, use technology to bridge gaps that were once insurmountable. By suggesting that listening to an audiobook or using a screen reader is inferior to reading with one's eyes, the speaker perpetuates a narrow view of literacy that excludes anyone who does not fit their narrow definition of a reader.
The Emotional Impact of Dismissive Comments
Hearing such remarks can be hurtful. When someone tells me I'm not "really reading," they trivialize the effort, love, and passion I pour into every book. Reading, in any form, is more than just a mechanical process; it's an engagement with ideas, narratives, and emotions. Denying my capacity to "really" read is akin to erasing my agency in choosing to explore literature. It dismisses my experience and can feel like a personal attack, minimizing my intelligence and curiosity.
Moreover, these comments strip away the nuances of my identity and life experience as a blind person. They ignore the reality that many of us navigate systems not designed with us in mind, yet we adapt with resilience and creativity. Listening to a book, for me, is as much an engagement with its content as sighted reading is for others. This medium allows me to dive into narratives, to imagine worlds, and to connect with characters just as vividly as if I were reading visually. Such a remark does not just invalidate my experience, but it also points to a societal failure to recognize and celebrate the diverse ways people interact with literature.
Understanding the Roots of Ableism
Ableism, at its core, stems from a belief that certain abilities, like sight, are inherently superior. This mindset manifests in the way sighted people sometimes view adaptations like screen readers or braille as substitutes, rather than as equally valid methods of accessing information. This thought pattern diminishes the lived experiences of blind individuals and subtly implies that we're only half-participating in the world of literature. The comment reflects an ideology that upholds one mode of experiencing the world as ideal, while relegating others to second-class status.
Furthermore, literacy is a concept that should not be defined by sensory modality. Whether through braille, audio, or screen readers, blind readers engage in the same cognitive processes of understanding and analyzing text. These methods are not merely compensatory but rather alternate pathways that lead to the same destination.
Responding Constructively
Addressing this kind of ableism requires a blend of assertiveness and education. In responding to these comments, I could say something like, "When you suggest that I'm not really reading, it feels as if you're diminishing my engagement with the text. For me, listening to a book offers the same intellectual and emotional journey as sighted reading does for you. It's not about the method; it's about the experience of connecting with the material. I'd appreciate it if we could acknowledge that there are many valid ways to be a reader."
By framing the response in this way, I affirm my own experience while gently inviting the person to reconsider their assumptions. Another approach could be to highlight the diversity of literacy methods available today: "There's a wide range of ways people can read now, whether through audio, braille, or text-to-speech technology. These methods open up the world of literature to more people and should be celebrated rather than diminished."
My hope is that, in responding to these comments, I can foster a moment of reflection for others. Reading is about engaging with ideas and stories, not about the medium through which we access them. Ableist remarks about non-visual reading, though sometimes spoken thoughtlessly, present an opportunity to open minds and broaden perspectives. By sharing my experience, I contribute to a more inclusive understanding of literacy and help to dismantle the harmful stereotypes that still persist.
Conclusion
Reading is not an act confined to the eyes; it is an intellectual and emotional endeavour that transcends sensory modality. For many blind people, it is the ultimate expression of our love for stories, our curiosity, and our intellect. When someone diminishes my experience as "not really reading," they underscore a fundamental misunderstanding of what it means to be a reader. As we continue to expand our understanding of accessibility, it is crucial to challenge and reframe such biases. Only by doing so can we begin to recognize and respect the many ways in which people interact with the written word, enriching our collective experience of literature in all its forms.
#Ableism #Accessibility #Audible #Blind #Braille #Disability #Equality #Inclusion #Kindle
Huge props to the #NVDAsr team for recognizing this and taking the steps to make #Braille a priority. Will be filling out their survey and hope other #Windows #ScreenReader users will do the same.
#Blind #LowVision #BlindMasto #BlindMastodon #BlindFedi @mastoblind
Just saw a presentation of the "MetaBraille" project. It's fantastic! They used open source tools and built their own braille keyboard, which in principle any blind person can build for themselves.
Even the controls of the 3D printer.
You can then pair the 3D-printed keyboard with your phone. Great!!
*AI-generated podcasts aren't here to replace human creativity*; they're enhancing accessibility, especially for people like me who learn best by listening. As a blind student, tools like NotebookLM turning PDFs into podcasts help me absorb material more effectively. It's about *learning in a way that works for you*, not replacing the personal touch of traditional podcasts.
#Accessibility #AI #AIforAccessibility #Podcasts #Blind #AIAccessibility #LearningTools #NotebookLM
this is, like a lot of my posts where I'm not replying to threads, concerning #blind users of #linux. With that out of the way, let's get into it
So, I had technically been able to do this for some time now, around three days or so, but only now did I get enough courage to actually do it. This involved a lot of courage indeed, because this is my primary and only system, and the experiment in question could have cost me speech on the computer, which is a pretty big deal.
So yeah, what did I do that's so dangerous? I stopped the pulseaudio server entirely, and by that I mean the pulse-pipewire compatibility layer, pipewire-pulse. Foolish as it might sound from the outside, I tried this to see if I could still get speech. Not so foolish after all, and also a spectacular defiance of expectations, because I'm writing this now: I very much do still have speech. As an unintended consequence, my system feels snappier too. Incredible, right?
As many of you are pretty technical, since using linux as a VI person kinda pushes you in that direction, it should come as no surprise that speech dispatcher uses pulseaudio, aka pipewire-pulse, to play sound; usually when that crashes, you get no speech and such. So then, how is this possible? No, I'm not using alsa, or any of the other audio backends in there. The rest of this huge post is devoted to that question, as well as some background on why this matters and how things were before. Note: this will probably only make a big positive difference for a particular group of people, those who care about the latency of their audio systems: musicians or aspiring musicians, people working with specialised audio software, people using complicated hardware setups with lots of nodes which have to be balanced in the right way, or just because you can, etc., those kinds of people. For the rest of you, this may be a novelty; you may get some limited use out of it due to decreased cpu load and snappier-feeling desktop interfaces, but mostly it's a nice read and some hyped enthusiasm, I suppose.
It all started when I was talking to a few linux musicians, if I recall correctly, including someone who might have been involved in a DAW's development. I was talking about accessibility, and eventually someone told me that speech dispatcher requests a very low latency, but then xruns quite a number of times, making pipewire try to compensate a lot. I could be misremembering, but they also said that it throws off latency-measuring software, perhaps the tools inside their DAW as well, because the way speech dispatcher does things makes pw increase the graph latency.
So then, what followed were a few preliminary discussions in the gnome accessibility matrix room about how audio backends work in spd. There, I found out that the plugins mostly use a push model, meaning they push samples to the audio server when those become available, in variable sizes as well, after which the server can make the proper arrangements for some kind of sane playback. Incidentally or not, this is how a lot of apps work with regular pulseaudio, usually using, directly or indirectly, a library called libpulse_simple, which lets one basically treat the audio device like some kind of file and have things done that way; the library can also sometimes introduce buffering of its own before sending to pulse, etc. Of note here is that pulse still has callbacks, but a lot of apps don't use that way of doing things.
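To make the push model above concrete, here is a tiny toy sketch in Python. This is purely illustrative, my own model rather than anything from libpulse or speech dispatcher: the app pushes variable-sized chunks whenever they become available, while the device side drains fixed-size blocks on its own schedule, padding with silence when the buffer runs dry.

```python
from collections import deque

class ToyPushServer:
    """Toy model of a push-style audio server (illustrative only)."""
    def __init__(self):
        self.buffer = deque()

    def push(self, samples):
        # App-driven: chunks arrive in variable sizes, at variable times.
        self.buffer.extend(samples)

    def pull(self, n):
        # Device-driven: the soundcard drains fixed-size blocks; when the
        # buffer runs dry, the gap is padded with silence (zeros).
        return [self.buffer.popleft() if self.buffer else 0 for _ in range(n)]

server = ToyPushServer()
server.push([1, 2, 3])   # the app pushes three samples...
server.push([4, 5])      # ...then two more, whenever they are ready
print(server.pull(4))    # fixed-size device read: [1, 2, 3, 4]
print(server.pull(4))    # one sample left, rest is silence: [5, 0, 0, 0]
```

The point is the decoupling: the app never has to answer on the server's schedule, which is exactly what makes this model comfortable for apps and awkward for a low-latency graph.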
Back to the problem though: this was fine for pulse, more or less anyway, because pulse didn't model the media graph in any way, there was simply no such concept there; you only had apps connected to devices via streams, so there was no way to get apps to synchronise their rates to something which could more or less be sent directly to the soundcard. So, when apps invariably started to diverge in the rate at which they pushed samples to pulse, as far as I understand, pulse took the latency of the slowest stream and added it to everyone else to attempt to synchronise, because, after all, pulse still had to mix and send everyone's frames to the soundcard, and because there either was no polling model or no one wanted to implement it, that was the best choice to make in such environments.
Enter low-latency software, #jack and #pipewire. Here, minimising latency is the most important thing, so samples have to be sent to the soundcard as soon as possible. This means that the strategy I outlined above wouldn't work here, which brings us neatly to the concept of an audio graph, which is basically all the sound sources that can play, or capture, in your system, as well as exactly where sound is played to and captured from. Because of the low-latency factor, however, this graph has to be polled all at once, and return samples similarly fast, in what the graph driver calls a cycle. The amount of audio apps can buffer before they're called again, aka the graph cycle duration, is user-adjustable: in jack via the buffer size, in pipewire via the quantum setting. But then, what happens to apps which don't manage to answer as fast as they get called by the server? Simple, even simpler than the answer of pulse, alsa, etc. to the problem, with their various heuristics to make sound smooth and insert silence in the right places. The answer is: absolutely nothing at all. If an app didn't finish returning its allotted buffer of samples, not one more or less than that, the app is considered to be xrunning, either underrunning or overrunning based on the size of the buffer it managed to fill, and its audio, cutting off abruptly with perhaps a few bits of uninitialised memory in the mix, is sent to the soundcard at a fixed time, with everyone else's. This is why you might hear spd crackle weirdly in vms, and why you sometimes hear other normal programs crackle for no good reason whatsoever. And you know, this is additive, because the crackling spreads through the entire graph: those samples play with distortion on the same soundcard as everything else, and everyone else's samples get kinda corrupted by that too. But obviously, if it can get worse, it will, unfortunately, for those who didn't just down-arrow past this post.
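As a rough back-of-the-envelope illustration of the numbers involved (my own sketch, not PipeWire code): the quantum and the sample rate together fix how long each graph cycle lasts, and a node that fails to return exactly its quantum of frames within that window has xrun.

```python
def cycle_duration_ms(quantum: int, sample_rate: int) -> float:
    """How long one graph cycle lasts: the time the driver gives every
    node to fill its buffer before samples go to the soundcard."""
    return quantum / sample_rate * 1000.0

def has_xrun(frames_returned: int, quantum: int) -> bool:
    """A node xruns when it returns anything but exactly one quantum."""
    return frames_returned != quantum

# A common low-latency setting: 256 frames at 48 kHz is about 5.3 ms
# per cycle; a larger 1024-frame quantum gives about 21.3 ms.
print(round(cycle_duration_ms(256, 48_000), 1))   # 5.3
print(round(cycle_duration_ms(1024, 48_000), 1))  # 21.3
print(has_xrun(200, 256))                         # True: an underrun
```

This is why a smaller quantum feels snappier but punishes slow nodes harder: the deadline for every node in the graph shrinks with it.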
There are a few mechanisms for reducing the perceived crackling from apps which xrun a lot. For example, apps with very low sample rates, like 16 khz (yes, phone call quality in 2024; speaking of speech dispatcher), can get resampled internally by the server, which may improve latency at the cost of the degraded quality you're gonna get anyway at such a sample rate, though the cpu also has to work more and the whole graph may again be delayed a bit. Or, if an app xruns a lot, it can either be disconnected forcefully by pipewire, or alternatively the graph cycle time is raised at runtime, by the user or by a session manager acting on behalf of the user, to attempt to compensate; it'll never go as far as regular pulse, but enough to throw off latency-measuring and audio calibration software.
So, back to speech dispatcher. After hearing this stuff, as well as piecing together the above explanation from various sources, I talked with the main speech dispatcher maintainer in the gnome a11y room, and came to the conclusion that, first, the xrunning thing is either a pipewire issue or a bug in spd audio plugins which should be fixed, but more importantly, second, that I must try to make a pipewire audio backend for spd, because pw is a very low-latency sound server, but also because it's the newest one and so on.
After about two weeks of churn and fighting memory corruption issues, because C really is that unsafe and I appreciate rust all the more now, and also I hate autotools with a passion, my pr is now basically on the happy path, in a state where I could write this message with it as it is. Even on my ancient system, I can feel the snappiness. This really does make a difference, albeit a small one, so I can't wait till this gets to people.
If you get a package update for speech dispatcher, and if you're on arch you will sooner or later, make sure you check the changes, release notes, or however your package repositories call that. If you see something saying that pipewire support was added, I would appreciate it if as many of you as possible would test it out, especially where low-latency audio is required; see if the crackling you mysteriously experienced from time to time with the pulse implementation goes away with this one. If there are issues, feel free to open them against speech dispatcher, mention them here or in any other matrix rooms where both me and you are, dm me on matrix or here, etc. For the many adventurers around here, I recommend you test it early by manually compiling the pull request, I think it's the last one, the one marked as draft: set the audio output method to pipewire in speechd.conf, replace your system default with the one you just built by running make install if you feel even more adventurous, and have fun!
I tested this with orca, as well as other applications using speech dispatcher, for example kodi and retroarch; everything works well in my experience. If you're the debugging sort of person, and upon running your newly built speechd with PIPEWIRE_DEBUG=3 you get some "client missed 1 wakeups" errors, the pipewire devs tell me that's because of kernel settings, especially scheduler-related ones, so if y'all want those to go away, you need to install a kernel configured for low-latency audio, for example Liquorix, but there are others as well. I would suggest you ignore those and go about your day, especially since you don't see this unless you amp up the debugging of pipewire a lot, and even then it might still just be buggy drivers on my very old hardware.
In closing, I'd like to thank everyone in the gnome accessibility room, but in particular the spd maintainer; he helped me a lot when I was trying to debug issues related to how spd works with its audio backends, what the fine print of the implicit contracts is, etc. Also, C is incredibly hard, especially at that scale. I can say with confidence that this is the biggest piece of C code I have ever written, and I would definitely not want to repeat the experience for a while; none of the issues I encountered during these roughly two weeks of development and troubleshooting would have happened in rust, or even go, or, yeah, you get the idea. I would definitely have written the thing in rust if I knew enough autotools to hack it together, but even so I knew that would have been much harder to merge, so I didn't even think of it. To that end though, lots of thanks to the main pipewire developer; he helped me when gdb and other software got me nowhere near solving those segfaults, or troubleshooting barely intelligible speech with lots of sound corruption and other artefacts due to reading invalid memory, etc.
All in all, this has been a valuable experience for me. It has also been a wonderful time trying to contribute to one of the pillars of accessibility on the linux desktop, to what's usually considered a core component. To this day, I still have to internalise the fact that I did it in the end, that it's actually happening for real. But as far as I'm concerned, we have realtime speech on linux now, something I don't think nvda with wasapi is even close to approaching, though that's an opinion I don't know how to back up with any benchmarks. If you know ways, I'm open to your ideas; better still, actual benchmark results comparing the pulse and pipewire backends would be even nicer, but I have no idea how to even begin with that.
Either way, I hope everyone, here or otherwise, has an awesome time testing this stuff out, because through all the pain of wrangling C, or my skill issues, into shape, I certainly had a lot of fun and thrills developing it, and now I'm passing it on to you. May those bugs be squished flat!
A complex and challenging case in accommodations, accessibility and creating healthy, respectful and effective inclusive workplaces.
For all of us interested and engaged in accessibility and disability inclusion rights, it will be important to follow this one through the legal process.
Hey #blind #students of #Mastodon.
So I'm running into some accessibility issues in my algebra course involving graphing. I do not have the support of state services for the blind, but I am connected with disability services on campus. Otherwise, I'm going at all this on my own and for the most part handling things well.
However, I'm not exactly sure how best to overcome things where advanced math and graphing come into play.
My tools currently include a standard, non-graphing calculator, a laptop running NVDA as a screen reader, an 80-cell braille display on loan from disability services, and a whole lot of tenacity. So far, I've maintained a consistent A in my course, but with the graphing stuff, I'm concerned that might change.
So, what are your tools and techniques for dealing with this and more advanced calculation work? What add-ons may be of use, what tools should I be looking at, and what questions should I be asking?
Thank you so much in advance to anyone who offers any advice.
I'm telling y'all, if sighted people had to deal with the kind of senseless bullcrap that blind people *have* to deal with, daily, there would be very widespread protests. Public ones. Miles of people.
So, imagine this. You go to Google. Or Kagi. Whatever you like. The text box has "search" as a placeholder text. When you type into the field, the "search" is supposed to be replaced with what you're typing. So you go to search one day, and find that when you search for "cat food", you get "csatf aroochd". No matter what you do, that "search" is there, messing up all your searches. If you backspace everything out and try again, you get the same thing. No matter what. Now, you can click outside of the box, and type and press Enter and it works, but you don't see what's in the search box. You can paste your search in from notepad, but do you *really* want to do that? And this has been a problem for weeks now. You start to wonder if anyone at Google, uses Google.
And while this isn't entirely comparable with what's happening with the iOS Facebook app right now, it's the closest I can get. Truth be told, I don't post on Facebook. I haven't posted on there in like a year or two. So I, personally, don't have to deal with this. But for some people, Facebook is their lifeline. And no, that's not some stupid cliche like it sometimes is when overused by marketing teams. No. For some people, Facebook is how they communicate with their communities. And you had better not come in the replies all "well they should use Mastodon." No. Humble yourself. So this issue is a huge problem for them. And when you have elderly people involved who just want to talk to the people they care about, who know how to do it one way and stick to that because technology is so vast that one can easily get lost? Things need to change. People need to understand these things. And while bugs suck and new frameworks are cool and Facebook loves to move fast and break things, if you want to do that, you'd better have a testing team that includes blind people, Braille users, dictation users, as wide a net as you can cast. And you know what? Maybe that'd cut down on that damn blind employment problem too. Fucking listen damn it! And I could go on and post on Facebook about this, using my computer because I'm privileged enough to have one and know how to work around accessibility issues, while I could even grab my Android phone, or dictate into my iPhone, but I'm not the general blind person. And this isn't even just about Facebook, or just about this one situation. Developers of anything of any size should take this kind of thing to heart. And I know people are tired of me on here dampening the party mood with all this anti-fun accessibility talk since like 2017, but at some point, we need to take things seriously. Because devs' "fun", building, developing, trying new frameworks and new updates sparkle sparkle, affects other people.
Well y'all, I broke something. I was using Emacs, and was committing a change to a repo like always, when my audio started to stutter a bit. I'd noticed that Emacspeak sounds were a bit sluggish, but I thought that was SoX being SoX or Pipewire messing with something as usual. So I unplugged my Dell docking station, plugged it in again, and things returned to normal until the next time I committed. This time, audio stopped, and, well, never came back. I plugged the dock into my iPhone, and it worked. Windows, worked. Linux though? Nope. Not sure what's going on, but goodness I'm tired of technology. So tired of everything changing and breaking. Google Drive changed, so I'm having to redo the whole course on Google Docs/Drive. And this was not something I was expecting and definitely not something I needed today. So I'm gonna have to look for a way to reset the dock or something.
#linux #foss #accessibility #blind
question for people who rely on screen readers: what is a good way of notating "hey the next block of text is very unfriendly to screen readers, and only useful if you need my gpg key" succinctly?
More questions about #Barrierefreiheit (accessibility) and #Screenreader
I have a manuscript that consists mostly of dialogue, like a radio play or a stage play. An audio-drama production with real actors is probably unrealistic.
In the visual version, the speakers are marked with emojis; I can easily replace those with names, as alt text or in a separate version.
How would you want to read something like that?
…
#blind #sehbehindert
Do you use a #Screenreader or something similar?
Am I right in assuming that a simple HTML page (without JavaScript or a complicated layout) is fundamentally more accessible than a PDF file with the same content? What about ePub?
Does your screen reader evaluate PDF tagging and alternative texts, or does it only read out what is visible on the surface?
#blind #sehbehindert #Barrierefreiheit #accessibility #TaggedPDF
Okay y'all, I need to understand this. So someone said that a Pixel 8 is faster than an iPhone 15. Can someone help me understand why someone would say such a thing? The iPhone 15 has a much better SoC than even the Pixel 9, and Google has said that they aren't pushing for benchmarks. This person is using a screen reader, TalkBack and VoiceOver. Meanwhile, to me, the Pixel 8 feels even slower than my iPhone SE 2020 and the Samsung Galaxy S20 FE. Like, I guess it could be their way of coping with Android, but I mean there are people who genuinely like it. And yes, it has LLM image descriptions now, so there are people who will put up with it just for that feature. But, meh, sometimes people are unknowable.
#Android #iOS #accessibility #blind
With 'accessibility', I suspect you mean accessibility for #wheelchair users? Or also for, e.g., #blind or #deaf persons?
In either case, there is mapcomplete.org/onwheels which has support for some accessibility tagging. Of course, more tags are always welcome
Using an external keyboard with VoiceOver on the iPhone: a few thoughts
#VoiceOver #blind #iPhone #Accessibility
Combining a Bluetooth keyboard with the iPhone and its VoiceOver feature can be a real game changer. I've been using mine for several years now and have seen a real improvement in my workflow. It's a great alternative to Braille Screen Input or conventional touch typing, and it brings many advantages. (1/7)
Hey everyone!
Iâm super excited to share that they hit their goal for the TTRPGs for Accessible Gaming Charity Bundle! Because of your amazing support, DOTS RPG Project can now team up with Die Hard Dice to create awesome braille dice for the visually impaired and blind community.
This is such a big win for making tabletop gaming more inclusive, and I couldn't be happier. I'm really looking forward to snagging a set of these dice myself! I'm not involved in any way other than as a supporter, but you have no idea how long I've been waiting for this.
This is just my personal huge thanks to everyone who chipped in and helped spread the word. You all rock!
itch.io/b/2623/ttrpgs-for-acce…
#AccessibleGaming #blind #BrailleDice #DOTSRPG #DieHardDice #TTRPG #ThankYou
To my last boost: please boost this for widespread reach. This is an opportunity for #braille dice sets to enter the realm of #mainstream and mass manufacturing. I haven't been able to read a full set of #dice as a #TTRPG player in the 20 years since I went #blind. It would be beautiful for something like this to be added to life and make me feel like I'm right up there with everyone else. Thank you!
Just going to add the link directly here as well.
From a Mailing List:
As some of you may already know, System76 is working on their new Linux graphical interface, the COSMIC desktop. They have created a form with some questions related to accessibility. If anyone is interested in participating in the survey, please access the address below:
docs.google.com/forms/d/e/1FAI…
#accessibility #Linux #foss #orca #blind
Anyone using NVDA, have you found with 2024.4 beta 4, that reviewing Windows Terminal is very sluggish?