What are your thoughts on #AI and #accessibility? Do you think AI can actually help developers make more accessible software, or that it can play a bigger role in users' lives? I'm going to take computer vision off the table because that's been talked to death. And yes, I'm using the word AI deliberately even though I understand it's kind of just a buzzword. I just want to see what comes back from this.
in reply to simon.old

Yes.

As somebody far wiser once said, this is the worst AI we'll ever have.

I think the current phase of AI development could be compared to the 56k dial-up phase of the internet, or the microprocessor kits you had to assemble yourself to get a microcomputer at home: extremely impressive, requiring some expertise to use well, feeling like a toy to many, yet with incredible potential.

in reply to simon.old

Possibly. Fine-tuned models that are actually trained on non-garbage code, combined with very specific prompts, might provide better defaults.
in reply to Florian

@zersiax At the very least, it can give normal keyboard-driven instructions instead of "click here, click there". As for accessible code, I'd say yes; in my experience it has helped with this too. But Florian is also right: it needs to be trained on higher-quality code.
in reply to André Polykanine

@menelion @zersiax Yeah, I'm not sure what it's trained on will improve until we build a more accessible web (and start caring about reducing errors, some for legal reasons, others out of passion). I would imagine seeing errors go down in things like the WebAIM Million would help AI in certain areas or parts of its knowledge more than others, but that's a super slow trend and someone would need to observe it.
in reply to Tamas G

@Tamasg @menelion Ehh, custom prompts can really help, too. A big reason why devs don't use semantic HTML and native controls is that they can't figure out how to style those exactly the way the design briefs tell them to; I've made that point before. If AI could be made to essentially spit out CSS resets that are always applicable in all browsers (this is a pipe dream, but hear me out), at that point we could go back to HTML = structure and CSS = presentation. But really, the better fix is for browsers to agree on how one can strip away all the browser-generated styling and roll your own, or for companies to stop wanting to stand out by making their controls look different from the norm (again, a pipe dream) xd
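For what it's worth, browsers already expose a partial version of that "strip the native styling" reset through the standard CSS `appearance` property. A minimal sketch of the idea (the class name is made up, and real design systems need much more than this):

```html
<!-- Keep the native <button> semantics, focusability, and keyboard
     behavior, but strip the browser-generated look so it can be
     restyled from scratch. -->
<style>
  button.brand-button {
    appearance: none;  /* drop native widget styling where supported */
    border: none;
    font: inherit;     /* form controls don't inherit fonts by default */
    background: #335577;
    color: #ffffff;
    padding: 0.5em 1em;
    border-radius: 4px;
  }
</style>
<button class="brand-button">Save</button>
```

The point being: the accessibility tree still sees a real button, while the visual layer is entirely the author's.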
in reply to Florian

@zersiax @menelion Yeah, and most design schools don't teach that dev vocabulary either, so designers can't say which semantic control is best for the job. I think designers will always want that flexibility (as do some developers), and accessibility can erode it (talked about this in engineering.atspotify.com/2023…). AI's job will be to balance this in better ways, but it will need humans.
in reply to simon.old

The forever caveat to many things about AI is: "If it can stop confidently lying," and this isn't an exception. It's capable of producing good, accessible code, and can provide well-written instructions for QA testers, but telling developers to rely upon it is like telling someone to learn Python *only* with ChatGPT. Eventually it'll tell you to use some module from Python 2.6 that was deprecated 10 years ago and you'll spend so much time Googling that you might as well have not tried.
in reply to Tristan

@tristan Yeah, this is pretty much what I think. Someone could probably learn a lot from an LLM, and I'm sure one could detect and fix accessibility issues with some degree of accuracy above 0%, but who's going to notice when GPT gets it wrong? If you have to ask an LLM how to do something you yourself can't test in a meaningful way, you'll never know whether it's correct. My friend loves to point out that GPT 3.5 will confidently and consistently insist role="dropdown" is a thing that exists.
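(For anyone wondering: `role="dropdown"` is not a WAI-ARIA role at all. A select-style widget built from scratch would use the listbox pattern instead; a minimal sketch, omitting the keyboard handling and labelling a real implementation needs:)

```html
<!-- Invalid: "dropdown" is not a role in the WAI-ARIA spec. -->
<!-- <div role="dropdown"> ... </div> -->

<!-- A custom select follows the listbox pattern instead
     (focus management and arrow-key handling omitted): -->
<div role="listbox" aria-label="Fruit" tabindex="0">
  <div role="option" aria-selected="true">Apple</div>
  <div role="option" aria-selected="false">Banana</div>
</div>
```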
in reply to simon.old

@tristan Not a software dev... yet, but LLM hallucinations are a serious problem that really needs to be solved before truly useful and accurate AI applications can be developed. I'm honestly a little wary of the blind community's general enthusiasm for LLM-powered descriptions; we are at a particular disadvantage when it comes to hallucinations. We can't necessarily verify whether GPT is telling the truth or just spitting out garbage, and the general lack of understanding of how these tools work under the hood is concerning to me.