Thread:

I've conducted several informal experiments over the last few weeks about alt text for #photos as described by humans and as provided by #AI systems.

#LLMs, despite providing a plethora of detail when describing images, still miss the nuances of what photos contain. Human #descriptions certainly continue to be better at conveying context.

#blind #accessibility #a11y #photography #images #photos

in reply to Pratik Patel

This is not to suggest that automatic descriptions aren't useful. Several tools available to #blind people are now capable of providing a good idea of the contents of photos when no descriptions are available. Apps and hardware can analyze photos and videos for quick access to one's environment, something that wasn't possible a short time ago.

Even though these tools exist, automatic #descriptions should not be a substitute for alt text.

#accessibility #a11y #photography #photos

in reply to Pratik Patel

For this experiment, I asked several friends to send me photos along with their own #descriptions.

To give you one example, I was sent a photo of a table surface with a small coffee pot and a cup of coffee with foam on top. The table also had a plate of cookies, muffins, and croissants.

The #AI-generated description identified the drink in the coffee cup as a yogurt-based concoction. It also missed the cookies on the plate.

#accessibility #a11y #blind #photos #photo #photography