The screen reader demonstrated in the quoted toot is buggy. Unicode alternative alphabets are years old and widely used for this purpose; they should not be a surprise to any modern software.

My iPhone screen reader says “sinister potato.”

From: @FreakyFwoof
https://universeodon.com/@FreakyFwoof/110277460860510209

in reply to Jamie McCarthy

Not really. It's #VoiceOver for #iOS, and different synths treat it differently, but it is not a reliable or nice experience. You may have an easy ride, but that doesn't mean everyone else will, and my post proves this. There's simply no need for this kind of frilly behaviour. Standard lettering is not only understandable by a #ScreenReader, but by a non-English speaker too, who may not recognise those letters for their so-called intended purpose.
in reply to Andre Louis

Huh. Since Accessibility > Spoken Content > Speak Selection knows how to handle this correctly, why on earth does Accessibility > VoiceOver make such a mess of it? That’s very strange. Apple needs to fix this.
in reply to Jamie McCarthy

It's synth dependent. Mine is UK Siri Female; if you're using a different TTS engine, then that's what you get. You are likely using something else. Apple need to fix nothing. People just don't need to use these stupid characters in this way.
in reply to Andre Louis

No, I’m using English (US) Samantha non-enhanced in both. The other random voice that I spot-checked VoiceOver with also failed.

I agree people shouldn’t use stupid characters, but also I think Apple should fix this. I expect better UI from Apple.

in reply to Andre Louis

Not only that, but the original also seems to be either VoiceOver on the Mac or iOS. If you set your phone to a specific language and disable automatic language switching, which some of us do because it loves to randomly switch languages (detecting languages is hard), then stuff like this happens. So sure, it *should* just work. But it's not reliable, and making it work is not always easy.
in reply to Talon

@talon Hm. My “Spoken Content” has “Detect Languages” turned on, and I see its default “Voices” setting includes a different voice for at least half of the foreign languages it lists. Is that why “Spoken Content” recognizes the Unicode alternate letters and converts them into English words?

I have “Detect Languages” turned on for “VoiceOver” as well, but as noted it still fails.

in reply to Jamie McCarthy

Yup, that's why Spoken Content does it. I think, but am not sure, that Apple uses different APIs for synthesizing speech in Spoken Content and VoiceOver. For example, Spoken Content uses the neural variants of the Siri voices, but VoiceOver only uses the concatenative ones, probably for battery reasons. You can try adding the languages you want it to recognize to the language rotor and making sure the rotor itself is set to default. That has the highest chance of working. But in general, VoiceOver and pronouncing things is a bit hit or miss. I believe it's this ➡️ emoji which for me always gets read in Japanese. It's the one often used in written directory listings.
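For context, AVSpeechSynthesizer is the public speech API on Apple platforms; whether Spoken Content or VoiceOver route through it internally is the guess above, not something Apple documents. A minimal Swift sketch of choosing an installed voice by quality, assuming at least one en-US voice is downloaded:

```swift
import AVFoundation

// Sketch only: enumerate installed en-US voices and prefer the highest quality
// (premium > enhanced > default). Which voices exist depends on what the user
// has downloaded in Settings.
let voices = AVSpeechSynthesisVoice.speechVoices()
    .filter { $0.language == "en-US" }
    .sorted { $0.quality.rawValue > $1.quality.rawValue }

// "sinister" written with Mathematical Monospace small letters (U+1D68A onward).
let fancy = "\u{1D69C}\u{1D692}\u{1D697}\u{1D692}\u{1D69C}\u{1D69D}\u{1D68E}\u{1D69B}"

let utterance = AVSpeechUtterance(string: fancy)
utterance.voice = voices.first            // how this is spoken varies by voice
let synthesizer = AVSpeechSynthesizer()   // keep a strong reference while speaking
synthesizer.speak(utterance)
```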
in reply to Talon

@talon Unicode defines it, but barely any screen reader is sophisticated enough to do the proper thing, which would be to treat a block of those symbols like a font, because those symbols effectively are a font. It might say 'modifier bold' once and then the text, rather than reading that modifier for each letter. But that requires a heuristic that could quickly run into problems in any kind of dynamic flow, requiring an absolute ton of state and backtracking capability.
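To make that heuristic concrete, here is a rough Swift sketch, assuming only the Mathematical Alphanumeric Symbols block (U+1D400–U+1D7FF) plus whitespace; it illustrates the idea and is not what any shipping screen reader does:

```swift
import Foundation

// If a run of text consists entirely of Mathematical Alphanumeric Symbols
// (and whitespace), announce it once as styled text and fold it back to
// plain letters with NFKC compatibility normalization.
func readFancyRun(_ run: String) -> String {
    let fancyBlock: ClosedRange<UInt32> = 0x1D400...0x1D7FF
    let allFancy = !run.isEmpty && run.unicodeScalars.allSatisfy {
        fancyBlock.contains($0.value) || $0.properties.isWhitespace
    }
    guard allFancy else { return run }
    // NFKC maps e.g. U+1D69C MATHEMATICAL MONOSPACE SMALL S back to "s".
    let plain = run.precomposedStringWithCompatibilityMapping
    return "styled text: " + plain
}

// "sinister" written with Mathematical Monospace small letters.
let sample = "\u{1D69C}\u{1D692}\u{1D697}\u{1D692}\u{1D69C}\u{1D69D}\u{1D68E}\u{1D69B}"
print(readFancyRun(sample)) // styled text: sinister
```

NFKC normalization already knows how to fold these code points back to plain letters; as the post says, the hard part is the run detection, state, and backtracking around it.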
in reply to x0

@x0 Which kind of supports the initial point. There is no screen reader that does it all well. There are quirks across the board, from Apple operating systems to Windows and Android to Linux. Making a speech system that can read all of that already seems very difficult, so a system which can also detect all the different ways in which Unicode can be used and abused to write text feels almost impossible.
in reply to Talon

@talon I'm unsure if that combination of symbols even displays on most platforms, given those are in some of the very recent blocks. The responsibility here should actually fall to the API exposing these things, which would work for everything but basic text editors. These characters carry what is effectively font style metadata. Presumably the rendering engine is translating them to some kind of monospace font under the hood, which would, I think, eventually map to regular glyph indexes. That is, the browser should convert a chain of these into a block of plain letters using the implied font.
in reply to x0

@talon The generated markup that gets exposed to the accessibility tree, then, would be a span or something that specifies the font used, containing these letters. The screen reader may read out the font name if you have that setting on; otherwise it would just read the letters as-is.
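A hypothetical Swift sketch of what such a span could carry: the plain letters plus a single implied style name. The three ranges below are a simplification of the block, which has many more styles (script, fraktur, double-struck, sans-serif, digits, and so on):

```swift
import Foundation

// Map a scalar in the Mathematical Alphanumeric Symbols block to the style it implies.
func impliedStyle(_ scalar: Unicode.Scalar) -> String? {
    switch scalar.value {
    case 0x1D400...0x1D433: return "bold"
    case 0x1D434...0x1D467: return "italic"
    case 0x1D670...0x1D6A3: return "monospace"
    default: return nil
    }
}

// What a rendering engine might hand to the accessibility tree for a run of fancy text.
func accessibleSpan(for text: String) -> (style: String?, plainText: String) {
    let styles = Set(text.unicodeScalars.compactMap(impliedStyle))
    let plain = text.precomposedStringWithCompatibilityMapping // NFKC folds to ASCII
    return (styles.count == 1 ? styles.first : nil, plain)
}
```

A screen reader consuming that span could then say the style name once, if the user has font announcements enabled, and read the plain letters normally.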
in reply to x0

@x0 So what you're suggesting is something like a HarfBuzz equivalent for accessible text?
in reply to Talon

@talon I suppose; I don't quite know what HarfBuzz does. I'm thinking of a translation step in existing rendering pipelines, in apps like browsers that support that kind of markup.
in reply to Talon

@talon Battery reasons! The plot thickens! That could make sense I guess, but it's pretty bizarre that Apple engineers haven't been able to figure out how to get this extremely common UI complaint resolved.
in reply to Jamie McCarthy

Put it this way. I have a long list of bugs in VoiceOver, both for Mac and iOS, that I personally really wish they would address, but they don't, even after I submitted feedback. And this particular problem isn't even on it. :(
in reply to Jamie McCarthy

In your video post, the language used is *definitely* not English; it sounds like the Thai voice, actually, so there's that.