in reply to lianna

As with most things, it comes down to preference. However, having them in your text directly means that a #ScreenReader user who has lowered the #Verbosity of the punctuation they hear might not even realise there were two tags in that sentence already.
Obviously that only works for speech users and if the tags make sense within the context of the words.
My rule of thumb is to embed them within my text if they'd make syntactic sense without the octothorpe, and add the ones that don't afterward.
in reply to Tom

Thanks for the feedback, to both of you.

Yeah, I was wondering that. I can imagine arguments for either option being better than the other.

On one hand, I imagine that constant interruptions with the word 'hash' or the word 'hashtag' are annoying, but on the other hand I assumed that it was preferable to the big wall at the end.

However, I also assumed that people would simply skip the narration of the post once they reach a wall of hashtags.
But then again, I can also imagine that sometimes there's content after the wall, so that isn't exactly foolproof and you might end up skipping too much.

I only tested the inbuilt Android screen reader, which read out every hash symbol as the word "hashtag", but I also imagined other screen readers simply not reading out hashtags until after the post, or only when specifically prompted.

That's why I wanted to ask to make sure. 😅 It seems like it's a preference thing with no clear answer, so far.

in reply to Tom

The frustrating thing about all this is that HTML already has features to, for example, distinguish a post's header and footer from its body. Social media just doesn't use them, simply smacking a post's content onto the site as plain text and leaving the screen reader to guess what's going on.

It definitely is the site's responsibility, but writing proper #semanticHTML is unfortunately not a priority.
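
Just to illustrate what I mean (a rough sketch of my own, not any real platform's markup; the names and URLs are made up): a post could be marked up roughly like this, so a screen reader could announce the header, body, and tag list as separate parts, and let the user skip the tags entirely.

```html
<!-- Illustrative sketch only: one way a post could use semantic HTML. -->
<article aria-label="Post by Tom">
  <header>
    <h2>Tom</h2>
    <time datetime="2024-05-01T12:00">1 May 2024</time>
  </header>
  <p>The post's actual text goes here.</p>
  <footer>
    <!-- Tags grouped in the footer as a labelled list, so a screen
         reader can announce or skip them instead of reading inline noise. -->
    <nav aria-label="Tags">
      <ul>
        <li><a href="/tags/semanticHTML">#semanticHTML</a></li>
        <li><a href="/tags/accessibility">#accessibility</a></li>
      </ul>
    </nav>
  </footer>
</article>
```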

in reply to Tom

Personally, I don't mind; I mix them in with my post content to save space. Screen readers have preferences that allow users to change the way they read punctuation, based on what app they're using. So I just have my screen reader set not to read the hash character when I'm in a fediverse app. However, unfortunately, the training on how to use assistive technology is extremely lacking. Most users who need a screen reader are just handed the software and told to figure it out themselves. So for them, a post with hashtags might read out as "I'm now using the link number nvda link number screenreader to read this link number fediverse post." So the question I struggle with: should we all change our behavior around hashtags, knowing that eighty percent of screen reader users don't know how to fix the problem (or even that they can), or should we advocate for more and better training for screen reader users? My personal answer is that I choose to mix hashtags in with my post. If a fellow blind person tells me they're struggling with it, now I get the chance to teach them something new. However, if you're a sighted person, it could be both uncomfortable and unhelpful for you to tell a struggling blind person "learn how to use your screen reader!" So you may choose to just put hashtags at the end for accessibility reasons.
in reply to 🇨🇦Samuel Proulx🇨🇦

With any software, you have to expect that end users will just stick with the default settings and customise nothing. I try my best to make sure things work for those people, but without being blind myself, it's difficult to test.

As a note on the subject, I always place hashtags at the end of posts. Even for me, a dyslexic, throwing them into the main text can make it harder to read.

in reply to Tom

The trouble is that screen readers are so customizable and powerful that if you only use the default settings, you'll find there are many apps you can't use, many things you just can't do, and everything will be slow and awkward for you. The default settings can't be improved, because the "better" settings will be different depending on what application you're using, whether you can read Braille, how well you can hear, and lots of other factors. Every expert screen reader user winds up with completely different settings, and probably multiple profiles that let the screen reader automatically change settings for each application. And that's not even getting into third-party add-ons (www.nvda-addons.org) that are required to do some things in some apps. Just like an IDE or a database, a screen reader isn't really something you can just pull off the shelf and use effectively.
in reply to Tom

Sure, if they're willing to sit down and read the manual, learn all the hotkeys, and understand all the settings. The first time I got my own computer, my father gave me the training tapes for the screen reader. I believe they were sixteen 90-minute tapes that covered everything from basic use to scripting. He made me sit down and do all of the activities along with the tape; I'd hear it done, pause the tape, and do it myself. As a 9-year-old, I hated every minute of it. But today I have a full-time job in tech and program as a hobby. None of it would have been possible without being forced to just grind through the training. But if you went blind at age 37, you have other worries, and priorities, and things you urgently need to learn. You're probably not going to devote weeks or months to learning how to use a screen reader; instead you'll struggle with the default settings and hate using your computer or phone.
in reply to 🇨🇦Samuel Proulx🇨🇦

Oh, and bonus points: the training was only in English. Lucky for me, English is my native language. I think that might be slightly better today, at least for the major languages, but I wouldn't be surprised to learn that many blind people can't access the training because it's not even available in their language. And we're not even talking about folks with other challenges on top of blindness (aging, other mental differences, etc.). I guess what I'm saying is that these are hard problems to solve, and that in order to use a screen reader well, life has to have handed you a lot of other privileges first.
in reply to 🇨🇦Samuel Proulx🇨🇦

Hell, I call myself an expert, and there are settings where even I have no idea when or why I would change them, or what they do. "Report normalized when navigating by character" can be checked or unchecked. And right next to it I can enable or disable "Unicode normalization". I'm at least sure these things are related, I guess. But hey! I do know that "Unicode consortium data" should be enabled under "extra dictionaries for character and symbol processing" when I want to hear emojis, and disabled when I don't. Isn't it obvious that blind people hearing emoji spam should go to "extra dictionaries for character and symbol processing" and uncheck something to fix the problem? I mean, of course! Everyone should just know that. Screen readers are far too powerful to have a setting called "read emoji" that you could check or uncheck. Sorry, I'm done ranting. Text STOP to unsubscribe from screen reader rants. You just got me started on one of my pet peeves. I wish more UX designers worked on assistive technology. Because as things get more and more complicated, and the aging population increases, we urgently need to figure out how to make this easier without taking away the power that experts rely on.