I didn’t get a chance to ask the question during the panel, so I’ll ask it out loud here :)

I tend to think about behaviour on social media with a two-level mental model: the interaction level and the collective level.

If you want to moderate interaction-level content, that’s things like “is this image problematic?” or “is this person harassing someone else?” and so on. That seems to be what the DSA mostly addresses? Which is great! We need more protection there.

But the collective level is something I’m curious about: what happens when attacks are designed to be “harmless” or “not violating” at the interaction level, yet deeply harmful at the collective level? How does one defend against a synthetic threat carefully designed to target a vulnerable collective in a way that no interaction-level defense is really capable of handling?

Do we have regulations around this yet? Is it a problem we’re trying to address at the governance level? (I know the tooling is nearly non-existent, but it’s slowly starting to emerge. It would be encouraging to see similar momentum in governance.)

#FOSDEM #FOSDEM2026 fosstodon.org/@Gina/1159951072…

in reply to modulux

@modulux A good example would be identifying a wave of spam from multiple individual accounts, where each account sends a “normal” amount of traffic but the aggregate adds up to a disproportionate volume.
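To make that concrete, here’s a minimal sketch (in Python, with made-up thresholds and a made-up data shape, not any real platform’s policy) of how a collective-level check differs from a per-account one:

```python
from collections import Counter

# Illustrative assumptions only: thresholds and the (sender, target) data
# shape are hypothetical, not taken from any real moderation system.
PER_ACCOUNT_LIMIT = 20    # hypothetical interaction-level rate limit
AGGREGATE_LIMIT = 200     # hypothetical collective-level threshold per target

def flag_collective_wave(messages):
    """messages: iterable of (sender, target) pairs seen in one time window."""
    per_sender = Counter(sender for sender, _ in messages)
    per_target = Counter(target for _, target in messages)

    # Interaction-level view: which individual accounts exceed the limit?
    noisy_senders = {s for s, n in per_sender.items() if n > PER_ACCOUNT_LIMIT}

    # Collective-level view: which targets receive a disproportionate combined
    # volume, even though every sender stays under its own limit?
    swarmed_targets = {t for t, n in per_target.items() if n > AGGREGATE_LIMIT}

    return noisy_senders, swarmed_targets
```

The point of the sketch is that `swarmed_targets` can be non-empty while `noisy_senders` is empty: a purely interaction-level filter, which only ever looks at one account at a time, never sees the wave.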

Another would be a group that targets a subset of a community with messages that, individually, do not break any rules, but whose combined effect is that the targeted community members feel so unwelcome that they can’t participate in the space without harming their mental health.

When it’s severe enough and targeted enough, this can erode a community over time to the point where it collapses.