People who are against #AI are against progress itself. Sure, it has its issues, but that's because those who are unable to see the positives simply do not look for them. For example, AI is how I, as a #Blind person, am able to get descriptions that sighted people wouldn't want to write themselves. It is how, at some point, I might be able to drive, something sighted people take for granted because they can just get a license and drive where they want, while I have to struggle to even get to the grocery store. There are so many benefits to it, so I'm pro AI all the way. #Technology, #Progress, #Computers, #cars

in reply to Nick's world

I'm sympathetic to this argument. I've taken the same position myself.

But we also need to look at the flip side. Progress, and especially individual convenience, at what cost? At the cost of continuing to raise the planet's temperature? At the cost of trampling on the rights of those whose words and pictures were used in the training data? I'm not sure.

Matt Campbell reshared this.

in reply to Matt Campbell

@matt this. Even if it does provide benefits, we cannot simply turn away from the consequences of getting what we have. Humans have been doing things like this for a really long time, and a lot of it is how we ended up in less than desirable situations. If accessibility is a hurdle, at what point is it OK to ignore it in favor of benefiting the many who don't need it? No wheelchair ramps, no walkable cities: all of those are choices that undoubtedly benefit somebody, like people who drive cars, but hurt people who can't. Just as an example. So just because someone can derive positives from it does not mean that overall it is inherently a good thing.

Plus, we shouldn't forget that we've been using several kinds of AI to classify images, for example, and I don't think there's a particularly good reason why an LLM should be better at this than building on less exploitative means of achieving the same thing. I don't consider myself special enough to demand everybody else just suck it up. Instead we should focus our energy on creating something that's less exploitative and works for the good of everyone, not just me.

in reply to Talon

@talon @matt IMO, we should do everything within our power to make things more accessible. If AI is required, so be it.
in reply to Talon

@talon @HNguyenLy @matt This is an old debate that is rebooted with each advance in technology. It's ultimately pointless, because, regardless of what anyone thinks or demands, the genie never goes back in the bottle. The Internet, computers generally, nuclear power, industrialization, etc. Anyone is free to stop, but others will continue to use the new tech, and leave them behind.
in reply to Matt Campbell

@matt Let me ask: would people, and I mean real people, do what they can to, let's say, make things accessible out of the goodness of their hearts? No. It's been proven that humans, as a general rule, simply do not care about such things. Having an AI do it makes life much better for people than legislation would.
in reply to Nick's world

I hear you, and I've made that argument myself. I'm just not sure if that gives us the right to take what we want by brute force, at the expense of other people and our world as a whole.

If one of us needs to use an LLM to get past an accessibility barrier that's standing in the way of employment, education, or some essential task, I don't have an argument against that. But using LLMs for convenience or entertainment seems harder to justify.

in reply to Matt Campbell

@matt Maybe so, but there are many reasons why AI can work, and while I do agree with you that companies need to tone it down, I'm against people who say that AI should be completely destroyed.
in reply to Matt Campbell

I think it's in many people's best interest to make LLMs, diffusion models, and the rest more efficient and less power-hungry. It's not like crypto, where proof of work made burning as much electricity and hardware as possible the more effective method. We now have NPUs that accelerate inference while staying under 10W, and existing models combined with novel techniques can outpace newly trained models with negligible energy consumption.

Once the AI bubble pops and big companies like OpenAI inevitably collapse under their inherently bad business models, the open-source models and distributed compute for training will still exist, and they will keep getting more efficient, driven by real use cases that people care about.
