The Colonization of Confidence, Sightless Scribbles
A fabulously gay blind author. sightlessscribbles.com
In the mid-2000s the ReactOS team paused development for years to conduct a project-wide audit, under accusations that a developer may have SEEN leaked Windows source code.
In the 2020s, folks keep insisting it's cool for #FLOSS devs to use AIs trained on random other projects to generate code, even though it is known that such AI assistants occasionally reproduce code verbatim, without regard to the original software license. #llm #AI #eliza #generativeAI
If you want a specific example of why many researchers in machine learning and natural language processing find the idea that LLMs like ChatGPT or Claude are "intelligent" or "conscious" laughable, this article describes one:
news.mit.edu/2025/shortcoming-…
#LLM
#ChatGPT
#Claude
#MachineLearning
#NaturalLanguageProcessing
#ML
#AI
#NLP
MIT researchers find large language models sometimes mistakenly link grammatical sequences to specific topics, then rely on these learned patterns when answering queries. MIT News | Massachusetts Institute of Technology
I'm looking into the Zig programming language, and I found this on the language designer's blog. I always appreciate seeing other people being as cranky as I am about rent-seeking and the aggressive push for LLM coding:
“In this case it's even more suspicious because the company that bills you not only counts how much you owe them, it also controls the agent's behavior in terms of how many requests it tries to make. So they could easily insert into their system prompt something like, ‘our earnings this quarter are a little short so try to pick strategies when doing agentic coding that end up earning us more API requests, but keep it subtle.’ There's no oversight. They could even make it target specific companies.”
That's some next level Bond villain shite!
Imagine having an ego that requires that kind of virtual massaging... by what amounts to a pre-programmed bot.
rollingstone.com/culture/cultu…
#prayforhumanity #itisnotAI #LLM #LLMFAIL #LLMprogrammingFAIL #internet
Grok, the AI chatbot developed by Elon Musk's xAI, keeps declaring that Musk is a peak physical specimen and one of the most brilliant minds in history. Miles Klee (Rolling Stone)
I do ironically enjoy it when a company releases a new and improved #LLM, and suddenly some extremely specialized and specific tasks that I use LLMs for perform drastically better. And suddenly they can complete the exact examples I used to provide in my prompt, all by themselves. And seem to perform the task in my exact style, even though I'm not giving them my prompt examples anymore. Hmmm, it couldn't be that they trained on my prompt data, could it? Even though they said they don't do that? Nah, of course not! They'd never!
Oh well, at least someone, somewhere, spent several billion dollars to make something I do once a week slightly easier.
Question for those of you who host an LLM yourselves with Ollama or llama.cpp and use it, for example, to generate alt text for images.
What LLM do you recommend? Which one generates a good description for screen reader users with the least amount of compute?
What's your experience with that? Bonus points for LLMs that perform really well in CPU-only situations.
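Not an authoritative answer, but to make the question concrete: here is a minimal sketch of how such a setup is often wired up, assuming Ollama is running locally on its default port and a vision-capable model such as llava has already been pulled (the model choice and prompt wording are placeholders, not recommendations):

# Minimal sketch: ask a locally hosted vision model (via Ollama's HTTP API)
# to describe an image for use as alt text.
import base64
import json
import urllib.request

def describe_image(path: str, model: str = "llava") -> str:
    # Ollama's /api/generate endpoint accepts base64-encoded images for vision models.
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = {
        "model": model,
        "prompt": "Describe this image in one or two sentences for a screen reader user.",
        "images": [image_b64],
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"].strip()

print(describe_image("photo.jpg"))

Quality and speed vary a lot by model and quantization, especially CPU-only, so it's worth benchmarking a few candidates on your own images.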
I'm in a #github internal group for high-profile FOSS projects (due to @leaflet having a few kilo-stars), and the second most-wanted feature is "plz allow us to disable copilot reviews", with the most-wanted feature being "plz allow us to block issues/PRs made with copilot".
Meanwhile, there's a grand total of zero requests for "plz put copilot in more stuff".
This should be indicative of the attitude of veteran coders towards #LLM creep.
Which #LLM that can be run locally and in a privacy-friendly way would one install in order to add image descriptions (#Bildbeschreibungen) to a photo archive (#Foto #Archiv)?
From that I want to build my own database (#Datenbank).
And no, I do not want to install photo management software.
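For the database part, a minimal sketch of what storing the generated descriptions could look like (the schema, file names, and sample row are purely illustrative; the descriptions themselves would come from whatever local model you settle on):

# Minimal sketch: keep generated image descriptions in a local SQLite database.
import sqlite3

conn = sqlite3.connect("photo_descriptions.db")
conn.execute(
    """CREATE TABLE IF NOT EXISTS descriptions (
           path TEXT PRIMARY KEY,
           description TEXT NOT NULL,
           model TEXT,
           created_at TEXT DEFAULT CURRENT_TIMESTAMP
       )"""
)

def save_description(path: str, description: str, model: str) -> None:
    # INSERT OR REPLACE lets you re-run the annotation pass over the whole archive.
    conn.execute(
        "INSERT OR REPLACE INTO descriptions (path, description, model) VALUES (?, ?, ?)",
        (path, description, model),
    )
    conn.commit()

save_description("archive/2024/beach.jpg", "A sandy beach at sunset, waves in the foreground.", "llava")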
The number one reason for (at least) weekly changes to my site is to update the AI crawler/siphon blockers ... it never stops: there are 97 of them right now 😤
› github.com/ai-robots-txt/ai.ro…
#BlockAI #AI #LLM #NightmareOnLLMStreet #Webmaster
A list of AI agents and robots to block. Contribute to ai-robots-txt/ai.robots.txt development by creating an account on GitHub. GitHub
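For context, the entries in that list boil down to robots.txt groups along these lines; this is a tiny illustrative excerpt with a few of the better-known agents, not the full set of 97:

User-agent: GPTBot
User-agent: CCBot
User-agent: ClaudeBot
User-agent: Google-Extended
Disallow: /

Of course robots.txt is only a polite request, which is part of why the list keeps growing and needs those weekly updates.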
Here is a way that I think #LLMs and #GenAI are generally a force against innovation, especially as they get used more and more.
TL;DR: 3 years ago is a long time, and techniques that old are the most popular in the training data. If a provider like Google, AWS, or Azure replaces an established API or a runtime with a new API or runtime, a bunch of LLM-generated code will break. The people who vibe code won't be able to fix the problem because nearly zero data exists in the training data set that references the new API/runtime. The LLMs will not generate correct code easily, and they will constantly be trying to edit code back to how it was done before.
This will create pressure on tech companies to keep old APIs and things running, because of the huge impact of doing anything new (that LLMs don't have in their training data). See below for an even more subtle way this will manifest.
I am showcasing (only the most egregious) bullshit that the junior developer accepted from the #LLM. The LLM used out-of-date techniques all over the place. It was using:
So I'm working on this dogforsaken codebase, and I converted it to the new OAC mechanism from the out-of-date OAI. What does my (company-imposed) AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."
So it is encouraging me to do the wrong thing and saying it's high priority.
It's worth noting that when I got the code base and it had OAI active, Python 3.9, and NodeJS 18, I got no warnings about these things. Three years ago that was state of the art.
But we are not done.
If you look at my previous post, under the erroneous "You should use OAI" message, there's a little icon and the word "Explain." You can ask the #LLM to explain the warning. Here's the explanation that Claude Sonnet 4.0 has for that warning. For those of you who don't want to read a screenshot, the LLM has prompted itself with:
Explain CloudFront Distribution Resources have an S3 Origin configured without an Origin Access Identity (OAI). issue in 70-frontend-stack.yaml at [29]
And the LLM's response to its own prompt is:
This is a false positive security warning. The issue is incorrectly flagged because:
1. You ARE using access control - Line 16-23 creates an Origin Access Control (OAC), which is the modern replacement for OAI
2. OAC is properly configured - Line 38 attaches the OAC: OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id
3. S3 bucket policy enforces OAC - Lines 120-135 ensure only CloudFront can access the S3 bucket
The security scanner likely only checks for the legacy OriginAccessIdentity property and doesn't recognize the newer OriginAccessControlId. Your configuration is actually more secure than using the deprecated OAI. The warning can be safely ignored - your setup uses the AWS-recommended OAC approach for securing S3 origins.
Thanks for wasting my time AND egregious amounts of electricity generating a pointless "high priority" security warning.
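For anyone who hasn't touched this corner of AWS, the difference the scanner is tripping over looks roughly like this in a CloudFormation template. This is a minimal, hypothetical sketch, not the actual 70-frontend-stack.yaml; the resource names and bucket reference are made up, and most of the distribution config is omitted:

# Sketch of the modern OAC approach that the scanner fails to recognize.
Resources:
  CloudFrontOriginAccessControl:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: frontend-oac
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        # ...cache behaviors, aliases, certificates, etc. omitted...
        Origins:
          - Id: s3-origin
            DomainName: !GetAtt FrontendBucket.RegionalDomainName
            # Modern mechanism: attach the OAC and leave the legacy OAI field empty.
            # A scanner that only looks for S3OriginConfig.OriginAccessIdentity will
            # flag this as "no access control" even though OAC is doing the job.
            OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id
            S3OriginConfig:
              OriginAccessIdentity: ""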
A "bag of words" is not a spouse, not a mentor, not a boss, and not a slave. It is a tool. Its purpose is to take routine work off our hands and amplify our abilities. It has no social status; it is meaningless to ask whether it is "better" than us. The real question is: do we become better when we use it?
AI is just a bag of words. How to stop seeing intelligence where there is none
Science is a "strong-link problem": even if we produce a million times more mediocre research, we will end up exactly where we are now.
If we need more genuinely strong work, what should the LLM's "bag" be filled with? We can stuff it with papers, but some of them are fabricated, some are simply wrong, and all of them contain implicit assumptions that may turn out to be false.
On top of that, key information is often missing: there is no data, and the methods are not described in enough detail.
The entrepreneur Markus Strasser, who tried to build a company along the lines of "put all the papers into the bag → ??? → profit", eventually gave up on the idea, saying that "almost nothing of what actually makes science science is published as text on the internet".
Even the best "bag" in the world is useless if you don't put the right things into it.
habr.com/ru/companies/otus/art…
Or: Claude, will you go to prom with me? Listen, I don't know whether artificial intelligence will one day destroy us, make us all rich, or something... Ксения Мосеенкова (Habr)
Big News! The completely #opensource #LLM #Apertus 🇨🇭 has been released today:
📰 swisscom.ch/en/about/news/2025…
🤝 The model supports over 1000 languages [EDIT: an earlier version claimed over 1800] and respects opt-out consent of data owners.
▶ This is great for #publicAI and #transparentAI. If you want to test it for yourself, head over to: publicai.co/
🤗 And if you want to download weights, datasets & FULL TRAINING DETAILS, you can find them here:
huggingface.co/collections/swi…
🔧 Tech report: huggingface.co/swiss-ai/Apertu…
After #Teuken7b and #Olmo2, Apertus is the next big jump in capabilities and performance of #FOSS #LLMs, while also improving #epistemicresilience and #epistemicautonomy with its multilingual approach.
I believe that especially for sensitive areas like #education, #healthcare, or #academia, there is no alternative to fully open #AI models. Everybody should start building upon them and improving them.
#KIMündigkeit #SovereignAI #FOSS #ethicalAI #swissai #LernenmitKI
We're on a journey to advance and democratize artificial intelligence through open source and open science. huggingface.co
Mastodon isn't loading the og:image preview from Luma, but I spent half a day drawing it in Inkscape, so I'm definitely not going to deprive you of it 😀
#juniorguru #mews #ai #llm #juniordevs #praha #events #tydenprodigitalnicesko #digitalnicesko
After reading about the manosphere-trained ChatGPT model OpenAI was _promoting on its front page_, I shared a couple photos with inceLLM to see how much it would neg me for not being GigaChad material...aaand ironically inceLLM, the most toxic-masculinity thing I've heard of this week, has a thing for enbies 🤣
"'Take a screenshot every few seconds' legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like 'How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?'" signal.org/blog/signal-doesnt-…
#Signal #Microsoft #Recall #MicrosoftRecall #LLM #LLMs #privacy
Signal Desktop now includes support for a new "Screen security" setting that is designed to help prevent your own computer from capturing screenshots of your Signal chats on Windows. Signal Messenger
In a move that surprises absolutely no one, GitHub now requires users to log in to browse public repositories (including open source projects). After a few (~10) requests, you get blocked (I can confirm). In order to fight AI scrapers, I guess.
So GitHub decided to blanket-limit access to open source projects as a defense against the very scourge that they (well, their parent company) unleashed on the world.
I won't be a hypocrite: it's a bit embarrassing, but undeniably satisfying, to say "told you so". I moved away from GitHub long ago and moved all my stuff to Codeberg instead. And I'm so happy I did!
Next step: radicle.xyz maybe?
github.com/orgs/community/disc…
#github #microsoft #openai #codeberg #ai #ml #llm #enshittification #foss #floss #opensource #radicle
Select Topic Area Question Body "I'm experiencing an issue when trying to access files from a GitHub repository without logging in. After 3 attempts, I receive an Error 429 (Too Many Requests) and ... GitHub
"I've had enough": #Curl developer puts his foot down against "AI slop"
golem.de/news/mir-reicht-s-cur…
> Developer @bagder is frustrated by AI-generated bug reports. Reporters will be subjected to an intelligence test in the future.
Btw, #Golem garnishes the article with an AI-generated image 🤷
But I do like the idea of the intelligence test. The question is whether captchas stand a chance against LLMs.
It's that time of the month again; this time I wrote a whole blog post about it:
“Your GitHub Copilot access has been renewed” 🤡
#github #copilot #llm #ai #genai
sethmlarson.dev/your-github-co…
As a maintainer of a "popular open source project" ever since Copilot was announced I've been receiving monthly reminders that my free GitHub Copilot access has been renewed. If I was paying for t... sethmlarson.dev
So is @mozillaofficial's #distilvit an #LLM that could be embedded in a CMS to create alt text for uploaded images?
image-to-text model for PDF.js. Contribute to mozilla/distilvit development by creating an account on GitHub. GitHub
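If the model in question is the checkpoint published on Hugging Face as mozilla/distilvit (an assumption on my part, going by the repo), then calling it from a CMS backend would be the usual transformers image-to-text pipeline; a minimal sketch:

# Minimal sketch: generate an alt-text candidate with distilvit via transformers.
# Assumes the checkpoint is available as "mozilla/distilvit" on Hugging Face.
from transformers import pipeline

captioner = pipeline("image-to-text", model="mozilla/distilvit")
result = captioner("uploaded_image.jpg")  # accepts a path, URL, or PIL image
print(result[0]["generated_text"])

For what it's worth, it appears to be a compact image-captioning model (a vision encoder paired with a distilled text decoder) rather than a chat-style LLM, which is arguably what makes embedding it in a CMS plausible in the first place.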
My 2nd @fosdem talk was Alternative Text for Images: How Bad Are Our Alt-Text Anyway?
fosdem.org/2025/schedule/event…
It is available online:
docs.google.com/presentation/d…
#FOSDEM #FOSS #AI #LLM #AltText #Accessibility
Alt Text for Images: How Bad Are they Anyway? FOSDEM, Feb 1, 2025, 25 minutes, Track: Inclusive Web, Room: K.3.201 https://fosdem. Google Docs