
Items tagged with: llm


I'm in a #github internal group for high-profile FOSS projects (due to @leaflet having a few kilo-stars), and the second most-wanted feature is "plz allow us to disable copilot reviews", with the most-wanted feature being "plz allow us to block issues/PRs made with copilot".

Meanwhile, there's a grand total of zero requests for "plz put copilot in more stuff".

This should be indicative of the attitude of veteran coders towards #LLM creep.


This reminded me to work on an #LLM RSS reader.
It simply pulls the CBS News RSS feed into a list. That list is passed to the LLM, which decides the reading order. Each article is then summarized and read aloud by eSpeak if `--speak` is used.
Smaller models sometimes had trouble reproducing the links, getting confused and trying to construct a URL from the title. Qwen3-30B-A3B-Instruct-2507 did a good job.
`--save` saves the read links to a file so they aren't repeated.
github.com/Jay4242/llm-scripts…
#llm
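The pipeline described above can be sketched like this (a minimal sketch, not the actual script from the repo; the function names and prompt wording are mine, and having the LLM return only numbers is one way to sidestep the URL-mangling issue mentioned):

```python
import subprocess
import urllib.request
import xml.etree.ElementTree as ET

def fetch_items(feed_url):
    """Pull an RSS feed into a list of {'title', 'link'} dicts."""
    with urllib.request.urlopen(feed_url) as resp:
        tree = ET.parse(resp)
    return [{"title": item.findtext("title"), "link": item.findtext("link")}
            for item in tree.iter("item")]

def filter_unseen(items, seen_links):
    """Skip links already recorded by --save so they aren't repeated."""
    return [i for i in items if i["link"] not in seen_links]

def order_by_llm(items, ask):
    """Ask the LLM for a reading order. Only numbers come back, and the
    original links are looked up locally -- smaller models never get the
    chance to hallucinate a URL from a title."""
    numbered = "\n".join(f"{n}. {i['title']}" for n, i in enumerate(items, 1))
    reply = ask("Order these headlines by importance; reply with the "
                "numbers only, one per line:\n" + numbered)
    picks = [int(tok.rstrip(".")) - 1
             for tok in reply.split() if tok.rstrip(".").isdigit()]
    return [items[n] for n in picks if 0 <= n < len(items)]

def speak(text):
    """Read a summary aloud with espeak when --speak is given."""
    subprocess.run(["espeak", text], check=True)
```

`ask` is any callable that sends a prompt to your model and returns its reply, so the same skeleton works against a local server or a hosted API.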


Oh no. This doesn't seem quite ... right.

#AI #LLM

#AI #llm


Which locally usable, privacy-friendly #LLM would one want to install to add image descriptions to a photo archive? #Foto #Archiv #Bildbeschreibungen

From that I want to build my own #Datenbank (database).

And no, I don't want to install photo management software.

#OpenSource #FOSS



Here is a way that I think #LLMs and #GenAI are generally a force against innovation, especially as they get used more and more.

TL;DR: 3 years ago is a long time, and techniques that old are the most popular in the training data. If a company like Google, AWS, or Azure replaces an established API or runtime with a new one, a bunch of LLM-generated code will break. The people who vibe code won't be able to fix the problem because nearly zero data referencing the new API/runtime exists in the training set. The LLMs will not generate correct code easily, and they will constantly try to edit code back to how it was done before.

This will create pressure on tech companies to keep old APIs and things running, because of the huge impact it will have to do something new (that LLMs don't have in their training data). See below for an even more subtle way this will manifest.

I am showcasing (only the most egregious) bullshit that the junior developer accepted from the #LLM. The LLM used out-of-date techniques all over the place. It was using:

  • AWS Lambda Python 3.9 runtime (will be EoL in about 3 months)
  • AWS Lambda NodeJS 18.x runtime (already deprecated by the time the person gave me the code)
  • Origin Access Identity (an authentication/authorization mechanism that started being deprecated when OAC was announced 3 years ago)

So I'm working on this dogforsaken codebase, and I converted it from the out-of-date OAI to the new OAC mechanism. What does my (company-imposed) AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."

So it is encouraging me to do the wrong thing and saying it's high priority.

It's worth noting that when I got the code base and it had OAI active, Python 3.9, and NodeJS 18, I got no warnings about these things. Three years ago that was state of the art.
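For context, the OAC wiring involved looks roughly like this in CloudFormation (a minimal sketch; the bucket and the `Name` value are illustrative, not from the actual stack):

```yaml
Resources:
  # Modern Origin Access Control (replaces the deprecated OAI)
  CloudFrontOriginAccessControl:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: frontend-oac            # illustrative name
        OriginAccessControlOriginType: s3
        SigningBehavior: always
        SigningProtocol: sigv4

  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: s3-origin
            DomainName: !GetAtt ContentBucket.RegionalDomainName
            # OAC is attached here; a scanner that only looks for the
            # legacy OriginAccessIdentity property misses it entirely
            OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id
            S3OriginConfig:
              OriginAccessIdentity: ""   # legacy field left empty
```

A checker keyed on the old `OriginAccessIdentity` property sees the empty string and flags the origin as unprotected, even though `OriginAccessControlId` is doing the job.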


But we are not done.

Teach your developers to ignore #AI-generated warnings with this one weird trick!


If you look at my previous post, under the erroneous "You should use OAI" message, there's a little icon and the word "Explain." You can ask the #LLM to explain the warning. Here's the explanation that Claude Sonnet 4.0 has for that warning. For those of you who don't want to read a screenshot, the LLM has prompted itself with:

Explain CloudFront Distribution Resources have an S3 Origin configured without an Origin Access Identity (OAI). issue in 70-frontend-stack.yaml at [29]


And the LLM's response to its own prompt is:

This is a false positive security warning. The issue is incorrectly flagged because:

You ARE using access control - Line 16-23 creates an Origin Access Control (OAC), which is the modern replacement for OAI

OAC is properly configured - Line 38 attaches the OAC: OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id

S3 bucket policy enforces OAC - Lines 120-135 ensure only CloudFront can access the S3 bucket

The security scanner likely only checks for the legacy OriginAccessIdentity property and doesn't recognize the newer OriginAccessControlId. Your configuration is actually more secure than using the deprecated OAI.

The warning can be safely ignored - your setup uses the AWS-recommended OAC approach for securing S3 origins.


Thanks for wasting my time AND egregious amounts of electricity generating a pointless "high priority" security warning.

#AI #llm


#ai #LLM

A "bag of words" is not a spouse, not a mentor, not a boss, and not a slave. It is a tool. Its purpose is to take routine work off our shoulders and amplify our abilities. It has no social status; it is meaningless to ask whether it is "better" than us. The real question is: do we become better when we use it?

#AI #llm


AI is just a bag of words. How to stop seeing intelligence where there is none

#ai #LLM

Science is a "strong-link problem": even if we produce a million times more mediocre research, we will end up right where we are now.
If we need more truly strong work, what should we fill the LLM's "bag" with? We can stuff it with papers, but some of them are fabricated, some are simply wrong, and all of them contain implicit assumptions that may turn out to be false.
On top of that, key information is often missing: there is no data, and methods are not described in enough detail.

The entrepreneur Markus Strasser, who tried to build a company along the lines of "put all the papers into the 'bag' → ??? → profit," eventually gave up on the idea, stating that "almost nothing of what really makes science science is published as text on the internet."

Even the best "bag" in the world is useless if you don't put the right things into it.

habr.com/ru/companies/otus/art…

#AI #llm


A new type of career opportunity emerged from vibe coding because vibe coders didn't know what they were doing 😉

#ai #llm #jobs

#AI #jobs #llm


Big News! The completely #opensource #LLM #Apertus 🇨🇭 has been released today:

📰 swisscom.ch/en/about/news/2025…

🤝 The model supports over 1000 languages [EDIT: an earlier version claimed over 1800] and respects opt-out consent of data owners.

▶ This is great for #publicAI and #transparentAI. If you want to test it for yourself, head over to: publicai.co/

🤗 And if you want to download weights, datasets & FULL TRAINING DETAILS, you can find them here:
huggingface.co/collections/swi…

🔧 Tech report: huggingface.co/swiss-ai/Apertu…

After #Teuken7b and #Olmo2, Apertus is the next big jump in capabilities and performance of #FOSS #LLMs, while also improving #epistemicresilience and #epistemicautonomy with its multilingual approach.

I believe that especially for sensitive areas like #education, #healthcare, or #academia, there is no alternative to fully open #AI models. Everybody should start building upon them and improving them.

#KIMündigkeit #SovereignAI #FOSS #ethicalAI #swissai #LernenmitKI


#llm #tts



😂 Someone excitedly showed me how #ChatGPT fixed a bug in my script. The problem is that it completely refactored the code, expanding 50 lines into 95. I fixed the bug by changing only one line of the original code. In its defense, it was trying to be smart by converting small bits of boilerplate into functions with try/catch blocks, which made the code longer. That might be fine in a bigger codebase, but it was unnecessary for a small standalone script, and the result was much harder to follow. #LLM #ML #AI


There are plenty of vocal Ollama haters on social media, but look at the GitHub stars! I think their keep-it-simple strategy works.
* Ollama: 149895
* llama.cpp: 84535
* vLLM: 54850
* SGLang: 16789
#LLM #ML #AI
#AI #ML #llm



#AI #ML #llm #openai #gpt


Do you use a PAID version of some AI chatbot?
#LLM #AI

  • Yes (0 votes)
  • No (0 votes)
  • I don't use LLMs at all (0 votes)
Poll ended: 2 months ago

#AI #llm


OMG, I just came across a podcast called Flesh and Code about people having relationships with LLMs on platforms like Replika. R/Replika has over 80K members! It's both sad and wild. Maybe because I've been playing with dumb LLMs since GPT-2 and understand how they work better than the average person, LLMs never really worked for me. lol Since Grok on X now has a porn companionship feature that anyone 12+ can access, I guess things can only get worse from here. #LLM #ML #AI reddit.com/r/replika/
#AI #ML #llm


#AI #ML #llm


After reading about the manosphere-trained ChatGPT model OpenAI was _promoting on its front page_, I shared a couple photos with inceLLM to see how much it would neg me for not being GigaChad material...aaand ironically inceLLM, the most toxic-masculinity thing I've heard of this week, has a thing for enbies 🤣

#NonbinaryPride #ToxicMasculinity #LLM


"'Take a screenshot every few seconds' legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like 'How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?'" signal.org/blog/signal-doesnt-…

#Signal #Microsoft #Recall #MicrosoftRecall #LLM #LLMs #privacy


In a move that surprises absolutely no one, GitHub now requires users to log in to browse public repositories (including open source projects). After a few (~10) requests, you get blocked (I can confirm). To fight AI scrapers, I guess.

So, GitHub decided to blanket-limit access to open source projects as a defense against the very scourge that they(r parent company) unleashed on the world.

I won't be a hypocrite: it's a bit embarrassing, but undeniably satisfying, to say "told you so". I moved away from GitHub long ago and moved all my stuff to Codeberg instead. And I'm so happy I did!

Next step: radicle.xyz maybe?

github.com/orgs/community/disc…

#github #microsoft #openai #codeberg #ai #ml #llm #enshittification #foss #floss #opensource #radicle



#AI #ML #llm #llama


😲 DeepSeek-V3-4bit runs at >20 tokens per second and <200W using MLX on an M3 Ultra with 512GB. This might be the best and most user-friendly way to run DeepSeek-V3 on consumer hardware, possibly the most affordable too. You can finally run a GPT-4o level model locally, with possibly even better quality. #LLM #AI #ML #DeepSeek #OpenAI #GPT #OpenWeight #OpenSource venturebeat.com/ai/deepseek-v3…


Airbnb migrated 3.5k React component test files from Enzyme to RTL using LLMs and automation in just six weeks. The effort was originally estimated at 1.5 years of manual engineering time. #LLM #AI #ML #Automation
medium.com/airbnb-engineering/…




#llm #cnb




I'm currently playing with the LLM DeepSeek R1 github.com/deepseek-ai/DeepSee… It looks like this open-source model, which is available for free (chat.deepseek.com, tick Deepthink), matches the quality of OpenAI's o1 model.
On my local machine, though, there's no way I can run the 32B or even the 70B model (they take up about 40 GB on disk); my laptop breaks a sweat even with 8B models. :)
#DeepSeek #LLM


Every once in a while I try out some of those local #LLMs to see whether they could be used practically without internet access, and so far, still no. :D
#llm


Thx for your link and efforts @Seirdy !

All this said, being part of a decentralized web, as pointed out in this toot, our publicly visible interactions land on other instances and servers of the #fediVerse and can be scraped there. I wonder whether this situation might, or should, lead to a federation of servers that share the same robots.txt "ideals".

As @Matthias pointed out in his short investigation of the AI matter, this has (in my eyes) already reached unimagined levels of criminal and without any doubt unethical behavior, not to mention the range of options rogue actors have at hand.

It's evident why, for example, the elongated one immediately closed down access to X's public tweets, and I guess other companies did the same for the same reasons. Obviously the very first reason was to protect their advantage over the hoarded data sets used to train their AI in the first place. Yet, considering the latest behavior of the new owner of #twitter, nothing less than the creation of #AI-driven lists of "political" enemies, and not only from all the collected data on his platform, is to be expected. An international political nightmare of epic proportions. Enough material for dystopian books and articles by people like @Cory Doctorow, @Mike Masnick ✅, @Eva Wolfangel, @Taylor Lorenz, @Jeff Jarvis, @Elena Matera, @Gustavo Antúnez 🇺🇾🇦🇷, to mention a few of the #journalism community, more than one #podcast episode by @Tim Pritlove and @linuzifer, or some lifetime legal cases for @Max Schrems.

What we are facing now is the fact that we need to protect our and our users' data and privacy because of the advanced capabilities of #LLM. We are basically forced to consider switching to private/restricted posts and closing down our servers, as not only are the legal jurisdictions far too scattered across the different countries and ICANN details, but legislation and comprehension by the legislators is simply non-existent, as @Anke Domscheit-Berg could probably agree.

So to speak, it looks like we need to go dark, a fact that will drive us even further into disappearing, as people will have less chance to see what we are all about, further advancing the advantages of the already established players in the social web space.
Just as Prof. Dr. Peter Kruse put it in his talk on YT, "The network is challenging us" (min 2:42), more than 14 years ago:
"With semantic understanding we'll have the real big brother. Someone is getting the best out of it and the rest will suffer."


#Slop is low-quality media - including writing and images - made using generative artificial intelligence technology.


Source: Wikipedia.

Open source projects have to deal with a growing number of low-quality vulnerability reports based on AI. See for example this comment from Daniel Stenberg, maintainer of #Curl:

I'm sorry you feel that way, but you need to realize your own role here. We receive AI slop like this regularly and at volume. You contribute to unnecessary load of curl maintainers and I refuse to take that lightly and I am determined to act swiftly against it. Now and going forward.

You submitted what seems to be an obvious AI slop "report" where you say there is a security problem, probably because an AI tricked you into believing this. You then waste our time by not telling us that an AI did this for you and you then continue the discussion with even more crap responses - seemingly also generated by AI.

Read more at HackerOne: Buffer Overflow Risk in Curl_inet_ntop and inet_ntop4.

#opensource #AI #LLM #Spam


#AI #ML #llm


This is making the rounds on Finnish social media.

A large association for Finnish construction companies, #Rakennusteollisuus, decided that they needed an English version of their website, but apparently they didn't want to pay an actual #translator, so they just used some free #LLM, with hilarious results.

They've fixed it now, but for a short while there was some comedy gold to be found.

P.S. I didn't find these; I've no idea who did.