
Items tagged with: LLM



Here is a way that I think #LLMs and #GenAI are generally a force against innovation, especially as they get used more and more.

TL;DR: 3 years ago is a long time, and techniques that old are the most popular in the training data. If a company like Google, AWS, or Azure replaces an established API or runtime with a new one, a bunch of LLM-generated code will break. The people who vibe-code won't be able to fix the problem, because almost nothing in the training data references the new API/runtime. The LLMs will not generate correct code easily, and they will constantly try to edit the code back to how it was done before.

This will create pressure on tech companies to keep old APIs and runtimes alive, because doing something new (something LLMs don't have in their training data) will have a huge impact. See below for an even more subtle way this will manifest.

I am showcasing (only the most egregious) bullshit that the junior developer accepted from the #LLM. The LLM used out-of-date techniques all over the place. It was using:

  • AWS Lambda Python 3.9 runtime (will be EoL in about 3 months)
  • AWS Lambda NodeJS 18.x runtime (already deprecated by the time the person gave me the code)
  • Origin Access Identity (an authentication/authorization mechanism that started being deprecated when OAC was announced 3 years ago)

So I'm working on this dogforsaken codebase, and I converted it from the out-of-date OAI to the new OAC mechanism. What does my (company-imposed) AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."

So it is encouraging me to do the wrong thing and saying it's high priority.

It's worth noting that when I received the codebase, with OAI active, Python 3.9, and NodeJS 18, I got no warnings about any of these things. Three years ago that was state of the art.


But we are not done.

Teach your developers to ignore #AI-generated warnings with this one weird trick!


If you look at my previous post, under the erroneous "You should use OAI" message, there's a little icon and the word "Explain." You can ask the #LLM to explain the warning. Here's the explanation that Claude Sonnet 4.0 has for that warning. For those of you who don't want to read a screenshot, the LLM has prompted itself with:

Explain CloudFront Distribution Resources have an S3 Origin configured without an Origin Access Identity (OAI). issue in 70-frontend-stack.yaml at [29]


And the LLM's response to its own prompt is:

This is a false positive security warning. The issue is incorrectly flagged because:

You ARE using access control - Line 16-23 creates an Origin Access Control (OAC), which is the modern replacement for OAI

OAC is properly configured - Line 38 attaches the OAC: OriginAccessControlId: !GetAtt CloudFrontOriginAccessControl.Id

S3 bucket policy enforces OAC - Lines 120-135 ensure only CloudFront can access the S3 bucket

The security scanner likely only checks for the legacy OriginAccessIdentity property and doesn't recognize the newer OriginAccessControlId. Your configuration is actually more secure than using the deprecated OAI.

The warning can be safely ignored - your setup uses the AWS-recommended OAC approach for securing S3 origins.
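For what it's worth, the correct check is not hard. Here is a minimal sketch of a scanner that accepts either access-control mechanism on an S3 origin, instead of only recognizing the legacy property. The template dict mirrors parsed CloudFormation; the resource and origin names are hypothetical, not the real 70-frontend-stack.yaml:

```python
# Sketch of a CloudFront origin check that treats either the legacy OAI
# or the modern OAC as valid access control, avoiding the false positive
# described above. Input is a CloudFormation template parsed into dicts.

def s3_origins_without_access_control(template):
    """Return Ids of S3 origins that have neither an OAI nor an OAC."""
    flagged = []
    for res in template.get("Resources", {}).values():
        if res.get("Type") != "AWS::CloudFront::Distribution":
            continue
        config = res.get("Properties", {}).get("DistributionConfig", {})
        for origin in config.get("Origins", []):
            s3 = origin.get("S3OriginConfig")
            if s3 is None:
                continue  # custom origin, not an S3 origin
            has_oai = bool(s3.get("OriginAccessIdentity"))       # legacy
            has_oac = bool(origin.get("OriginAccessControlId"))  # modern
            if not (has_oai or has_oac):
                flagged.append(origin.get("Id", "<unnamed>"))
    return flagged

# An OAC-only origin like the one in the post should NOT be flagged:
oac_template = {
    "Resources": {
        "Dist": {
            "Type": "AWS::CloudFront::Distribution",
            "Properties": {"DistributionConfig": {"Origins": [{
                "Id": "site-bucket",
                "S3OriginConfig": {},
                "OriginAccessControlId": "E2EXAMPLE",
            }]}},
        }
    }
}
print(s3_origins_without_access_control(oac_template))  # → []
```

Checking both properties is one extra dictionary lookup; there is no excuse for a "high priority" finding here.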


Thanks for wasting my time AND egregious amounts of electricity generating a pointless "high priority" security warning.

#AI #llm


#ai #LLM

A "bag of words" is not a spouse, not a mentor, not a boss, and not a slave. It is a tool. Its purpose is to take routine off our hands and amplify our abilities. It has no social status; it is meaningless to ask whether it is "better" than us. The real question is: do we become better when we use it?

#AI #llm


AI is just a bag of words: how to stop seeing intelligence where there is none

#ai #LLM

Science is a "strong-link problem": even if we produce a million times more mediocre research, we will end up right where we are now.
If what we need is more truly strong work, then what do we fill the LLM's "bag" with? We can stuff it with papers, but some of them are fabricated, some are simply wrong, and all of them carry implicit assumptions that may turn out to be false.
On top of that, key information is often missing: no data, methods not described in enough detail.

Entrepreneur Markus Strasser, who tried to build a company along the lines of "put all the papers into the 'bag' → ??? → profit", eventually gave up on the idea, stating that "almost nothing of what actually makes science work is published as text on the internet."

Even the best "bag" in the world is useless if you don't put the right things into it.

habr.com/ru/companies/otus/art…

#AI #llm


A new type of career opportunity emerged from vibe coding because vibe coders didn't know what they were doing 😉

#ai #llm #jobs

#AI #jobs #llm


Big News! The completely #opensource #LLM #Apertus 🇨🇭 has been released today:

📰 swisscom.ch/en/about/news/2025…

🤝 The model supports over 1000 languages [EDIT: an earlier version claimed over 1800] and respects opt-out consent of data owners.

▶ This is great for #publicAI and #transparentAI. If you want to test it for yourself, head over to: publicai.co/

🤗 And if you want to download weights, datasets & FULL TRAINING DETAILS, you can find them here:
huggingface.co/collections/swi…

🔧 Tech report: huggingface.co/swiss-ai/Apertu…

After #Teuken7b and #Olmo2, Apertus is the next big jump in capabilities and performance of #FOSS #LLMs, while also improving #epistemicresilience and #epistemicautonomy with its multilingual approach.

I believe that especially for sensitive areas like #education, #healthcare, or #academia, there is no alternative to fully open #AI models. Everybody should start building upon them and improving them.

#KIMündigkeit #SovereignAI #FOSS #ethicalAI #swissai #LernenmitKI


#llm #tts



😂 Someone excitedly showed me how #ChatGPT fixed a bug in my script. The problem is that it completely refactored the code, expanding 50 lines into 95. I fixed the bug by changing only one line of the original code. In its defense, it was trying to be smart by converting small bits of boilerplate into functions with try/catch blocks, which made it longer. That might be better in a bigger codebase, but it was unnecessary for a small standalone script, and the code was much harder to follow. #LLM #ML #AI


There are plenty of vocal Ollama haters on social media, but look at the GitHub stars! I think their strategy of keeping it simple works.
* Ollama: 149,895
* llama.cpp: 84,535
* vLLM: 54,850
* SGLang: 16,789
#LLM #ML #AI
#AI #ML #llm



#AI #ML #llm #openai #gpt


Do you use a PAID version of some AI for chatting?
#LLM #AI

  • Yes (0 votes)
  • No (0 votes)
  • I don't use LLMs at all (0 votes)
Poll end: 1 month ago

#AI #llm


OMG, I just came across a podcast called Flesh and Code about people having relationships with LLMs on platforms like Replika. r/Replika has over 80K members! It's both sad and wild. Maybe because I've been playing with dumb LLMs since GPT-2 and understand how they work better than the average person, LLMs never really worked on me that way. lol Since Grok on X now has a porn companionship feature that anyone 12+ can access, I guess things can get worse from here. #LLM #ML #AI reddit.com/r/replika/
#AI #ML #llm


Please boost for reach among the blind community. Okay y'all, is it just me, or are the Meta RayBan glasses descriptions, even with detailed responses turned on in accessibility settings, still not very accurate? It feels like they're using Llama 3.1 8B, a small model. Am I going more crazy than I already am? Am I missing some context engineering tricks? Like I don't get it. It said my coffee maker's filter basket was empty when it wasn't, said a cup of coffee was empty when it was about half full, then said the coffee cup was "folded" when I asked if it was full again (because speech recognition still sucks, I guess, and the AI can't work around that), and said a washing machine was beside the bathroom counter when it was behind me, across from the counter. This isn't me playing a video game; this is normal household stuff.

#meta #RayBan #llm #ai #accessibility #blind


#AI #ML #llm


AI is bad compression. Every time you run training material through it, you get a lossy summary of that material back, along with some noise.

You quickly run out of *quality* training material and start dog-fooding the output back in. Then you end up with lossy summaries of lossy summaries, and eventually all your pizza sauce recipes are dog food.

#AI #LLM

#AI #llm


That feeling when I write documentation for something, then find the documentation I forgot I wrote for the same thing two years ago, and the wording and structure are almost identical. Right down to the same silly jokes. Then while editing I noticed a mistake I made in this version, and the exact same mistake was in what I wrote two years ago. Am I just an #llm that's made out of meat? Never mind anyone else: will I even realize when I replace myself with an #AI? Maybe it's already happened...
#AI #llm


To whomever praises #Claude #LLM:

ClaudeBot has made 20k requests to bugs.gentoo.org today. 15k of them were repeatedly fetching robots.txt. That surely is a sign of great code quality.

#AI


For anyone wondering why #gravy has been trending: AI hucksters are trawling through your social media posts for training data and trends. And you know what can gum up the gears of an automated sentence generator? Posts that use the word gravy out of context. #auspol #ChatGPT #AI #LLM


People continue to think about #AI in terms of #2010s computing, which is part of the reason everyone gets it wrong whether they're #antiAI or #tech bros.

Look, we had 8GB of #ram as the standard for a decade. The standard was set in 2014, and in 2015 #AlphaGo beat a human at #Go.

Why? Because, #hardware lags #software - in #economic terms: supply follows demand, but demand can not create its own supply.

It takes 3 years for a new chip to go through the #technological readiness levels and be released.

It takes 5 years for a new #chip architecture. E.g. the #Zen architecture was conceived in 2012, and released in 2017.

It takes 10 years for a new type of technology, like a #GPU.

Now, AlphaGo needed a lot of RAM, so why did RAM stagnate for a decade, after doubling every two years before that?

In 2007 the #Iphone was released. #Computers were all becoming smaller, #energy #efficiency was becoming paramount, and everything was moving to the #cloud.

In 2017, most people used their computer for a few applications and a web browser. But also in 2017, companies were starting to build #technology for AI, as it was becoming increasingly important.

Five years after that, we're in the #pandemic lockdowns, people are buying more powerful computers, we have #LLM, and companies are beginning to jack up the cost of cloud services.

#Apple releases chips with large amounts of unified #memory, #ChatGPT starts to break the internet, and by 2025 GPU growth continues to outpace CPU growth and Apple's unified memory has a competitor.

The era of cloud computing and surfing the #web is dead.

The hype of multi-trillion parameter #LLMs making #AGI is a fantasy. There isn't enough power to do that, there aren't enough chips, it's already too expensive.

What _is_ coming is AI tech performing well and running locally without the cloud. AI Tech is _not_ just chatbots and #aiart. It's going to change what you can do with your #computer.


After reading about the manosphere-trained ChatGPT model OpenAI was _promoting on its front page_, I shared a couple photos with inceLLM to see how much it would neg me for not being GigaChad material...aaand ironically inceLLM, the most toxic-masculinity thing I've heard of this week, has a thing for enbies 🤣

#NonbinaryPride #ToxicMasculinity #LLM


"'Take a screenshot every few seconds' legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like 'How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?'" signal.org/blog/signal-doesnt-…

#Signal #Microsoft #Recall #MicrosoftRecall #LLM #LLMs #privacy


In a move that surprises absolutely no one, GitHub now requires users to log in in order to browse public repositories (including open source projects). After a few (~10) requests, you get blocked (I can confirm). To fight AI scrapers, I guess.

So, GitHub decided to blanket-limit access to open source projects as a defense against the very scourge that they(r parent company) unleashed on the world.

I won't be a hypocrite: it's a bit embarrassing, but undeniably satisfying, to say "told you so". I moved away from GitHub long ago and moved all my stuff to Codeberg instead. And I'm so happy I did!

Next step: radicle.xyz maybe?

github.com/orgs/community/disc…

#github #microsoft #openai #codeberg #ai #ml #llm #enshittification #foss #floss #opensource #radicle



#AI #ML #llm #llama


😲 DeepSeek-V3-4bit runs at >20 tokens per second and <200W using MLX on an M3 Ultra with 512GB. This might be the best and most user-friendly way to run DeepSeek-V3 on consumer hardware, possibly the most affordable too. You can finally run a GPT-4o level model locally, with possibly even better quality. #LLM #AI #ML #DeepSeek #OpenAI #GPT #OpenWeight #OpenSource venturebeat.com/ai/deepseek-v3…


Airbnb migrated 3.5k React component test files from Enzyme to RTL using LLMs and automation in just six weeks. The effort was originally estimated to take 1.5 years of manual engineering time. #LLM #AI #ML #Automation
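The automation pattern described in the write-up boils down to a retry loop: convert a file with an LLM, run its tests, and feed the failures back as context for another attempt. A hedged sketch of that loop, where `convert` and `run_tests` are stand-in callables, not Airbnb's actual tooling:

```python
def migrate_file(source, convert, run_tests, max_attempts=3):
    """Repeatedly ask the model for a conversion, feeding test failures
    back as context, until the tests pass or attempts run out."""
    feedback = ""
    for attempt in range(1, max_attempts + 1):
        candidate = convert(source, feedback)
        ok, feedback = run_tests(candidate)
        if ok:
            return candidate, attempt
    return None, max_attempts  # leave this file for manual migration

# Stub "LLM" that only gets it right once told about the failure:
def fake_convert(src, feedback):
    return src + " // fixed" if feedback else src + " // first try"

def fake_tests(code):
    return ("// fixed" in code, "TypeError: render is not a function")

result, attempts = migrate_file("old Enzyme test", fake_convert, fake_tests)
```

The tests act as the oracle, which is why this works at scale: a wrong conversion is caught and retried rather than merged.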
medium.com/airbnb-engineering/…




#llm #cnb




I'm playing with the LLM DeepSeek R1 right now github.com/deepseek-ai/DeepSee… It looks like this open-source model, which is available for free (chat.deepseek.com, with Deepthink checked), matches the quality of OpenAI's o1 model.
On my local machine, though, there's no way I can run the 32B or even the 70B model (they take up roughly 40 GB on disk); my laptop breaks a sweat even with 8B models. :)
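The tens-of-gigabytes figures are roughly what the parameter-count arithmetic predicts: on-disk size is about parameters × bits-per-weight ÷ 8, plus some metadata overhead. A quick sketch (the 4.5 bits/weight is an assumed quantization level, not a measured value):

```python
def model_size_gb(params_billions, bits_per_weight):
    """Rough on-disk size of a quantized model in GB, ignoring metadata
    and per-layer overhead: parameters * bits / 8 bytes."""
    return params_billions * bits_per_weight / 8

# At ~4.5 bits/weight, a 70B model lands near 40 GB on disk,
# while an 8B model at the same quantization is only ~4.5 GB.
print(model_size_gb(70, 4.5))  # → 39.375
print(model_size_gb(8, 4.5))   # → 4.5
```

The same arithmetic explains why the 8B models are the practical ceiling for a typical laptop: the weights have to fit in RAM, not just on disk.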
#DeepSeek #LLM


Every once in a while I try some of those local #LLMs to see whether they could be used practically without internet access, and so far: still no. :D
#llm