
Items tagged with: LLM


#llm #tts



😂 Someone excitedly showed me how #ChatGPT fixed a bug in my script. The problem is that it completely refactored the code, expanding 50 lines into 95. I fixed the bug by changing only one line in the original code. In its defense, it was trying to be smart by converting small boilerplate into functions with try/catch blocks, which made the code longer. That might have been better in a bigger codebase, but it was unnecessary for a small standalone script, and the result was much harder to follow. #LLM #ML #AI


There are plenty of vocal Ollama haters on social media, but look at the GitHub stars! I think their strategy of keeping it simple works (a quick sketch to re-check the counts is below).
* Ollama: 149,895
* llama.cpp: 84,535
* vLLM: 54,850
* SGLang: 16,789
#LLM #ML #AI
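
If you want to verify current numbers yourself, here's a rough sketch that pulls star counts from the public GitHub REST API; the repo paths are my best guesses for these projects, unauthenticated requests are rate-limited, and the counts move daily:

```python
# Sketch: compare star counts via the public GitHub REST API.
# Repo paths below are assumptions; adjust if a project has moved.
import requests

repos = [
    "ollama/ollama",
    "ggerganov/llama.cpp",
    "vllm-project/vllm",
    "sgl-project/sglang",
]

for repo in repos:
    resp = requests.get(f"https://api.github.com/repos/{repo}", timeout=10)
    resp.raise_for_status()
    print(f"{repo}: {resp.json()['stargazers_count']} stars")
```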
#AI #ML #llm



#AI #ML #llm #openai #gpt


Do you use a PAID version of an AI for chatting?
#LLM #AI

  • Yes (0 votes)
  • No (0 votes)
  • I don't use LLMs at all (0 votes)
Poll end: 1 month ago

#AI #llm


OMG, I just came across a podcast called Flesh and Code about people having relationships with LLMs on platforms like Replika. r/Replika has over 80K members! It's both sad and wild. Maybe because I've been playing with dumb LLMs since GPT-2 and understand how they work better than the average person, LLMs never really worked for me that way. lol Since Grok on X now has a porn companionship feature that anyone 12+ can access, I guess things can only get worse from here. #LLM #ML #AI reddit.com/r/replika/
#AI #ML #llm


Please boost for reach among the blind community. Okay y'all, is it just me, or are the Meta Ray-Ban glasses descriptions, even with detailed responses turned on in accessibility settings, still not very accurate? I mean, it feels like they're using Llama 3.1 8B, a small model. Am I going more crazy than I already am? Am I missing some context engineering tricks? Like, I don't get it. It said my coffee maker's filter basket was empty when it wasn't, said a cup of coffee was empty when it was about half full, then said the coffee cup was folded when I asked if it was full again, because speech recognition still sucks I guess and AI can't work around that, and said a washing machine was beside the bathroom counter when it was behind me, across from the counter. Like, this isn't me playing a video game, this is normal household stuff.

#meta #RayBan #llm #ai #accessibility #blind


#AI #ML #llm


AI is bad compression. Every time you run training material through it, you get a lossy summary of that material back, along with some noise.

You quickly run out of *quality* training material and start dog-fooding the output back in. Then you end up with lossy summaries of lossy summaries, and eventually all your pizza sauce recipes are dog food.
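
A toy sketch of that compounding loss, with made-up numbers just to show the shape of the curve (the 0.9 retention factor is an assumption, not a measurement):

```python
# Toy model of "lossy summaries of lossy summaries": each generation of
# synthetic training data keeps only a fraction of the original signal.
retention = 0.9  # assumed fraction of real information surviving one pass

signal = 1.0
for generation in range(1, 11):
    signal *= retention
    print(f"generation {generation:2d}: {signal:5.1%} of the original signal left")
```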

#AI #LLM

#AI #llm


That feeling when I write documentation for something, then find the documentation I forgot I wrote for the same thing two years ago, and the wording and structure are almost identical. Right down to the same silly jokes. Then while editing I noticed a mistake I made in this version, and the exact same mistake was in what I wrote two years ago. Am I just an #llm that's made out of meat? Never mind anyone else: will I even realize when I replace myself with an #AI? Maybe it's already happened...
#AI #llm


To whoever praises #Claude #LLM:

ClaudeBot has made 20k requests to bugs.gentoo.org today. 15k of them were repeated fetches of robots.txt. That surely is a sign of great code quality.
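
For anyone curious how to tally this from their own logs, a minimal sketch (the log path and the user-agent string to match are assumptions about this particular setup):

```python
# Sketch: count how often ClaudeBot re-fetched robots.txt in an access log.
from collections import Counter

hits = Counter()
with open("/var/log/nginx/access.log") as log:  # assumed log location
    for line in log:
        if "ClaudeBot" in line:
            hits["total ClaudeBot requests"] += 1
            if "GET /robots.txt" in line:
                hits["robots.txt fetches"] += 1

print(hits)
```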

#AI


For anyone wondering why #gravy has been trending: AI hucksters are trawling through your social media posts for training data and trends. And you know what can gum up the gears of an automated sentence generator? Posts that use the word gravy out of context. #auspol #ChatGPT #AI #LLM


People continue to think about #AI in terms of #2010s computing, which is part of the reason everyone gets it wrong whether they're #antiAI or #tech bros.

Look, we had 8GB of #ram as the standard for a decade. The standard was set in 2014, and in 2015 #AlphaGo beat a human at #Go.

Why? Because #hardware lags #software - in #economic terms: supply follows demand, but demand cannot create its own supply.

It takes 3 years for a new chip to go through the #technological readiness levels and be released.

It takes 5 years for a new #chip architecture. E.g., the #Zen architecture was conceived in 2012 and released in 2017.

It takes 10 years for a new type of technology, like a #GPU.

Now, AlphaGo needed a lot of RAM, so why did RAM stagnate for a decade after doubling every two years before that?

In 2007 the #iPhone was released. #Computers were all becoming smaller, #energy #efficiency was becoming paramount, and everything was moving to the #cloud.

In 2017, most people used their computer for a few applications and a web browser. But also in 2017, companies were starting to build #technology for AI, as it was becoming increasingly important.

Five years after that, we're in the #pandemic lockdowns, people are buying more powerful computers, we have #LLM tech, and companies are beginning to jack up the cost of cloud services.

#Apple releases chips with large amounts of unified #memory, #ChatGPT starts to break the internet, and by 2025 GPU growth continues to outpace CPU growth and Apple's unified memory finally has a competitor.

The era of cloud computing and surfing the #web is dead.

The hype of multi-trillion parameter #LLMs making #AGI is a fantasy. There isn't enough power to do that, there aren't enough chips, and it's already too expensive.

What _is_ coming is AI tech performing well and running locally without the cloud. AI Tech is _not_ just chatbots and #aiart. It's going to change what you can do with your #computer.


After reading about the manosphere-trained ChatGPT model OpenAI was _promoting on its front page_, I shared a couple photos with inceLLM to see how much it would neg me for not being GigaChad material...aaand ironically inceLLM, the most toxic-masculinity thing I've heard of this week, has a thing for enbies 🤣

#NonbinaryPride #ToxicMasculinity #LLM


"'Take a screenshot every few seconds' legitimately sounds like a suggestion from a low-parameter LLM that was given a prompt like 'How do I add an arbitrary AI feature to my operating system as quickly as possible in order to make investors happy?'" signal.org/blog/signal-doesnt-…

#Signal #Microsoft #Recall #MicrosoftRecall #LLM #LLMs #privacy


In a move that surprises absolutely no one, GitHub now requires users to log in in order to browse public repositories (including open source projects). After a few (~10) requests, you get blocked (I can confirm). To fight AI scrapers, I guess.

So, GitHub decided to blanket-limit access to open source projects as a defense against the very scourge that they (well, their parent company) unleashed on the world.

I won't be a hypocrite: it's a bit embarrassing, but undeniably satisfying, to say "told you so". I moved away from GitHub long ago and moved all my stuff to Codeberg instead. And I'm so happy I did!

Next step: radicle.xyz maybe?

github.com/orgs/community/disc…

#github #microsoft #openai #codeberg #ai #ml #llm #enshittification #foss #floss #opensource #radicle



#AI #ML #llm #llama


😲 DeepSeek-V3-4bit runs at >20 tokens per second and <200W using MLX on an M3 Ultra with 512GB. This might be the best and most user-friendly way to run DeepSeek-V3 on consumer hardware, possibly the most affordable too. You can finally run a GPT-4o level model locally, with possibly even better quality. #LLM #AI #ML #DeepSeek #OpenAI #GPT #OpenWeight #OpenSource venturebeat.com/ai/deepseek-v3…
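
For context, running an MLX-quantized model from Python is only a few lines with mlx-lm; a minimal sketch, assuming Apple Silicon with enough unified memory and that a 4-bit conversion is published under the mlx-community namespace (the exact repo name here is a guess, not a verified path):

```python
# Sketch: load and sample from an MLX 4-bit model with mlx-lm.
# The model path is an assumed Hugging Face repo name.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/DeepSeek-V3-4bit")
print(generate(model, tokenizer,
               prompt="Explain mixture-of-experts models in one paragraph.",
               max_tokens=200))
```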


Airbnb migrated 3.5k React component test files from Enzyme to RTL using LLMs and automation in just six weeks. The effort was originally estimated to take 1.5 years of manual engineering time. #LLM #AI #ML #Automation
medium.com/airbnb-engineering/…




#llm #cnb




I'm playing with the DeepSeek R1 LLM right now github.com/deepseek-ai/DeepSee… It looks like this open-source model, which is available for free (chat.deepseek.com, tick the DeepThink option), matches the quality of the OpenAI o1 model.
On my local machine, though, I definitely can't run the 32B or even the 70B model (they take up roughly 40 GB on disk); my laptop already breaks a sweat with 8B models. :)
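
If you want to poke at one of the smaller distilled variants locally, a minimal sketch with the official Ollama Python client (this assumes the Ollama server is running and that a deepseek-r1:8b tag is available in its model library):

```python
# Sketch: query a locally served DeepSeek R1 distill via the ollama client.
import ollama  # assumes `ollama serve` is running locally

reply = ollama.chat(
    model="deepseek-r1:8b",  # assumed model tag
    messages=[{"role": "user", "content": "Why is the sky blue? Answer briefly."}],
)
print(reply["message"]["content"])
```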
#DeepSeek #LLM


Every once in a while I try out some of those local #LLM models to see whether they could actually be used without internet access, and so far, nope. :D
#llm


Thx for your link and efforts @Seirdy !

All this said, being part of a decentralized web, as pointed out in this toot, our publicly visible interactions land on other instances and servers of the #fediVerse and can be scraped there. I wonder if this situation might, or should, lead to a federation of servers that share the same robots.txt "ideals".

As @Matthias pointed out in his short investigation of the AI matter, this has (in my eyes) already reached unimagined levels of criminal and, without any doubt, unethical behavior, not to mention the range of options rogue actors have at hand.

It's evident why, for example, the elongated one immediately closed down access to X's public tweets, and I guess other companies did the same for the same reasons. Obviously the very first reason was to protect the advantage of the hoarded data sets they use to train their AI in the first place. Yet, considering the latest behavior of the new owner of #twitter, nothing less than the creation of #AI driven lists of "political" enemies, and not only from the data collected on his platform, is to be expected. An international political nightmare of epic proportions. Enough material for dystopian books and articles for people like @Cory Doctorow, @Mike Masnick ✅, @Eva Wolfangel, @Taylor Lorenz, @Jeff Jarvis, @Elena Matera, @Gustavo Antúnez 🇺🇾🇦🇷, to mention a few of the #journalism community, more than one #podcast episode by @Tim Pritlove and @linuzifer, or some lifetime legal cases for @Max Schrems are at hand.

What we are facing now is the fact that we need to protect our own and our users' data and privacy because of the advanced capabilities of #LLM. We are basically forced to consider switching to private/restricted posts and closing down our servers, as not only are the legal jurisdictions far too scattered across different countries and ICANN details, but legislation and comprehension among legislators are simply nonexistent, as @Anke Domscheit-Berg could probably agree.

That is to say, it looks like we need to go dark, a fact that will drive us even further into disappearing, as people will have less chance to see what we are all about, further advancing the advantages of the already established players in the social web space.
Just like Prof. Dr. Peter Kruse stated in his YouTube talk "The network is challenging us" (at min 2:42) more than 14 years ago:
"With semantic understanding we'll have the real big brother. Someone is getting the best out of it and the rest will suffer."


#Slop is low-quality media - including writing and images - made using generative artificial intelligence technology.


Source: Wikipedia.

Open source projects have to deal with a growing number of low-quality, AI-generated vulnerability reports. See for example this comment from Daniel Stenberg, maintainer of #Curl:

I'm sorry you feel that way, but you need to realize your own role here. We receive AI slop like this regularly and at volume. You contribute to unnecessary load of curl maintainers and I refuse to take that lightly and I am determined to act swiftly against it. Now and going forward.

You submitted what seems to be an obvious AI slop "report" where you say there is a security problem, probably because an AI tricked you into believing this. You then waste our time by not telling us that an AI did this for you and you then continue the discussion with even more crap responses - seemingly also generated by AI.

Read more at HackerOne: Buffer Overflow Risk in Curl_inet_ntop and inet_ntop4.

#opensource #AI #LLM #Spam


#AI #ML #llm


This is making the rounds on Finnish social media.

A large association of Finnish construction companies, #Rakennusteollisuus, decided that they needed an English version of their website, but apparently they didn't want to pay an actual #translator, so they just used some free #LLM, with hilarious results.

They've fixed it now, but for a short while there was some comedy gold to be found.

P.S. I didn't find these; I've no idea who did.


A study asked 50 doctors to make six different diagnoses for medical conditions. "Doctors who did the project without AI got an average score of 74%, doctors who used AI got an average score of 76%, and ChatGPT itself got an average score of 90%." AI didn't help the doctors using it as much as anticipated because physicians "didn't listen to AI when AI told them things they didn't agree [with]." Most doctors couldn't be convinced a chatbot knew more than them. #LLM #AI #ChatGPT qz.com/chatgpt-beat-doctors-at…


I'm a little puzzled by the salience being given to the Apple conclusions on #LLM #reasoning when we have lots of prior art. For example: LLMs cannot correctly infer "A is B" if their corpora only contain "B is A" (the paper's example: a model trained on "Tom Cruise's mother is Mary Lee Pfeiffer" often fails to answer "Who is Mary Lee Pfeiffer's son?"). #Paper: arxiv.org/abs/2309.12288

#AI #MachineLearning #logic


#AIagent promotes itself to #sysadmin, trashes #boot sequence

Fun experiment, but yeah, don't pipe an #LLM raw into /bin/bash

Buck #Shlegeris, CEO at #RedwoodResearch, a nonprofit that explores the risks posed by #AI, recently learned an amusing but hard lesson in automation when he asked his LLM-powered agent to open a secure connection from his laptop to his desktop machine.
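
The missing guard rail is tiny. A minimal sketch of a confirm-before-execute gate; propose_command() is a hypothetical stand-in for whatever agent framework produced the command, not a real API:

```python
# Sketch: never pipe model output straight into a shell; ask first.
import shlex
import subprocess

def propose_command() -> str:
    # Placeholder for the agent/LLM call (hypothetical).
    return "ssh user@desktop -- uptime"

cmd = propose_command()
print(f"Agent wants to run: {cmd}")
if input("Execute? [y/N] ").strip().lower() == "y":
    subprocess.run(shlex.split(cmd), check=False)
else:
    print("Skipped.")
```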
#security #unintendedconsequences

theregister.com/2024/10/02/ai_…