I hereby coin the term "Ptolemaic Code" to refer to software that appears functional but is based on a fundamentally incorrect model of the problem domain. As more code is generated by AI, the prevalence of such code is likely to increase.
1/7
Dmytri
in reply to Dmytri
This code passes all its tests and satisfies its specifications, yet it is built on a fundamentally flawed logic.
AI code generation, which relies on examples, will likely produce significant amounts of Ptolemaic Code.
3/7
Dave Wilburn
in reply to Dmytri
This is exactly how I have described the limitations of black-box models to others.
A good scientific model should do two things:
1) Provide accurate predictions of outcomes given certain inputs, and
2) Enable greater understanding of how a system works.
Simpler machine learning models, like logistic regression or decision trees, can sometimes do both, at least for simpler phenomena. The models are explainable and their decisions are interpretable. For those reasons, among others, applied machine learning researchers still use these simpler approaches wherever they can be made to work (see the sketch below).
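A minimal sketch of that interpretability point (a hypothetical example using scikit-learn's bundled iris data, not anything from the thread):

```python
# Hypothetical sketch: a small decision tree both predicts and explains.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# The fitted model prints as plain if/else rules: it gives predictions
# *and* a readable account of how it reaches them.
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Nothing comparable can be printed out of a large black-box model, which is exactly the asymmetry described above.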
But in our haste to increase accuracy for more complex phenomena, we've created models that merely provide semi-accurate predictions at the expense of explainability and interpretability. Like the Ptolemaic model of the solar system, these models mostly work well at predicting outcomes within the narrow areas in which they've been trained. But they do absolutely nothing to enable understanding of the underlying phenomena. Or worse, they mislead us into fundamentally wrong understandings.
And because they are overfit to the limits of their training data, their accuracy falls apart unpredictably when they are used for tasks outside the distribution of their training. Computational linguists and other experts who might celebrate these models instead lament the benighted ignorance left in their wake.
Or, as it was more eloquently stated in the great philosophical film Billy Madison:
"Mr. Madison, what you've just said is one of the most insanely idiotic things I have ever heard. At no point in your rambling, incoherent response were you even close to anything that could be considered a rational thought. Everyone in this room is now dumber for having listened to it. I award you no points, and may God have mercy on your soul."
Nicole Parsons
in reply to Dave Wilburn
@DaveMWilburn #pluralistic describes the technical debt of these AI coding models as asbestos in the walls.
A hazard we'll be digging out for decades to come.
It remains a fact that when petrostate despots are this desperate to impose user adoption, alarm bells should be ringing. Fossil-fuel-funded cyberwarfare.
reuters.com/technology/artific…
fortune.com/2025/11/20/saudi-v…
When anti-democracy billionaires are spending this kind of cash on a boondoggle...
forbes.com/sites/mattdurot/202…
‘Yes, it is an ambitious, crazy thing’: Scenes from the Kennedy Center’s Saudi-U.S. AI mind-meld, with Trump, MBS, Musk and Huang (Josh Boak, Fortune)
Nicole Parsons
in reply to Dmytri
How hard will billionaires work to impose the AI worldview on the globe?
To the same degree as religious zealots?
Centuries of dispute over irrelevant issues like "how many angels can dance on the head of a pin?"
A worldview that says human expertise is dead.
The worldview that says "Democracy is dead. The CEOs of Koch Industries & Palantir own you."
The worldview of "might makes right" and "power & wealth have no goal, only their own self-perpetuation".
1/
Dmytri
in reply to Dmytri
@Npars01 @DaveMWilburn @pluralistic e.g., this underappreciated joke from July:
tldr.nettime.org/@dk/114851322…
Nicole Parsons
in reply to Dmytri
The Ptolemaic mental model of the solar system was used as a pretext for centuries of religious warfare.
Entrenched interests fought wars to keep it, according to the first episode of James Burke's "The Day the Universe Changed".
Why? Because it fed a narrative of an unchanging "natural order".
Will today's mental model of AI feed an equally self-serving set of narratives?
Glitzersachen
in reply to Dmytri
The tell-tale signs are lots of special-casing in the code (the epicycles) and the response "it works, and that is all that counts" when you try to discuss what you see in the code.
Ah, and strange variable names.
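A hypothetical sketch of what those code epicycles tend to look like (the shipping example and every rule in it are invented for illustration):

```python
# Hypothetical illustration: each special case is an "epicycle", a
# correction bolted on because the underlying model is wrong.
def shipping_cost(weight_kg: float, country: str) -> float:
    cost = weight_kg * 2.0  # base model: cost is linear in weight
    if country == "CH":
        cost += 3.5          # epicycle: hard-coded customs fee
    if country == "NO" and weight_kg > 10:
        cost *= 1.2          # epicycle: bulk surcharge for one country only
    if country == "US" and weight_kg < 0.5:
        cost = 4.0           # epicycle: flat rate, nobody remembers why
    return cost
```

Every branch passes its test, yet the pile of per-country patches signals that "linear in weight" was never the real pricing model.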
Atemu@39c3[3162]
in reply to Dmytri
That's a good analogy.
It even extends further:
In order to make technological progress, we needed to abandon the incorrect model of our solar system; we would probably not have made it to the Moon if we'd stuck to the Ptolemaic model.
Similarly, in order to meaningfully advance our software ecosystem, we need to abandon code produced using poor software engineering practices – such as LLM codegen.
slotos
in reply to Dmytri
The problem with the Ptolemaic model wasn't correctness (it is predictively equivalent to the heliocentric model); it was that its builders resisted attempts to falsify its core assumptions.
That is the problem with AI coding. When it works, it skips model verification entirely. When it doesn’t, it doesn’t offer actionable insights.
Proto Himbo Derpopean
in reply to Dmytri
@dk@nettime.org Jumping back a level of analysis or two (and therefore maybe no longer valid): I'm thinking of the tweaks LLM masters demand their engineers make to LLM output, usually (from what I've seen) for two reasons:
1) To reduce antisocial behavior (e.g., LLMs producing fascist, misogynist, racist, anti-queer, etc. content, or to stop them from encouraging people to commit #suicide)
2) To increase the happiness of the rich-people-who-own-the-LLMs (e.g., increase profit, decrease Grok saying Elon is an asshole, etc.)
The fact that both of these need to (apparently) be done regularly suggests a mismatch with "reality." Arguably, that is not objective external reality but the internal reality of the LLM vis-a-vis its constantly-updating training corpus. The combination of the LLM code and its training corpus seems to make LLMs regularly say awful things and also fail to generate maximum profit for the owners/shareholders.
I won't be the first (or 10,000th) to say there is a significant mismatch between what LLMs (currently) are and what their masters want them to do.
GhostOnTheHalfShell
in reply to Dmytri
Nice. It is a little unfair to Ptolemy, though: epicycles are an implementation of the Fourier transform.
It is a very good example of the distinction between a predictive model and a causal one. The Copernican model would still have to inject corrections.
“For his contemporaries, the ideas presented by Copernicus were not markedly easier to use than the geocentric theory and did not produce more accurate predictions of planetary positions.”
en.wikipedia.org/wiki/Copernic…
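A minimal numpy sketch of the epicycles-as-Fourier point (the "observed" orbit is invented for the example): summing enough rotating circles reproduces any periodic path, which is why the Ptolemaic machinery could always be made to fit without being causal.

```python
# Hypothetical sketch: epicycles as a discrete Fourier series.
import numpy as np

n = 256
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
orbit = np.exp(1j * t) + 0.3 * np.exp(-5j * t)  # an "observed" path in the complex plane

coeffs = np.fft.fft(orbit) / n                  # one rotating circle per coefficient
rates = np.fft.fftfreq(n, d=1 / n)              # integer rotation rates (cycles per period)

K = 4                                           # keep only the K largest circles
top = np.argsort(-np.abs(coeffs))[:K]
rebuilt = sum(coeffs[k] * np.exp(1j * rates[k] * t) for k in top)

print(np.max(np.abs(rebuilt - orbit)))          # effectively zero: epicycles can fit anything
```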
theUCM
in reply to Dmytri
in reply to Dmytri • • •web.archive.org/web/2016031518…
Quote:
" Zwei Gedankenstränge waren die Vorläufer zum Bioadapter. Zum einen die Vorstellung der Gesellschaft als Homöostat. Ich bemerkte, dass die Kybernetik diesen Zug an sich hat, als Neuheitenverhinderungsmechanismus zu funktionieren. Ich habe auch alle möglichen Gleichnisse gebraucht, etwa dass Kopernikus über moderne Computer verfügte. Dann hätte man das ptolemäische Weltbild endlos weiterführen können, das ja in erster Linie deswegen aufgegeben wurde, weil die Epizykel immer mehr und die Berechnungen immer komplizierter wurden. Aber durch Erhöhung der Rechenleistung hätte man das kopernikanische Weltbild verhindert. Vielleicht nicht für immer, aber für 100 Jahre. Wenn der Sprung zu einer neuen Qualität, einer anderen Auffassung geschieht, weil die Widersprüche nicht mehr administrierbar sind, wäre der Computer ein Mittel zur Verlängerung des alten Zustands.
Der andere Strang waren erkenntnistheoretische Schwierigkeiten. Man kann schwer übersehen, dass wir nur Repräsentationen der Wirklichkeit in unserem Kopf haben, die verbessert, verschlechtert, angepasst werden. ..."
Oswald Wiener: "Wissenschaft und Barbarei gehen sehr gut zusammen" ("Science and barbarism go together very well", Spike Art Daily)
Lilac
in reply to Dmytri
god, I have needed a word like this!
Take face generation: people either wear glasses or they don't; it's a binary attribute. But generation via diffusion starts with a continuous feature space and is acted upon by a continuous function.
This code will function in most cases but it is fundamentally incorrect.
If you take two faces, one with glasses and one without, interpolating between them will get you weird glasses melded into the face, and this is an artefact of that continuity.
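A toy numpy sketch of that mismatch (the decoder and latent vectors are invented stand-ins, not a real diffusion model): because the generator is continuous in its input, interpolating latents glides through in-between states that reality never produces.

```python
# Hypothetical sketch: a binary attribute forced through a continuous map.
import numpy as np

def decode_glasses(latent: np.ndarray) -> float:
    # Stand-in for a diffusion decoder: a continuous function of the
    # latent, so its output varies continuously too.
    return float(latent @ np.array([0.9, -0.2, 0.4]))

with_glasses = np.array([1.0, 0.0, 0.5])        # invented latent for "glasses"
without_glasses = np.array([-1.0, 0.3, -0.5])   # invented latent for "no glasses"

for a in np.linspace(0.0, 1.0, 5):
    z = (1 - a) * with_glasses + a * without_glasses  # interpolate latents
    print(round(decode_glasses(z), 2))  # glides through "half-glasses" values
```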