New #blog post: MDN’s AI Help and lucid lies.

This article on AI focuses on the inherent untrustworthiness of LLMs, and attempts to break down where that untrustworthiness comes from. Stay tuned for a follow-up article about AI that focuses on data-scraping and the theory of labor. It’ll examine what makes many forms of generative AI ethically problematic, and the constraints employed by more ethical forms.

Excerpt:

I don’t find the mere existence of LLM dishonesty to be worth blogging about; it’s already well-established. Let’s instead explore one of the inescapable roots of this dishonesty: LLMs exacerbate biases already present in their training data and fail to distinguish between unrelated concepts, creating lucid lies.

A lucid lie is a lie that, unlike a hallucination, can be traced directly to content in training data uncritically absorbed by a large language model. MDN’s AI Help is the perfect example.


Originally posted on seirdy.one: see original. #MDN #AI #LLM #LucidLies

:boost_ok:

in reply to Seirdy

> LLMs can never be remotely honest

I'm a bit irritated by this usage of "honest", because to me (as a mathematician), LLMs are brutally honest (they replicate what they've seen as well as they're allowed to within their constraints), but are just unable to introspect, reflect, and refine based on that (which would count as thinking, and which they, as a sort of lossy compression tool, can't do).

in reply to Αλαιν Φογτια Αννα Εμιλια

@fogti The marketing of LLMs as virtual experts qualifies them for the term, despite their being statistical models.

I don’t think a lie always implies intelligence: when something occupies a position that implies accuracy (e.g. being part of MDN), its misinformation should be treated as severely as a lie.