I'm a bit worried about the discourse around #ai.

It's totally OK to have strong opinions either way, but I feel like in certain circles it's becoming a purity test.

Do you hold the RIGHT opinions about AI, how it's trained and how it's used?

I think pointing out the dangers is important. Pointing out the risks is important, but recognizing that the situation is nuanced and complex is important as well.

#AI
in reply to Matt Campbell

@matt
To make an analogy to something much less divisive: it was clear to me that at some point, we would want to move away from Subversion to some other version control system. It was also clear that the market would eventually settle down, and whatever won would have a clear upgrade path from Subversion -- but that couldn't be said for all of the intermediate contenders.

So it was prudent to avoid moving until the winner clearly emerged.

I am letting people play with LLMs in controlled circumstances, never putting their results into production, and clearly marking what they do and how they got there. Someday it may crash and burn; someday it might produce something worthwhile and sustainable. But until then, the responsible thing to do is not to do it.

in reply to -dsr- (hypoparenthetically)

@dashdsrdash @matt You are absolutely correct for any situation where correctness cannot be trivially and unarguably verified.

There are situations, however, where correctness is a binary toggle, as plain as the nose on my face.

Does the web interface look like how I want?

Did this Python code build the correct bag of infrastructure needed to run the site?

These are trivially answerable questions.

in reply to Feoh

@matt

Gosh, no. "The web interface looks the way I want" is not "the web interface is correct", and the entire history of software development as a craft slowly working its way into an engineering discipline is the story of why those things are different.

There are things which have provably correct answers, and beyond the trivial ones, they tend to be things like "use this well-tested theorem prover".
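To make "provably correct" concrete, here is a tiny machine-checked statement in Lean 4. This is a hypothetical illustration of mine, not something from the thread; the point is that the checker rejects the file unless the proof term really has the stated type.

theorem my_add_comm (a b : Nat) : a + b = b + a :=
  -- Nat.add_comm is a core library lemma; the kernel verifies it fits.
  Nat.add_comm a b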

The problem with repeatedly feeding LLM output to a theorem prover and checking for correctness is the same problem as with bogosort, canonically the worst of all possible sorting algorithms: each check is cheap, but nothing makes the next attempt any more likely to be right, so there is no useful bound on how long you will wait.

(In case you have forgotten bogosort:

10 LIST.randomize-order

20 if LIST.sorted != true then goto 10)
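For anyone who wants to run the joke, a minimal Python sketch of bogosort; the helper names is_sorted and bogosort are mine, not from the post above.

import random

def is_sorted(xs):
    # True when every adjacent pair is in order.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def bogosort(xs):
    # Shuffle blindly until the list happens to come out sorted.
    # Verification is cheap, but each shuffle is no more likely
    # than the last to succeed, so the expected runtime explodes.
    while not is_sorted(xs):
        random.shuffle(xs)
    return xs

print(bogosort([3, 1, 2]))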