I'm a little puzzled at the salience being given to the Apple conclusions on #LLM #reasoning when we have lots of prior art. For example: LLMs trained on corpora containing only "A is B" cannot infer "B is A". #Paper: arxiv.org/abs/2309.12288

#AI #MachineLearning #logic
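For anyone who wants to see the effect rather than take the paper's word for it, here is a minimal sketch of the kind of likelihood probe the reversal-curse setup implies. It assumes the Hugging Face transformers library, uses gpt2 purely as a stand-in model (a small model may not show a dramatic gap), and the sentence pair is the paper's own Tom Cruise / Mary Lee Pfeiffer example:

```python
# Minimal probe for the "reversal curse" (arxiv.org/abs/2309.12288):
# compare the likelihood a model assigns to a fact stated forwards
# versus the same fact stated in reverse.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def avg_logprob(text: str) -> float:
    """Mean per-token log-probability the model assigns to `text`."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood

# The paper's canonical pair: models trained on the first direction
# tend not to transfer to the second.
forward = "Tom Cruise's mother is Mary Lee Pfeiffer."
reverse = "Mary Lee Pfeiffer's son is Tom Cruise."
print(f"forward: {avg_logprob(forward):.3f}")
print(f"reverse: {avg_logprob(reverse):.3f}")
```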

in reply to mnl

I tend to think so too. I suppose it shouldn't really surprise me, but I expected a bit more critical engagement.

There's lots of evidence on the limits of LLM reasoning, and on failures at pretty basic self-reference (e.g. "how many letters l does this statement include?"). And yes, "reasoning" here is being used rather technically, in the sense of deductive inference.
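Incidentally, that self-reference probe is attractive precisely because it is trivially scorable: the ground truth is just a character count, so any model answer can be checked mechanically.

```python
# Ground truth for the self-reference probe: count occurrences of "l".
statement = "how many letters l does this statement include?"
print(statement.count("l"))  # prints 3 -- the answer to score a model against
```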