I'm a little puzzled at the salience being given to the Apple conclusions on #LLM #reasoning when we have lots of prior art. For example: LLMs cannot correctly infer that A is B if their corpora only contain B is A. #Paper: arxiv.org/abs/2309.12288

#AI #MachineLearning #logic
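
To make concrete what that paper tests, here's a minimal Python sketch of the setup as I understand it: train only on "A is B" statements, then probe the reverse direction. The pairs below are invented placeholders in the style of the paper's fictitious-celebrity data, not its actual dataset.

# Sketch of the reversal-curse evaluation from arxiv.org/abs/2309.12288.
# Fictitious (person, description) pairs; placeholder data only.
PAIRS = [
    ("Daphne Barrington", "the director of 'A Journey Through Time'"),
    ("Uriah Hawthorne", "the composer of 'Abyssal Melodies'"),
]

def forward_example(name, description):
    # Training direction: the corpus only ever states "A is B".
    return f"{name} is {description}."

def backward_probe(description):
    # Test direction: ask for A given B; models fine-tuned only on the
    # forward form answer this at around chance level.
    return f"Who is {description}?"

for name, desc in PAIRS:
    print("train:", forward_example(name, desc))
    print("probe:", backward_probe(desc))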

in reply to modulux

I think it's just people looking for confirmation, and the name Apple attached to it is somehow giving that research more public attention, when it's really just another paper on how to build better benchmarks to study "reasoning" capabilities. I put reasoning in quotes since it means such different things depending on whether you're in the field or not.
in reply to mnl mnl mnl mnl mnl

I tend to think so too. I suppose it shouldn't really surprise me, but I expected a bit more critical engagement.

There's lots of evidence on the limits of LLM reasoning, as well as on failures at pretty basic self-reference (how many letters l does this statement include?). And yes, reasoning here is being used rather technically, in the sense of deductive inference.
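
For what it's worth, the ground truth for that letter-counting probe is trivial to compute outside the model; a quick Python check, using the sentence above as a stand-in:

sentence = "How many letters l does this statement include?"
# Count the letter 'l' case-insensitively; models often fail at this
# because they operate on subword tokens, not individual characters.
print(sentence.lower().count("l"))  # -> 3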