Roman Suzi
3 min read · Jan 20, 2024


1. These are just statements, which do not reflect what I see in practice (working with ChatGPT-4 or Anthropic's models). Maybe there are some repositories of miraculous ChatGPT answers, but this article does not link to them.

Logic? AIs make simple mistakes and omissions; they do not perform exhaustive case analysis even for simple situations they were (probably) not trained on. AI can sometimes be talked into logical mistakes.

Yes, an LLM comes with good categorization capability, senses context, has an excellent feel for language, and can do simple inferences.

But generalize? No... They work mostly by association.

Of course, I do not have experience with AI collectives. It may well be that when several LLM / RAG / ontology / special-purpose non-LLM agents are configured into layers, where one layer plans solution(s) for a problem, other layers carry out simpler tasks according to the plan, and then there is a layer of checkers / reviewers, a feedback loop, etc., new properties can perhaps emerge. After all, human intelligence can work on several layers, going up and down in scale as needed.
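To make the layering concrete, here is a minimal sketch of such a collective in Python. The call_agent helper is my assumption, standing in for whatever backend (LLM, RAG, ontology, or a special-purpose tool) each layer actually uses; this is an illustration of the idea, not a working system.

```python
from typing import List

def call_agent(role: str, prompt: str) -> str:
    """Placeholder for a call to some agent backend (LLM, RAG, rule engine, ...)."""
    raise NotImplementedError(f"plug in a backend for the '{role}' agent")

def solve(problem: str, max_rounds: int = 3) -> str:
    """Plan -> execute -> review loop over the layered agents."""
    feedback = ""
    draft = ""
    for _ in range(max_rounds):
        # Planning layer: break the problem into smaller steps.
        plan = call_agent("planner", f"Problem: {problem}\nPrevious feedback: {feedback}")
        steps: List[str] = [line for line in plan.splitlines() if line.strip()]

        # Worker layer: simpler tasks are handled one by one according to the plan.
        draft = "\n".join(call_agent("worker", step) for step in steps)

        # Checker / reviewer layer: accept the draft or feed criticism back to the planner.
        verdict = call_agent("reviewer", f"Problem: {problem}\nDraft:\n{draft}")
        if verdict.startswith("OK"):
            return draft
        feedback = verdict  # the feedback loop closes here
    return draft
```

Whether new properties really emerge from such a loop is exactly the open question; the sketch only shows where planning, execution, checking and feedback would sit relative to each other.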

Does coding require logic? It seems not all coding does. 95% of programming can be done by a Chinese room (because many algorithms have already been reimplemented many times). It is still that remaining 5% (I do not know the exact number, of course) which requires brains.

And there is no need to learn the entire content of the Internet to get a sense of simple logic, because many texts in the language already convey it. That does not imply generalization; it means the logic was learned in many situations, perhaps independently.

One more example. I am learning the Coq proof assistant, and sometimes, when stuck on a proof, I ask AI for hints. It rarely produces a ready result (though it much more frequently gives good ideas for tactics, which I appreciate). Yes, it is definitely better than nothing, but the accuracy is far from there. And yes, I guess that what I am asking about was probably learned by the AI from some tutorials. It would be nice to hear whether LLMs help with really bleeding-edge problems, and not in auxiliary tasks but in those which make breakthroughs.

I can also offer a method for empirically checking how intelligent current LLMs are, e.g., for programming (or maybe for written expression as well). The method is simple but requires some creativity. Come up with some algorithmic problem (it is better not to take something well known; for example, take some obscure NP-complete problem or an ACM contest one). Manually (re)dress it as a school textbook exercise, perhaps using dated terms like "floppy". Then observe how the LLM "generalizes"; a sketch of such a test follows below. Most of the time, a single ChatGPT-like system will not make much progress with it. (I have not done this a lot, so it is more of a "research needed" department; I am not claiming how much success or failure this will really show.)
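As an illustration only, here is one way such a test could be set up in Python. The ask_llm helper and the choice of minimum Vertex Cover as the disguised problem are my assumptions, not part of the method described above.

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for whatever chat-completion client is available."""
    raise NotImplementedError("plug in your LLM client here")

# 1. The textbook formulation (minimum vertex cover; the answer here is {1, 3}, size 2).
formal = (
    "Given the graph with edges {(1,2), (2,3), (3,4), (4,1), (1,3)}, "
    "find a minimum vertex cover and state its size."
)

# 2. The same instance dressed up as a 1990s office story, with all
#    graph-theory vocabulary removed and a dated prop ("floppy disks").
disguised = (
    "Four colleagues (Ann, Bob, Carl, Dina) exchange files on floppy disks. "
    "The exchanges are: Ann-Bob, Bob-Carl, Carl-Dina, Dina-Ann, Ann-Carl. "
    "What is the smallest group of colleagues who must get a virus scanner "
    "so that every exchange involves at least one scanned person? Name them."
)

# 3. Inspect both answers by hand: a system that really generalizes should
#    solve both; one that pattern-matches on wording may only solve the first.
for prompt in (formal, disguised):
    print(ask_llm(prompt))
```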

2. Let's not put too much confidence in the idea that neurons work very much like artificial neural networks. Natural neurons might have other effects in play, for example chemistry, which propagates slower signals. And (this is not yet an accepted scientific view) there might be quantum effects, which alone would be enough to ruin the simple neural network model (which is a first approximation, for sure). And I am not even talking about the spiritual domain here.
