Roman Suzi
2 min read · Aug 31, 2024

--

Interesting take. I am not quite sure which kind of logic is really meant to be discovered when a lot of human-produced text is put into genAI memory, but even mathematicians entertain different kinds of logic. There is even a logic for ethics, there is game theory, and so on.

In computer programming there is a great difference between reasoning about single-threaded sequential execution and reasoning about a multi-agent, inherently parallel system.

So basically, even though some logical patterns may be universal, there are kinds of logic that specialized humans are skilled at and LLMs probably are not. (Even though I've noticed ChatGPT is useful when thinking about theorem proofs in a proof assistant, I suspect it somehow had access to the solutions.) The hypothesis of inherent logic could probably be put to a costly test if, for example, a vast corpus of love-story literature were the only source for an LLM. Which kind of logic would arise then?

As for children learning language, I doubt kids are exposed to anything like the amount of data AI is. Let's say a child hears even 5,000 words a day and learns to talk at the age of 3; that's just a little over 5,000,000 words. And I suspect many of those repeat, capturing common routines. So where were the "parameters" tweaked? There is just not enough training material in one's personal life! OK, I can admit there are deictic gestures and other kinds of "reinforcement", but some kind of "transfer learning" should still happen from innate brain structures (evolution, anyone?), or we should bring in some kind of "world information field" / "world consciousness", or both, to explain the phenomenon. It may also be that the human brain is more intricate than artificial neural networks, e.g. it uses some kinds of low-frequency chemical signals for self-control. (And I've not seen ANNs where parts of the network can mutually and temporarily shut each other down or tune each other; correct me if I am wrong.)
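The back-of-the-envelope estimate above is easy to check; a minimal sketch (the 5,000 words per day and age 3 are just my assumed figures, not measurements from any study):

```python
# Rough estimate of total words a child hears before learning to talk.
# Both inputs are assumptions made for illustration, not measured data.
words_per_day = 5_000   # assumed daily exposure
years = 3               # assumed age at which the child talks

total_words = words_per_day * 365 * years
print(total_words)  # 5475000 -- a little over five million words
```

Even with these generous assumptions, the total is orders of magnitude below the token counts reported for LLM training corpora.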

I definitely agree that complexity is the stimulus for developing logic. In a simple, predictable environment, natural processes will hardly lead to involved logic and pattern recognition. But to gather more evidence for the idea that the logic coming from current LLMs is universal, I think we need simulations of different environments: for example, 3D marine life, social insect colonies, etc. Maybe someone is researching such things at this very moment.

