Apr 3, 2023
Setting Chinese room arguments aside, language models are still useful even if they are just a spellchecker on steroids or a big, lossy associative memory — still fit for purpose.
ChatGPT produces a wow effect even without the ability to do abstraction, logic, or arithmetic reliably, which suggests that a vast enough memory can mimic a lot of human thinking.
That said, most people drive cars without understanding how the engine works. Understanding is a multilayered thing, all the way down to the origin of our Universe.
Yes, LLMs are overhyped at the moment (as many technologies were before them) — so what?