Jan 21, 2023
ChatGPT is (as it also (dis)claims) a language model. It can find and structure realistic-looking sentences or software code, but it can't really produce something with non-trivial logic in it.
Thus, it can be great for writing boilerplate, and it can dig up some common-sense things for you, but it does not really check the facts.
In other words, it masters syntax and can pull some facts from its vast knowledge base, but it fails at even trivial logical inference.
Well, perhaps the same can be said about bad software developers. For me it is revealing how much human logic depends on language. Huge memory vs. fluid intelligence.