As with every polarized situation, instead of trying to ban "copy-paste" / "AI-paste," let's see whether there is a way for AI to be genuinely helpful.
First of all, AI needs good inputs (knowledge of the problem domain) to produce meaningful code. It works best within a very narrow scope: a well-defined "slot" to fill inside a framework. When separation of concerns is in place and the architecture defines the slots to fill, AI can bring real benefits.
Second, however, someone has to design an architecture that makes this way of developing software possible (roughly speaking, a component architecture). Here LLMs can still be useful, because they hold knowledge of many domains, including how to decompose software into useful parts. Senior developers can own this work, leaving juniors (or LLMs) to contribute without damaging code quality.
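To make the "slot" idea concrete, here is a minimal sketch in Python. All names (`DiscountRule`, `BulkDiscount`, `checkout_total`) are hypothetical: the point is only that the senior developer owns the interface and the wiring, while the slot implementation is small, self-contained, and safe to hand to a junior developer or an LLM.

```python
from typing import Protocol

class DiscountRule(Protocol):
    """The 'slot' defined by the architecture: a narrow, reviewable contract."""
    def discount(self, subtotal: float) -> float:
        ...

class BulkDiscount:
    """A slot-filling implementation: small enough to write and review in isolation."""
    def __init__(self, threshold: float, rate: float):
        self.threshold = threshold
        self.rate = rate

    def discount(self, subtotal: float) -> float:
        # Apply the rate only once the order is large enough.
        return subtotal * self.rate if subtotal >= self.threshold else 0.0

def checkout_total(subtotal: float, rules: list[DiscountRule]) -> float:
    # Framework code owned by the architecture, not by the slot author.
    return subtotal - sum(rule.discount(subtotal) for rule in rules)

print(checkout_total(200.0, [BulkDiscount(threshold=100.0, rate=0.1)]))  # 180.0
```

Because each rule only sees a subtotal and returns a number, a generated implementation can be wrong only in a small, testable way; it cannot corrupt the surrounding design.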
In short, there is a need to rethink how software is built. Certain decompositions of the code can make it more approachable for both junior programmers and LLMs.
The article's concern is that LLMs are misused by developers trying to continue in their old ways (I suspect not much has actually changed: the same people wrote code the same way before, just more slowly). This is a very valid point.
Another way to make LLMs more useful is to choose popular languages and syntax, which LLMs have most likely seen more of during training.
I am basing these remedies on the assumption that the essence of software code is not syntax or structure, but its faithful representation of domain knowledge (of both the problem and the solution), turned into practical actions. Nothing else matters; the rest are just means to achieve it. From this angle, the question is: does an LLM already contain that knowledge? If you are writing yet another TODO-list app, the answer is yes. If you are formalizing very special business know-how, the answer is most probably no. Even then, however, LLMs can help organize the knowledge you, as a developer, are gathering from domain experts.
If software is understood as knowledge at its core, then technical debt can be redefined as a lack of knowledge, or as structures that prevent knowledge from being added, reviewed, or updated at the required rate: a kind of knowledge-management viscosity.
It's also possible to automate parts of code review with AI. For example, ask a large language model (LLM) to explain what a function does. If the "reverse-engineered" description matches the original intent behind the code, that's a positive sign; if it is significantly off, the discrepancy can be flagged. This kind of semantic roundtrip helps ensure that the code accurately reflects the intended functionality and design.
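The roundtrip can be sketched in a few lines of Python. Everything here is illustrative: `ask_llm` is a stub standing in for a call to whatever LLM API you use, and the vocabulary-overlap metric is a deliberately crude placeholder for a real semantic comparison.

```python
def ask_llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return "Returns the total price of all items in the cart, including tax."

def words(text: str) -> set[str]:
    # Normalize: lowercase and strip trailing punctuation.
    return {w.strip(".,").lower() for w in text.split()}

def roundtrip_score(source: str, stated_intent: str) -> float:
    """Ask the model to describe the code, then measure how much of the
    stated intent's vocabulary the description covers (crude overlap metric)."""
    description = ask_llm(f"In one sentence, what does this function do?\n{source}")
    intent_words = words(stated_intent)
    return len(intent_words & words(description)) / len(intent_words)

code = """
def total(cart, tax_rate):
    return sum(item.price for item in cart) * (1 + tax_rate)
"""
intent = "returns the total price of all items including tax"

if roundtrip_score(code, intent) < 0.5:  # threshold is arbitrary; tune per project
    print("Flag for review: description diverges from stated intent")
```

In practice the comparison step would itself be a second LLM call ("do these two descriptions agree?") rather than word overlap, but the structure stays the same: generate a description, compare it to the recorded intent, and surface only the mismatches to a human reviewer.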