Roman Suzi
Jun 2, 2019


Well, thanks for the explanations. Yes, I understand about “driving” the design via tests; however, I can’t see the point of driving from some junior-level, first-thing-that-came-to-mind mess. TDD values KISS and YAGNI (“You aren’t gonna need it”) alright, but I still can’t see why the initial, pre-transformation level should be set to something like what is presented in the article (I am referring to those ugly hardcoded thresholds). It’s a lot of useless typing which has nothing to do with knowledge of the problem domain (it can have, in real life, in another case, of course). The only explanation I see is that it’s just an example. But then, in the 1970s–80s there were textbook examples of how to save memory by storing years as two decimal digits. Examples matter.
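To make the objection concrete, here is a minimal sketch of my own (an invented shipping-cost example, not code from the article) contrasting a hardcoded-threshold first pass with the form an experienced developer would write directly:

```python
# Hypothetical first pass with hardcoded thresholds, of the kind
# the article's early TDD steps produce.
def shipping_cost_v1(weight_kg):
    if weight_kg < 5:
        return 10
    if weight_kg < 20:
        return 25
    return 50


# The form an experienced developer writes directly: the same
# behavior, but the domain knowledge (the rate table) is data,
# not literals scattered through the control flow.
RATE_TABLE = [(5, 10), (20, 25)]
DEFAULT_COST = 50


def shipping_cost(weight_kg, rate_table=RATE_TABLE, default=DEFAULT_COST):
    for limit, cost in rate_table:
        if weight_kg < limit:
            return cost
    return default
```

Both versions pass the same tests; the second simply skips the useless typing.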

Also, about the “generalization” process: experienced developers can do it in their heads while reading the requirements; there is no need to “drive” anywhere. There is usually a sweet spot of just-generic-enough abstraction: anything more concrete requires more work than needed, while anything more generic requires extra work to encode the concrete logic. Experienced developers use intuition to land approximately at that sweet spot. Why can’t they drive by TDD from there? I can’t see any contradiction with YAGNI, because all lines of code are used in the final solution. The original problem is a special case of this one, but in my experience hardcoding parameters does not make development any faster (and creating variables with hardcoded parameters in their names certainly makes it slower, unless the programmer is a beginner). In other words, the right level of “genericity” carries the least amount of accidental complexity.
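For instance (an invented illustration, not code from the article), a name that encodes its own hardcoded value is exactly the extra work I mean:

```python
# Parameter hardcoded into the name: changing the value to 150
# forces a rename everywhere the constant is used.
THRESHOLD_100 = 100

def is_large_order_v1(total):
    return total > THRESHOLD_100


# The sweet spot: still one concrete value, no speculative
# configuration machinery, but the name states the role rather
# than the number, so the code survives the value changing.
LARGE_ORDER_THRESHOLD = 100

def is_large_order(total):
    return total > LARGE_ORDER_THRESHOLD
```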

“You don’t start with generic code, you start with tests that are specific which drives your code be more generic, that’s the “driven” part of the term.” — No. Tests are to be created first, and tests are to be specific, of course. It’s just that the solution does not need to be too specific: developers are smart and lazy.
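A toy illustration of that distinction (my own, in pytest style): the tests pin down concrete cases, while the solution is written at its natural level of generality rather than echoing the test data back:

```python
def total_price(prices):
    # Generic from the start: works for any iterable of numbers,
    # not just the cases listed in the test below.
    return sum(prices)


def test_total_price():
    # The tests are specific, as they should be...
    assert total_price([1, 2, 3]) == 6
    assert total_price([]) == 0
    # ...but nothing obliges the implementation to echo these
    # exact inputs back as a chain of if-statements.
```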

“It’s not bad if you already know the solution, only there’s no test driving you to think that, which makes our hard to pair program with someone who doesn’t have the same knowledge you have”. Well, by this logic, if my pair has not learnt multiplication yet, should we write tests and code that only do addition? Pair programming exists also for the sake of knowledge exchange. So once tests like “2 * 3 * 4 == 24” are written, I can kindly explain that one does not need to implement that as “(3 + 3 + 3 + 3) + (3 + 3 + 3 + 3)”, or, even worse, pile up dozens of rubber ducks representing “1” to model multiplication. If a programmer does not exercise abstraction at increasing levels of complexity over months and years of work, I do not believe any “DD” will help.
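Spelled out in code (my own rendering of the example above), the contrast looks like this:

```python
def multiply_by_repeated_addition(a, b):
    # The "(3 + 3 + 3 + 3) + (3 + 3 + 3 + 3)" approach: it passes
    # the test, but re-derives multiplication instead of using it
    # (and only handles non-negative a).
    result = 0
    for _ in range(a):
        result += b
    return result


def multiply(a, b):
    # Where the knowledge-exchange conversation should end up.
    return a * b


def test_multiplication():
    assert multiply(2 * 3, 4) == 24
    assert multiply_by_repeated_addition(2 * 3, 4) == 24
```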

“…than what you really need to get the job done”. I fully agree that it’s not good to do more than is required; however, not all “extras” are born equal. First of all, intuition can still be applied to what we do: requirements may be deficient, incomplete, written by the incompetent, lost in translation, etc. At the very least, the developer should check whether we really want only the “U” and “D” from CRUD, or whether there should also be a “C” and an “R”. And while we could connect everything point-to-point in a rack of servers (it would get the work done, right?), for some reason there are modular cable systems, and wires are guided in a certain tidy pattern. Following that last analogy, the suggestion is to first connect everything in an ad hoc fashion and then transform it into some better structure. I disagree with that. I do not want to add transformations; I want to remove unnecessary ones.
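For example (a hypothetical interface of mine, not taken from any requirements document), sketching the full CRUD surface is a cheap way to make that question explicit before committing to only update and delete:

```python
from typing import Optional


class UserRepository:
    """Hypothetical interface: the requirements mentioned only
    update and delete, but sketching all four CRUD operations
    makes the gap visible so it can be confirmed or rejected
    with the stakeholders before any code is "driven"."""

    def create(self, user: dict) -> str: ...
    def read(self, user_id: str) -> Optional[dict]: ...
    def update(self, user_id: str, fields: dict) -> None: ...
    def delete(self, user_id: str) -> None: ...
```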
