Roman Suzi
Feb 14, 2025


> Let's say you have a tiny feature in your product that smells like ETL.

> You don't know how much it will be used, whether you will build on top of it, or whether it will become critical.

> The question is how much do you invest to learn about ETL? Do you spend a day, a week, a month, a half a year?

I would learn about ETL top-down, and along the way discover that there is also ELT. I do not need to know every detail, but my primary architectural considerations will be based on a kind of dominance decision rule, which usually means: if ETL 2, 3, ... N come along later, I have already thought about them, and little in ETL 1 prevents turning it into a more stable solution. In practice this usually boils down to not hardcoding environment parameters like paths, and to thinking of natural enhancements, e.g. "entity": {"title": "ETL1"} instead of "entity": "ETL1" (broadly speaking).
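To make that concrete, here is a minimal sketch of what I mean (all names, paths, and the `ETL_INPUT_PATH` variable are illustrative, not from any real system): environment-specific values stay out of the code, and the entity is an object rather than a bare string, so a later ETL can add fields without a breaking change.

```python
import json
import os

# Path comes from the environment, not hardcoded into the pipeline.
INPUT_PATH = os.environ.get("ETL_INPUT_PATH", "/tmp/etl1/input.json")

# "entity" is an object with a "title" field; adding "owner",
# "schema_version", etc. later will not break consumers that
# only read "title".
job_config = {
    "entity": {"title": "ETL1"},
    "source": {"path": INPUT_PATH},
}

def entity_title(config: dict) -> str:
    """Read the title the extensible way; a bare string here would
    force a schema migration as soon as a second attribute is needed."""
    return config["entity"]["title"]

if __name__ == "__main__":
    print(json.dumps(job_config, indent=2))
    print(entity_title(job_config))
```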

The key point is that if this kind of analysis becomes a habit, it yields better results, even if there are "false positives", such as something being slightly over-designed and later unused.

For example, if you're using a third-party vendor (Vendor A), design your system in a way that allows for easy integration of Vendor B. This is simple to achieve if planned from the start and aligns with good software engineering practices: abstracting dependencies behind an anti-corruption layer (a Domain-Driven Design concept). If all small coding decisions follow these principles, then years down the line, replacing Vendor A or adding Vendor B won't require modifying a database schema with millions of records (and who knows what kind of rush you will be in then).
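A hedged sketch of that anti-corruption-layer idea (all names here, such as `PaymentGateway` and `VendorAGateway`, are hypothetical): domain code depends on a small interface, and each vendor gets its own adapter, so adding Vendor B later touches only one module and never the stored data.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The interface the domain understands; no vendor types leak through."""

    @abstractmethod
    def charge(self, customer_id: str, amount_cents: int) -> str:
        """Charge a customer and return an internal transaction id."""

class VendorAGateway(PaymentGateway):
    """Adapter that translates between our domain and Vendor A's API."""

    def charge(self, customer_id: str, amount_cents: int) -> str:
        # Here we would call Vendor A's SDK and map its response
        # to our own identifiers; faked for the sketch.
        return f"txn-a-{customer_id}-{amount_cents}"

def checkout(gateway: PaymentGateway, customer_id: str, amount_cents: int) -> str:
    # Domain logic sees only PaymentGateway; a future VendorBGateway
    # can be swapped in without changing this function.
    return gateway.charge(customer_id, amount_cents)

if __name__ == "__main__":
    print(checkout(VendorAGateway(), "c42", 1999))
```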

I would say that someone who can quickly traverse multiple levels of abstraction in practice will recognize the sweet spot of what needs to be done now while considering risks. It is also (a) more fun, (b) provides opportunities to learn, and (c) starts to pay off after 2-3 years of practice. The problem is that there's always a rush. Somehow we have plenty of time to fix things later, but not to make them better the first time. And by "better" I don't mean "thinking of all possible scenarios", but building principles of extensibility into the foundation of what we develop.

I am not a fan of refactoring. I see from everyday practice that it is possible to factor properly as you go.

However, different developers' minds work differently. Maybe some psychological types are at play or something, so my top-down approach might be good for, say, 5-15% of programmers. This is something I still do not understand.
