Koen Verbeeck lays out a recommendation:
In this blog post I’ll talk about another of those rules/mantras/patterns/maxims:
build once, add metadata
I’m not sure if I’m using the right words; I heard something similar in a session by Spark enthusiast Simon Whiteley. He said you should write code only once, but make it flexible and parameterized, so you can add functionality just by adding metadata somewhere. A good example of this pattern can be found in Azure Data Factory: by using parameterized datasets, you can build one flexible pipeline that can copy, for example, any flat file, no matter which columns it has. I have blogged about this:
Click through to learn more about the concept, as well as some tips on how you’d do that in various data movement products (e.g., SSIS, ADF, Logic Apps).
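To make the idea concrete, here is a minimal sketch of the "build once, add metadata" pattern in plain Python. This is not ADF itself; the `METADATA` list and the `read_text`/`write_text` callables are hypothetical stand-ins for a control table and for blob/file storage. The point is that the copy logic never hard-codes file names or columns, so supporting a new file means adding a metadata entry, not writing new code.

```python
import csv
import io

# Hypothetical metadata: each entry describes one flat file to ingest.
# In Azure Data Factory this would live in a control table or be passed
# into parameterized datasets; here it is just a list of dicts.
METADATA = [
    {"source": "customers.csv", "target": "staging/customers.csv", "delimiter": ","},
    {"source": "orders.csv", "target": "staging/orders.csv", "delimiter": ";"},
]


def copy_flat_file(meta, read_text, write_text):
    """Generic copy step: handles any flat file, regardless of its columns."""
    reader = csv.reader(io.StringIO(read_text(meta["source"])),
                        delimiter=meta["delimiter"])
    rows = list(reader)  # column names/count are never hard-coded
    out = io.StringIO()
    csv.writer(out, delimiter=meta["delimiter"]).writerows(rows)
    write_text(meta["target"], out.getvalue())
    return len(rows)


def run_pipeline(metadata, read_text, write_text):
    """One pipeline, driven entirely by metadata."""
    return {m["source"]: copy_flat_file(m, read_text, write_text)
            for m in metadata}
```

Adding a third file to ingest is then a one-line metadata change rather than a new pipeline, which is the essence of the pattern Koen describes.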