Anybody practicing TDD or BDD knows that it is important to abstract dependencies, especially external dependencies you have no control over. But that might not always be enough.
Last year a few co-workers and I were in a situation where we all needed to take a rather destructive update from a common dependency in several projects. The change was destructive since it involved renaming a dependent assembly that was also used directly by our projects. Let me try to explain: my project had two dependencies, A and B. However, A in turn depended on B and C. The change was to rename B to D. Not a very common case, but it highlighted something interesting.
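To make the scenario a bit more concrete, here is a minimal, purely hypothetical sketch (A, B, MyProject, Widget and Processor are all made-up names) of code that uses B both directly and indirectly through A's public API:

    // Purely hypothetical illustration of the dependency shape described above.
    // Imagine namespace B lives in assembly B.dll and namespace A in A.dll,
    // with A exposing B's types in its public API.
    namespace B
    {
        public class Widget { }
    }

    namespace A
    {
        public class Processor
        {
            public void Process(B.Widget widget) { /* uses B (and C) internally */ }
        }
    }

    namespace MyProject
    {
        public class ReportService
        {
            public void Run()
            {
                // The project touches B both directly and through A's API,
                // so renaming assembly B to D breaks both usages at compile time.
                var widget = new B.Widget();
                new A.Processor().Process(widget);
            }
        }
    }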
For my co-workers who did not test the code depending on this assembly, the update was virtually impossible - they could not even get it to compile. I had abstracted the dependency, so it was easy to make my code compile, but since the component was used by a lot of assemblies I ran into various assembly loading problems.
It turned out there was only one co-worker who could actually take the change. He had not only abstracted the dependency but also put the abstraction in a separate assembly, so the dependency was isolated too. As you might imagine, the change was rolled back and redesigned to not include the assembly renaming, but it made me think.
Just abstracting your third party dependencies is not enough to minimize the risk of a dependency breaking you when there is an update. You must also put the abstraction in a separate assembly so it is isolated. Naturally you can do this on demand to save yourself some work...
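As a rough sketch of what I mean (ILogger, ThirdPartyLogger, OrderService and the assembly names are all made up for illustration), the abstraction and its adapter go into their own assembly, and the rest of the code references only that assembly:

    // --- MyProject.LoggingAdapter.dll (the only assembly that knows about the third party) ---
    namespace MyProject.LoggingAdapter
    {
        // The abstraction the rest of the codebase depends on.
        public interface ILogger
        {
            void Log(string message);
        }

        // The adapter wrapping the third-party API. If that API is renamed or
        // changed, only this assembly needs to be touched and recompiled.
        public class ThirdPartyLogger : ILogger
        {
            public void Log(string message)
            {
                // Call into the real third-party library here, e.g.:
                // ThirdParty.Logging.Logger.Write(message);
            }
        }
    }

    // --- MyProject.Core.dll (references MyProject.LoggingAdapter.dll, never the third party) ---
    namespace MyProject.Core
    {
        using MyProject.LoggingAdapter;

        public class OrderService
        {
            private readonly ILogger _logger;

            public OrderService(ILogger logger)
            {
                _logger = logger;
            }

            public void PlaceOrder()
            {
                _logger.Log("Order placed.");
            }
        }
    }

With a layout like this, a rename such as the B-to-D change above only forces a change and rebuild of the adapter assembly; everything else keeps compiling against the same abstraction.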
So remember: abstraction is nothing - isolation is everything!
Dear Cellfish,
I don't think that more complexity, more compile time, more deployment, more IO, etc. is a fair price for handling dependency changes gracefully.
Further reading: https://www.simple-talk.com/dotnet/.net-framework/partitioning-your-code-base-through-.net-assemblies-and-visual-studio-projects/
Best regards,
Daniel
@Daniel Fisher: I do not completely agree with the article you referenced, since it seems to imply that if you have a large project you always compile everything. For a large project I do not think I would ever want to compile everything at once. I think you want to separate your project into smaller pieces that are built separately and then shared in binary form, either through a private NuGet server or a binary drop folder. The referenced article kind of implies building everything at once, since it mentions building all projects into a common bin folder.
On the projects I have worked on, putting everything into one single assembly would not have let me compile in a short enough time for me to be happy when I needed to make a change.
This does not mean that the article is giving bad advice. I agree some people create way more projects than they need, and that is bad. I even said above that physical separation is something you can do on demand to save yourself some work, since I described what I hope is a very rare scenario. The key point is to show that abstraction of dependencies is not always enough - isolation is the only way to be sure. But like they say in the movie Aliens, "nuke it from orbit, it's the only way to be sure" - the solution that solves all your problems is not going to be the best solution for all your problems.
I think it's a repeated pattern in discussions, and I also think it's almost always pointless to discuss, because what we (almost) always lack is context. All the techniques and ways to split projects etc. center around simple questions: how big is your codebase, and have you reached the "pain level" for the next scale-out move? That move could be the introduction of an IoC container, the introduction of vertical slices in the architecture (reducing build time by strongly reducing the longest path in the build dependency tree), the introduction of more sophisticated spec tests, and so on. There is never a right or wrong, just the right or wrong choice in relation to some metrics of a codebase, and as long as we don't define those metrics and simplify them into a shared mental "map", we will always stumble in the dark.
To be more specific: in your case, the codebase was sophisticated enough that the overhead of the added project was easily trumped by the problem of updating the 3rd party reference. "Sophisticated" in that case means a higher score in some kind of "3rd party change frequency vs. code read frequency" metric. In addition, technology (a.k.a. tooling) always removes the overhead, so having an alternative build tool which doesn't rely on VS (in fact, inside my company I'm writing one right now) and completely derives the build tree from conventions makes the extra project a non-issue.
Sorry for the long post, but it was just something on my mind. I should probably start blogging, as I see that pattern in discussions all over the place.