December 07, 2020
Have you suffered on a project due to lousy tool choices? (“Tool” can mean many things, but in this context, we’re talking about developer tools: utilities, test runners, DevOps tools, libraries, and frameworks.)
Is some far-off silo team trying to force some enterprise-grade hot garbage on your crew because the sales rep takes them out for sushi?
Is it finally your turn to make some choices, and you don’t want to screw it up?
As someone who’s been building software for well over a decade, I’ve spent a lot of time in discussions about picking tools and suffering from the consequences of those discussions. This post will equip you with some tips to help you avoid shooting yourself in the foot.
Tools exist to support people and processes. For example, you should adopt a CI/CD tool to support your agile process. However, tools don’t create quality processes. People do that. If you adopt a CI/CD tool like GitLab or Azure DevOps, there’s no guarantee that you’ll improve your DevOps practices. Adopting Azure DevOps isn’t going to fix heavyweight change approvals. If you only release every six months, the automation isn’t going to help you. Tooling makes it easier to make those changes, but it’s on you to do the heavy lifting.
I’ve seen too many organizations bring on a tool and claim victory without changing the underlying processes. Even if you’re stuck with a legacy DevOps tool, committing to the necessary cultural changes will let you crush an organization that pairs fancy tools with flawed processes. Good tooling choices can help you support a good culture, but the culture needs to come first.
You should strongly prefer code- and command-line-based tools. You can easily integrate command-line tools into automation scripts, batch processes, and DevOps pipelines. I’ve seen many complex enterprise tools that have cutesy GUIs, but you can’t run them from the command line. You can’t automate a cutesy GUI.
Code-based tools are also more useful than their GUI-only counterparts. If you can represent your configuration as code, you can store that code in source control and get a full history of every change made to it. It’s also a lot easier to share a code file than a sequence of 47 mouse clicks in a GUI.
You don’t have to use command-line only tools. For example, Postman is GUI-based, but its configuration can be saved to a .json file and run from a command line. I’m all for making things usable, but usability includes the ability to run the tool from the command line.
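As a sketch of what that buys you: a collection you build by clicking around the Postman GUI can be exported to JSON, committed to source control, and run headless in a pipeline via Postman’s newman CLI. The job below is a hypothetical example (the file name and CI system are assumptions, not a prescription):

```yaml
# Hypothetical GitLab CI job: the same API tests a developer runs in the
# Postman GUI execute automatically on every push, no mouse required.
api-tests:
  image: postman/newman:alpine
  script:
    - newman run api-tests.postman_collection.json
```

That’s the test: if a tool’s configuration can live in your repo and run from a one-line script like this, it passes the automation bar.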
Have you ever had to adopt an annoying enterprise tool, chosen by a committee based on the number of feature checkboxes on the vendor’s website, that’s so complicated it requires a Ph.D. to operate and no screen loads in under five seconds?
As a rule, I think individual teams should choose their toolchain. The further away you get from the people using the tools, the harder it is to make good choices. However, in enterprise contexts, that’s not always politically or economically feasible. For example, you aren’t going to let each team pick their own API gateway.
Teams that get to pick a tool for everyone can fall for “checkbox syndrome,” where choices are made based on the checkboxes on the vendor’s website rather than real-world use. When you don’t have the context gained from day-to-day use, it’s easy to pick whatever tool has the most features, even though most teams would prefer something simpler.
Additionally, having a single team in charge of maintaining a tool incentivizes ease of management over ease of use. This is a classic case of optimizing for one group at the expense of every other group. Instead, a centralized team should take on complexity to make life easier for the teams it serves. There are two ways to do that.
First, support multiple tooling options. Offer a range of tools that covers the different levels of complexity teams need: simple tools for the teams that need simplicity, and complex tools for the teams that need the extra features.
Second, find a way to abstract the complexity away. Support a facade that covers the everyday use cases or build an outward-facing product that uses the complex tool as a base. For example, if your team wants everyone to use Kubernetes for app hosting, abstract away some of the complexity with scripts. If that’s not feasible, build templates and starter kits to speed things up.
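To make the second option concrete, here’s a minimal sketch of a wrapper script a platform team might publish so app teams never touch raw kubectl flags. Everything here is an assumption for illustration: the `deploy` command name, the registry URL, and the deployment naming convention.

```shell
# Hypothetical wrapper: app teams run one simple command instead of
# memorizing kubectl invocations. Setting DRY_RUN=1 prints the underlying
# commands instead of executing them, so the abstraction stays inspectable.

run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$@"   # show the command instead of running it
  else
    "$@"        # execute against the real cluster
  fi
}

deploy() {
  app="$1"
  tag="$2"
  run kubectl set image "deployment/$app" "$app=registry.example.com/$app:$tag"
  run kubectl rollout status "deployment/$app" --timeout=120s
}

# Example invocation (dry run, since we may not have a cluster handy);
# prints the two kubectl commands it would execute.
DRY_RUN=1 deploy myapp v1.2.3
```

The dry-run flag matters: a good facade hides complexity without hiding what it actually does, so teams can graduate to the underlying tool when they need to.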
Software developers tend to be optimizers. We have an innate drive to find a better way to do something, to refactor code for an extra ten percent performance, or to learn new keyboard shortcuts to be a little faster.
This constant drive for improvement is built into our development methodologies as well. Most flavors of agile have the concept of a retrospective and encourage continuous improvement. The DevOps movement is all about getting better and faster with each iteration.
Typically, this is a good thing. We should strive to get better. However, there’s one place where this drive towards maximization doesn’t always pay off. That place is tool selection. If you want to suffer less in your software career, I’d suggest a new strategy. Instead of picking tools based on being “the best,” find tools that work well enough and move on.
A tool that isn’t perfect, but is usable and fast, will beat a purpose-built “perfect” tool most of the time. Pick tools that are simple and easy to change. That way, when you make a terrible choice, you can reverse course. A collection of simple tools will generally beat a bloated “enterprise” solution.
As a developer, I love trying out new things and finding new ways to do my work. Learning about and adopting new technologies is one of the great things about being a developer.
Picking tools doesn’t have to be a painful process. Look for simple tools that are easy to integrate with your existing pipelines and code bases. Avoid bloated enterprise solutions and bleeding-edge hipster tools. Focus on people. With a little pragmatism and forethought, you can find what you need to get the job done and get back to crushing bits.