My first accounting job is still fresh in my mind. I was 26, working in finance on complex calculations called capitalized variances. As part of the valuation of a company’s inventory, capitalized variances account for price changes in production materials. These calculations can get tricky due to timing offsets.
I was trembling the first time I calculated the figures, as they were going to be sent directly to the CFO. At the time, I had to manage all of these figures in Excel, which was extremely stressful. Of all the accountants, analysts, and controllers who worked on consolidating the quarterly business results, my calculation was the one that would have the biggest impact on Operating Income and Profitability for the quarter.
I remember thinking how unnecessary it all was. Why was I forced to be stressed when the calculation could easily have been automated? I inherited an Excel spreadsheet for calculating the variances that was unnecessarily complicated. For this reason, usually only very senior Financial Analysts were allowed to use it. Despite lacking the seniority, I was given the sheets to work with and was simply terrified that I’d make a mistake.
After all, there were dead formulas in the file, date labels were wrong, and running simulations required changing data in multiple sheets. It was very easy to produce an incorrect estimate. I realized at that moment that new technology can make calculations faster, more transparent, and less error-prone, but only if it brings less complexity. Reducing complexity is key, and leveraging boring tech is the best way to ensure it.
To remove unnecessary complexity, I took the following steps:
- I began by cleaning up the data
- I then moved to the data models, making the modeling logic transparent and straightforward
- The final step was to use plain Excel
I was tempted to add VBA macros (or better yet, Python) to automate processes; however, I knew my successors might not have any experience with VBA and probably wouldn’t be able to understand my “cooler tech.”
The result was that calculating capitalized variances became so easy that I was eventually able to hand over the spreadsheet to more junior people. I managed to significantly reduce the complexity and increase my job satisfaction.
To Scale, make things easier… with Boring Tech
Scaling an organization works the same way as improving a single organizational process: make removing unnecessary complexity a priority. “Boring tech” might seem a bit vague, but it refers to tried-and-tested software that has been around for a long time and is widely understood.
Only after first adopting boring tech and maturing your digital organization can you consider employing more complex technologies. And this should only be done when it becomes absolutely necessary and has a clear purpose.
Boring tech helps us answer a key question:
How can I make my work easier and more efficient?
By avoiding impractical, unnecessary tech, you can make your work easier and more efficient. Fancy new features never outweigh sustainable, maintainable solutions.
SQL databases are examples of boring tech because they have been around for a few decades now. Most data professionals know how to work with them, and they are battle-tested.
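Part of what makes SQL “boring” is how little code a real business calculation needs. As a minimal sketch, here is the kind of price-variance calculation from the story expressed in plain SQL, run here through Python’s built-in sqlite3 module; the table and column names are made up for illustration, not taken from the original spreadsheet.

```python
import sqlite3

# "Boring" SQL at work: an in-memory database and one declarative query.
# Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE inventory (material TEXT, std_cost REAL, actual_cost REAL, qty INTEGER)"
)
conn.executemany(
    "INSERT INTO inventory VALUES (?, ?, ?, ?)",
    [("steel", 10.0, 12.5, 100), ("copper", 20.0, 18.0, 50)],
)

# Price variance per material: (actual cost - standard cost) * quantity
rows = conn.execute(
    """
    SELECT material, (actual_cost - std_cost) * qty AS price_variance
    FROM inventory
    ORDER BY material
    """
).fetchall()
print(rows)  # [('copper', -100.0), ('steel', 250.0)]
```

The same query would run, essentially unchanged, on almost any relational database built in the last forty years, which is exactly the point.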
Kubernetes, on the other hand, is not boring tech because it has not been in use for long enough. It can add a lot of value for DevOps, but it is not yet widely used or accepted by data professionals. This means that maintaining it is challenging, and it adds significant complexity to data operations. Nevertheless, Kubernetes can become boring tech! If we standardize it, support an uncomplicated deployment system, and remove unnecessary complexity from it, then it, like anything, can be operated by less specialized workers.
Knowns and unknowns of boring tech
Luca Rossi, for example, says that any technology has upsides and downsides. Paraphrasing these into the knowns-and-unknowns framework, we have:
- Known wins — things we know it is good at.
- Known failures — things we know it is bad at.
- Unknown failures — things it is bad at that we don’t know about yet.
By his definition, boring tech is good tech that has almost everything figured out:
- Many known wins — it is widely adopted and supported
- Many known fails — limitations are well known and documented
- Few unknown fails — we know it inside out because so many things have already been tried
New tech has many unknown failures because they haven’t been discovered yet, while boring tech has been tested and is widely understood.
Boring tech is widespread
Boring tech is so widespread that new tech tries to be compatible with it in many cases.
One of the easiest ways to migrate your data analytics solutions to the Cloud is to use the combination of data lake and SQL. This way, you can scale to petabytes of data using new tech, but you can still make your data available to people using good old SQL.
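The point of the data lake + SQL combination is that the analyst-facing interface stays the same while the engine underneath changes. A minimal sketch of that idea, under the assumption that the query text is engine-agnostic: here the query runs on Python’s built-in sqlite3, but the same string could be submitted to a cloud engine over a data lake (for example Athena, BigQuery, or Trino; those engine names are illustrations, not part of the original article), and the table and figures are made up.

```python
import sqlite3

# Old interface, new backend: analysts write plain SQL, and the storage
# engine underneath can be swapped without retraining anyone.
QUERY = """
    SELECT region, SUM(revenue) AS total_revenue
    FROM sales
    GROUP BY region
    ORDER BY region
"""

# Locally, the query runs against a small sqlite database; in the cloud,
# the same text could be sent to a SQL engine reading data-lake files.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 300.0)],
)
result = conn.execute(QUERY).fetchall()
print(result)  # [('APAC', 300.0), ('EMEA', 200.0)]
```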
If I had to guess, I would say that 80% of analytics today is still done in SQL. SQL was invented in the ’70s; it is an old but well-established way of querying and manipulating data. Data professionals from many career paths know SQL, and those who don’t can learn it in a few weeks.
You can also do a lot with more straightforward statistical tools. These are well implemented in programming languages like R and Python, they have been around for a while, and years of real-world usage have hardened them.
Boring tech is the best way to mitigate the talent shortage
One of the most significant advantages of using boring tech is that it allows you to leverage a vast talent pool with technical and domain knowledge.
There are two big reasons why boring tech tends to have a wider pool of talent:
- It has been around longer, so people have had more time to figure it out
- It is simple, so people can quickly learn how to use it
A better way to pick technology is to examine the type of talent you already have within your company as well as the type you and your company can attract from the job market. It’s a challenging exercise because every company wants to be as cool as, say, Google or Apple, but these companies have very different data challenges than most others.
But it doesn’t scale!
First of all, you can scale a single database instance to a few terabytes and thousands of users. Is it ideal? Sometimes yes, sometimes not. You should choose what is best for your situation, not what the masses are promoting at the moment. The critical bit is to catalog and manage the data exceptionally well; think of yourself as a librarian. Data doesn’t change as fast as technology does.
I am convinced that tech will continue to evolve, but solid, boring tech will remain foundational. SQL is here to stay; the same goes for many programming languages. Many UI-based technologies, however, will keep changing, which is fine as long as you are prepared to evolve with them.
Remember that before the Cloud, there was Big Data and Hadoop. Hadoop was a revolutionary set of tools that made it possible for companies to analyze increasingly big datasets. But Hadoop was complex, so a few startups were created to make it simpler. One of them was Cloudera. If your company went into data science before 2016-2017, it possibly went with Hadoop on-premises and a vendor like Cloudera. However, the world moved on in just a few years, and on-premises Hadoop lost its position as the gold standard; running exclusively in the Cloud became the new default. If your company was on Hadoop/Cloudera, you might have been forced to migrate to another technology after just five years. But here is the good news: your underlying data probably hasn’t changed! Your company has probably been using the same data structures for decades.
Remember: data is like a glacier; it changes very slowly.
OK then, when can I use new tech?
One of the hardest things for me is keeping up with all the new technology developments and estimating if and when a new piece of tech is worth trying. I try to use new tech sparingly and prudently. That said, I have a whole team at dyvenia constantly working on new tools and technologies, but even so, we go for boring tech whenever possible.
Ideally, before adopting a new technology, I want to prove its value first. As in my capitalized variance story, value can often be added by removing complexity from existing tools and processes first.
Another way to introduce new tech is to test it first on a few non-critical projects. That way the organization can start digesting the “new” parts of the technology.
New technologies should be adopted carefully, over time, with a plan and very strong communication around value and processes. The goal is to minimize the chances of failure and increase the chances of success. This requires a different mindset: not adopting new tech expecting amazing results, but adopting it while accepting the risks and preparing to deal with potential unknown failures.
Businesses undervalue boring technology. Usually their goal is to incorporate the latest technologies into their systems. Unfortunately, this is often done haphazardly, with little consideration given to the benefits of boring, i.e. established, technologies. Businesses built on established technologies are more profitable because it is easier to recruit for them and to build and maintain them.
This is why I highly suggest using a boring technology stack. You’re in the business of solving your customers’ problems; it doesn’t matter what technology you use. Focus on satisfying your customers and benefit from the speed and scalability that boring tech brings.
This article was originally published by dyvenia CEO Alessio Civitillo on his LinkedIn page.