The optimization of a multicloud architecture, simply put, is configuring the technology so the architecture meets the business requirements at the lowest possible cost. For each dollar spent on cloud technology, you want the maximum value coming back to the business.

The truth is that few cloud architectures are fully optimized. I’ve talked about the bias towards complexity as a primary culprit. However, the root cause is waiting until the architecture is deployed and in operation to think about optimization. By then, it’s too late.

So, what are the primary reasons that multicloud architecture falls way short of being fully optimized? Here are just three, and how to address them:

Leveraging too much technology. A cousin to complexity is excess. Multicloud architects and development teams often attempt to toss as much technology as they can into the multicloud mix, typically for all types of requirements that “might” materialize. You may need only one service governance technology, but you’re using three. You can get away with one storage resource, but you’ve got seven. You end up with more cost and no additional value to the business.

This is a tough problem to solve because most architects are attempting to build for a future that has not arrived yet. They pick a database with built-in mirroring technology because they may move to fully distributed databases, although probably not for several more years. So the number of database types goes from two to four without a really good reason. Keep in mind that you should be building for “minimum viability” to get near an optimized state.

Not building for specific requirements. Strict, detailed, immediate requirements need to be well understood because they primarily determine what your multicloud architecture will be. They outline the patterns of problems that the architecture needs to solve. However, I see a large number of multicloud projects attempting to design and build for general requirements that have little foundation in what the business needs now.

I know this is the “it depends” answer that people hate from us consultants. However, in order to move towards full optimization, the requirements determine your minimum viable multicloud architecture, not the other way round. 

Not architecting for change. Those charged with building multiclouds, even if they are approaching full optimization, often neglect to understand how to design for agility. Here, agility means understanding that part of the architecture needs to adapt easily to changes that will occur in the future. Numerous architecture tricks accomplish this, and the basic concepts are pretty easy to understand.

Make sure to place volatility into its own domain. You’re looking to avoid a complete redo of a database or application over simple changes. For example, say you’re leveraging data virtualization between the databases and applications: a mapping tool lets you change the virtual schema as many times as you need, without forcing costly and risky changes to the physical databases in the multicloud.
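To make that idea concrete, here is a minimal sketch of the data-virtualization pattern. All names and the in-memory "stores" are hypothetical stand-ins for real databases and a real virtualization product; the point is only that applications query virtual fields, and a remappable layer decides which physical store and column backs each one.

```python
class VirtualSchema:
    """Maps virtual field names to (physical_store, physical_column) pairs."""

    def __init__(self, mapping):
        self.mapping = dict(mapping)

    def remap(self, virtual_field, store, column):
        # Change where a virtual field resolves to. Only this mapping
        # changes; the physical databases are untouched.
        self.mapping[virtual_field] = (store, column)

    def resolve(self, virtual_field):
        return self.mapping[virtual_field]


class VirtualQueryLayer:
    """Fetches records through the virtual schema, hiding physical layout."""

    def __init__(self, schema, stores):
        self.schema = schema
        self.stores = stores  # store name -> rows keyed by record id

    def get(self, record_id, virtual_fields):
        result = {}
        for field in virtual_fields:
            store_name, column = self.schema.resolve(field)
            result[field] = self.stores[store_name][record_id][column]
        return result


# Two physical stores in different clouds, with different column names
# for the same logical data.
stores = {
    "cloud_a_db": {1: {"cust_nm": "Acme", "region_cd": "us-east"}},
    "cloud_b_db": {1: {"customer_name": "Acme", "region": "us-east"}},
}

schema = VirtualSchema({
    "customer_name": ("cloud_a_db", "cust_nm"),
    "region": ("cloud_a_db", "region_cd"),
})
layer = VirtualQueryLayer(schema, stores)

print(layer.get(1, ["customer_name", "region"]))
# The data moves to another cloud; the application keeps using the same
# virtual fields, and only the mapping changes:
schema.remap("customer_name", "cloud_b_db", "customer_name")
print(layer.get(1, ["customer_name", "region"]))
```

The volatile part, which database holds what, is isolated in the mapping, so churn there never ripples into application code.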

This is a bit different from deploying too much technology, since change, including its degree and frequency, is itself a requirement. This is not about assumptions or hedging bets.

Architecture continues to be more art than science. However, by understanding the emerging best practices, people designing and building multiclouds can return much more value to the business. That’s the objective.
