Choice vs. Complexity -- Can you have one without the other? 🫣😫

published about 1 month ago
4 min read

Simplicity is at the heart of our desire to use cloud-native application methodologies. Service-based applications are designed to decrease complexity in individual service components. Using cloud-native infrastructure focuses and reduces our available infrastructure choices. Simplicity is core to virtually all cloud-native patterns.

But one of the fundamental tenets of modern application development is actively working against this desire for simplicity. You see, modern application architectures encourage team empowerment, which brings decision-making down to the lowest logical level of the organization. Modern application methodologies enable distributed decision-making at those lowest levels.

But how much choice should you give your development teams in building their applications? The answer may not be as simple as it seems.

Choice and Your Cloud-Native Teams

Deciding how much choice to give your teams is not an easy decision.

On the one hand, we want to give our development teams the freedom to decide how they design, develop, and operate their applications. Empowered teams are innovative teams. The more choice you give your development teams, the more they can innovate. This innovation can lead to many architectural and product advantages, including more customer-centric solutions and faster responses to change. The result is typically a shorter time to market, more competitive products, higher reliability and availability, and ultimately happier, more engaged teams.

However, choice has a downside. The same characteristic that brings you innovative, customer-oriented solutions also works against simplicity. Increased choice means increased variation in the decisions made within your cloud-native applications, and more variation increases overall application complexity. Put simply, the more choices you give your teams, the more variations they will use; the more variations in use, the more complex your overall application becomes.

You see, choice brings complexity, at the cost of simplicity.

Early on, choice empowers your organization and fuels innovation through the cloud-native processes you are utilizing. But as time goes on, the cloud-native processes that initially empowered your organization can work against it in the form of increased complexity.

The more you empower your teams, the more complex your application becomes, and the less supportable it is long term.

Obviously, this counter-intuitive result is not what you expect, nor what you want for your organization.

Effectively managing knowledge is fundamental to reducing complexity in any application: it reduces cognitive load and ultimately improves maintainability. But long-term knowledge management is often at odds with innovation and choice.

How do we enable our teams without hurting our long-term maintainability?

Managing Decisions with Sandboxing

This is the main idea behind sandbox policies. A sandbox policy is a framework given to your service teams that defines the criteria and boundaries for the decisions they are empowered to make on their own.

In a sandbox model, your cloud-native service teams are encouraged to make any decision that meets their team's needs and goals, as long as the decision fits within a well-established set of sandbox policies.

What’s an example of a sandbox policy? It might be something like: “Your team can develop its applications in any programming language contained in the following list of languages.” By specifying an allowed list of programming languages, you are giving your teams choice that encourages innovation. Yet by restricting the size of the list, you keep their decisions from straying so far from the decisions made by other teams in the organization that they increase overall application complexity. If most developers in your application are using Go or Python, you may not want one team going off and developing a service in Perl or C#.

Sandbox policies can be created around any decisions that are pushed down within the organization:

  • What API methodology are we allowed to use in our service design? Procedural or asynchronous? Web-based? REST? REST light?
  • What execution environment can we use to operate our service? Serverless? Containers? Bare metal?
  • What third-party plugins can we use?
  • What can we use to monitor our service?
  • What are the required security policies and systems?
  • What testing strategy should we use?

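Policies like these are simple enough to encode as data and check automatically before work begins. As a minimal sketch (the allowed lists, function name, and policy contents below are hypothetical illustrations, not from the article), an out-of-sandbox choice can be flagged for escalation rather than silently allowed:

```python
# Hypothetical sketch: a sandbox policy encoded as data, with a check
# that flags out-of-sandbox choices for escalation. The specific
# allowed lists here are illustrative, not prescriptive.
ALLOWED_LANGUAGES = {"go", "python", "typescript"}
ALLOWED_RUNTIMES = {"serverless", "containers"}

def within_sandbox(language, runtime):
    """Return (ok, violations): ok is True when every choice falls
    inside the sandbox; violations lists anything needing escalation."""
    violations = []
    if language.lower() not in ALLOWED_LANGUAGES:
        violations.append(f"language '{language}' is outside the sandbox")
    if runtime.lower() not in ALLOWED_RUNTIMES:
        violations.append(f"runtime '{runtime}' is outside the sandbox")
    return (not violations, violations)
```

With this sketch, a team choosing Go on containers passes without review, while a Perl service produces a violation that routes the decision to the higher-level authority discussed below.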
The team has many decisions to make, and sandboxing gives them choices and options, but also provides boundaries and protections. As long as the decision they want to make is within the walled garden of the sandbox, all is good.

But what if a team wants to make a decision that goes outside of the sandbox? There certainly are cases where this can happen: a primarily Linux-based application may have a service that requires Microsoft Azure, or a service team may want to bring in a new tool that has never been used before. Exceptions do come up.

In these cases, the decision must be run by a higher-level decision-making authority. In most companies, this is typically an architecture team, a technical policy board or steering committee, or perhaps an executive authority such as the CTO.

The decision is then made in the context of other, related decisions. Ultimately, the goal is to give the development teams the flexibility they require without allowing decisions that inappropriately increase technical debt, decrease your ability to manage the application, or unduly increase long-term complexity.

In a typical organization, sandbox policies themselves are defined and created by the same decision-making authority. As teams request exceptions, the policy itself may be adjusted, changed, and ultimately evolve into a better and more complete policy.
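That feedback loop can be pictured in a few lines of code. In this hedged sketch (the class, method names, and teams are hypothetical), an approved exception is logged for audit and folded back into the policy, so the sandbox itself grows over time:

```python
# Hypothetical sketch of the exception loop: the review authority's
# approval both grants the exception and evolves the policy itself.
class SandboxPolicy:
    def __init__(self, allowed_languages):
        self.allowed_languages = set(allowed_languages)
        self.exception_log = []  # audit trail for the review authority

    def request_exception(self, team, language, approved):
        """Record the review decision; approvals extend the sandbox."""
        self.exception_log.append((team, language, approved))
        if approved:
            self.allowed_languages.add(language)
        return approved

policy = SandboxPolicy({"go", "python"})
policy.request_exception("billing-team", "rust", approved=True)
# The sandbox has now grown: "rust" is available to future teams too.
```

The design choice worth noting is that exceptions are never one-off waivers: each approval updates the shared policy, which is how the policy evolves into a better and more complete one.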

All of this serves the same goals: giving teams choices and options for innovation without letting unchecked complexity endanger the long-term maintainability of the application.

Having sandbox policies is essential to keeping your cloud-native organization healthy, and your application manageable and sustainable, in the long term.

This article, written by Lee Atchison, first appeared in Container Journal.

Software Architecture Insights with Lee Atchison

Lee Atchison is a software architect, author, public speaker, and recognized thought leader on cloud computing and application modernization. His most recent book, Architecting for Scale, 2nd Edition (O’Reilly Media), is an essential resource for technical teams looking to maintain high availability and manage risk in their cloud environments. Lee has been widely quoted in multiple technology publications, including InfoWorld, Diginomica, IT Brief, Programmable Web, CIO Review, and DZone, and has been a featured speaker at events across the globe.
