The term cloud native gets thrown around a lot these days. And we nod along as if it were the panacea to all the software development issues we have ever had. But do we really know what it means? What value can we get? What does it take?
When we look up Cloud Native Computing on Wikipedia, we won’t get a much better understanding of the concept unless we have a technical background. Just consider this:
Cloud native computing is an approach in software development that utilizes cloud computing to “build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds”. Technologies such as containers, microservices, serverless functions and immutable infrastructure, deployed via declarative code are common elements of this architectural style.
This typically does not help much when we come from a business environment. In this article, we’ll introduce all the concepts that are necessary for a good understanding of Cloud Native Development and the related terms.
Cloud native even has its own foundation: the Cloud Native Computing Foundation (CNCF), launched in 2015.
In general, cloud native is an approach to building and running applications that exploits the advantages of the cloud computing delivery model. Cloud native is about how applications are created and deployed, not where.
The main objective of Cloud Native Development could be described as a new approach to developing solutions that leverages new technologies and culture.
This goes hand in hand with the core business goal which is business agility.
Business agility is the ability to compete and thrive in the digital age by quickly responding to market changes and emerging opportunities with innovative business solutions.
Business agility requires that everyone is involved in delivering solutions: business and technology leaders, development, IT operations, legal, marketing, finance, support, compliance, security, etc.
As you can see, the needs of business agility align with the objective of cloud native development.
Cloud native development helps companies achieve business agility mainly in the following three areas:
- Increased speed – It is possible to provision resources really fast, without the need to buy, install, and configure them. Cloud technologies allow fast piloting, fast delivery of software and its updates, and safe rollbacks (by keeping the previous version running).
- Increased quality – Very high reliability is achieved by duplicating resources and scaling. If some resource fails, other resources compensate for the failure and traffic smoothly transitions to the operational parts of the cloud. Data is typically duplicated for backup reasons. Cloud environments also comply with important business standards and offer increased levels of security.
- Lower costs – Elastic scalability allows us to provision only the resources currently needed for the given demand. We do not need to keep many nodes up and running overnight, for example. Companies then pay only for what they actually use (the pay-per-use model).
Through agile and lean project management practices, software is developed in small increments that are released early and often. There is no need to wait for a complete implementation to only see that the result does not fully meet our expectations.
If the project is going to fail, we want it to fail fast and at as little cost as possible.
Private, Public and Hybrid Cloud
We use the terms private, public, and hybrid cloud throughout the article, so it is good to clearly define what they mean.
First, there are some things all of these clouds have in common. Every cloud abstracts, pools, and shares scalable computing resources across a network. Every cloud type also enables cloud computing, which is the act of running workloads within that system.
In the Public Cloud, a service provider makes resources available to the public via the internet. In this case, you do not have to worry about the cost of local hardware or keeping it up to date. This provides opportunities for scalability and resource sharing that a single organization could not achieve on its own.
In the Private Cloud, an organization creates a cloud environment in its own data center and provides self-service access to compute resources within the organization.
All clouds become private clouds when the underlying IT infrastructure is dedicated to a single customer with completely isolated access.
Private Clouds no longer have to be sourced from on-premises IT infrastructure. Organizations can now build Private Clouds on rented, vendor-owned data centers located off premises, which makes location and ownership rules obsolete. This leads to several private cloud subtypes, including:
- Managed private cloud – companies use an environment that’s deployed, configured, and managed by a third-party vendor.
- Dedicated cloud – a cloud within another cloud, dedicated to a certain department, for example.
Finally, a Hybrid Cloud is formed when several clouds are connected together (through a network, for instance). Typically, a Private and a Public Cloud are connected to form a Hybrid Cloud. However, two Private or two Public Clouds can also form one.
And just so things are not too easy, there is one more type – the Multi Cloud. A Multi Cloud is a cloud approach made of more than one cloud service, from more than one cloud vendor (public or private). All Hybrid Clouds are Multi Clouds, but not all Multi Clouds are Hybrid Clouds. Multi Clouds become Hybrid Clouds when the individual clouds are connected by some form of integration or orchestration.
For example, services may be deployed in multiple clouds from multiple vendors: some on premises, some in Amazon EC2, and some in Google Compute Engine. This is an example of a Multi Cloud (and, once those services are integrated and communicate together, also of a Hybrid Cloud).
What do we already know from the past? What stages did we go through?
Originally, software development started in the monolithic era. All components were tightly coupled, made to smoothly work together, packed as a single solution that just got deployed and worked.
Monolithic architectures are simple to build, test, deploy, and monitor. Components in a monolith typically share memory, which is faster than service-to-service communication. This yields better performance (to some degree).
However, from the point of view of reliability, an error in any of the modules in the application can bring the entire application down.
When we want to update any piece of the application, we may find it extremely difficult. Due to the single large codebase and tight coupling, the entire application has to be re-deployed for each update.
Also because of the tight coupling, the application must use the same technology stack in all its components. This is very limiting in terms of vendor choice and typically leads to vendor lock-in.
However, it is important to say that, at the time, the monolithic architecture might have been the only possible solution due to hardware and other limitations.
Three-tier (Multi-tier) Architecture
The Three-tier, Multi-tier, or N-tier architecture was introduced to solve some of the drawbacks from the monolithic era.
By segregating an application into tiers, developers acquire the option of modifying or adding a specific layer, instead of reworking the entire application.
Some of the layers might be independently scalable. The user interface layer can scale independently, for example.
Due to the somewhat looser coupling, it is possible to use different technology stacks for the user interface and the business logic, for example.
The Three-tier architecture mitigates the lack of scalability and agility of the monolithic architecture. On the other hand, it keeps some of the previous drawbacks. For instance, poor reliability of a single tier can heavily affect the other tiers and thus the whole application.
Service Oriented Architecture
The service oriented architecture (SOA) promotes loose coupling between services. SOA separates functions into distinct units (so-called services), which developers make accessible over a network so that other developers can combine and reuse them when creating the final business applications.
The SOA manifesto was written in October 2009, and its six core principles perfectly reflect the main issues of its time (some of which persist to this day):
- Business value over technical strategy
- Strategic goals over project-specific benefits
- Intrinsic interoperability over custom integration
- Shared services over specific-purpose implementations
- Flexibility over optimization
- Evolutionary refinement over pursuit of initial perfection
Among the main benefits of SOA designs are scalability, agility, and the freedom to choose the technological stack used for individual services.
Scalability comes from creating smaller services that can be scaled horizontally.
Horizontal scaling means that you scale by adding more machines into your pool of resources whereas vertical scaling means that you scale by adding more power (CPU, RAM) to an existing machine.
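As an illustration, here is a toy capacity model in Python (the numbers are invented): vertical scaling raises the capacity of a single machine and eventually hits a hardware ceiling, while horizontal scaling simply adds machines to the pool.

```python
# A toy capacity model of the two scaling strategies (illustrative numbers).
def total_capacity(pool):
    """Total requests per second the pool of machines can serve."""
    return sum(pool)

start = [100]                      # one machine serving 100 req/s

# Vertical scaling: swap in a bigger machine (bounded by available hardware).
vertical = [400]

# Horizontal scaling: keep commodity machines and add more of them.
horizontal = [100, 100, 100, 100]  # the pool can keep growing

assert total_capacity(vertical) == 400
assert total_capacity(horizontal) == 400
```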
Agility means that the services can be deployed independently. A new version of a service can be deployed and tested by just a small subset of customers to make sure that everything works smoothly prior to a much wider deployment.
Each service can be built with a completely different technological stack. This allows various teams to work together on a final solution.
It also provides flexibility in hiring as we do not need to demand a specific programming language.
One risk that still prevails in SOA is centralized databases. Typically, among the supporting databases, there is one shared database that holds the business data. Such a database easily becomes a single point of failure and a potential performance bottleneck.
Another risk is centralized integration through an Enterprise Service Bus (ESB). The ESB decouples the services, but it is difficult to maintain and not very “agile”.
Cloud Native Application Development
How is Cloud Native Application Development different from the traditional model? It is not about infrastructure like public, private, or hybrid cloud. The core principles are in the evolution of architecture, platforms, tools, and processes.
The following table summarizes main differences between the traditional and Cloud Native model.
| Traditional | Cloud Native |
| --- | --- |
| Server-centric (physical or virtual) | Container-centric |
| Scale up vertically | Scale out horizontally |
| Tightly coupled monolith | Loosely coupled services |
| Infrastructure-dependent | Portable across infrastructure |
| Waterfall, semi-agile, long delivery | Agile with continuous delivery |
| Local IDEs and developer tools | Cloud-based, intelligent tools |
| Siloed teams (development, operations/IT, quality assurance, security, etc.) | Cross-departmental collaboration |
Let’s have a look at these differences in more detail.
Server-centric vs Container-centric. Concentrating on containers allows us to use heterogeneous (and even commodity) hardware to run and operate our services. The deployments are hardware- and operating-system-agnostic; they just rely on a container service that is available for practically anything. This makes any migration a piece of cake.
Scale up vertically vs horizontally. We have already mentioned the difference between these two types of scaling. The problem with vertical scaling is the limiting factor of available hardware. Even if we bought the most expensive machine in the world, there are simply limits on the number and speed of CPUs, the size of the addressable memory space, etc. In horizontal scaling, we simply add any available machine.
Tightly coupled monolith vs loosely coupled services. We have already mentioned the main disadvantages of monoliths like single technology stack, low error resilience, and difficulties with updates. On the other hand, loosely coupled services bring us the benefits of being able to switch the services, deploy multiple versions of the same service at the same time, upgrade clients gradually etc.
Infrastructure-dependent vs Portable across infrastructure. This is closely related to the first feature – container-centric. We simply do not need to code specifically for any architecture, network infrastructure or similar.
Waterfall vs Agile. Neither model strictly prescribes a project management methodology. In practice, however, the traditional model tends to use waterfall project management, while Cloud Native Development introduces small incremental changes in an agile manner.
Local vs Cloud-based tools. Cloud-based tools allow us to manage, deploy, and in some cases even develop the applications wherever we are. We just use any computer, log in to the system, and we are good to go. All developers can share the same environment and see the same thing in the same state.
Siloed teams vs Cross departmental collaboration. This is also related to the project management methodology chosen. However, Cloud Native Development is not only about technology. It also introduces cultural changes. All teams need to work closely and constructively together to build the best possible solution. There is no space for any friction in handovers.
Let’s investigate the four core principles of Cloud Native Development in more detail.
Service-based architecture builds on the principles of modularity and loose coupling (as mentioned in the section on SOA above). A typical representation of service-based architecture is the widely popular Microservices architecture.
The Microservice architectural pattern is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API (REST API).
These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
– Martin Fowler, Chief Scientist, ThoughtWorks [MF]
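As a minimal sketch of this idea, the following Python service (standard library only) exposes a single business capability over a tiny REST-style HTTP API. The order data and the endpoint path are invented for illustration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class OrderServiceHandler(BaseHTTPRequestHandler):
    """One small service, one business capability: order lookups."""

    def do_GET(self):
        if self.path == "/orders/42":
            body = json.dumps({"id": 42, "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the example output quiet

def serve(port=8080):
    """Each microservice runs in its own process and speaks plain HTTP."""
    HTTPServer(("127.0.0.1", port), OrderServiceHandler).serve_forever()
```

A second service would consume this purely through the HTTP interface, never through shared memory or a shared database.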
What has changed since the times of SOA? There has certainly been a huge technological evolution, leading to the wide adoption of public and private clouds. We have also drifted from physical machines, through virtual machines, to containers.
The decentralized approach also gained popularity as a way to remove single points of failure. Data is stored in multiple data stores, the services use lighter and simpler APIs, and the services integrate through those APIs. Organizations and teams reflect this in their cultural shifts as well.
Services are built to be immutable and externally configurable. They are stateless (which makes them easy to scale) and horizontally scalable. The services themselves are decoupled and independently deployable, so no assumptions are made about the existing environment and no strict rules on the order of deployment exist.
Individual services own the data relevant to them, and the implementation reflects the business domain being solved. Also, the services are technology independent, making it possible for any team of developers to start developing them (no matter what programming language is currently popular in the team).
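External configuration can be sketched in a few lines of Python: the service reads its settings from environment variables with sensible defaults, so the same immutable artifact runs unchanged in every environment. The variable names here are invented for illustration.

```python
import os

def load_config(env=None):
    """Build the service configuration from environment variables.

    Nothing environment-specific is baked into the code or the image;
    the platform injects the values at deployment time.
    """
    env = os.environ if env is None else env
    return {
        "db_url": env.get("ORDERS_DB_URL", "postgres://localhost/orders"),
        "port": int(env.get("ORDERS_PORT", "8080")),
        "log_level": env.get("ORDERS_LOG_LEVEL", "INFO"),
    }

# In production the platform might inject e.g. ORDERS_PORT=9000:
production = load_config({"ORDERS_PORT": "9000"})
```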
Microservices concentrate on addressing the business domain problem. While we want the microservices to be as simple as possible and allow polyglot implementations, we still need a wide variety of supporting functions for them to work successfully, including: security, scaling, configuration, resilience, observability, discovery, middleware, API interface etc.
The key to success is not reinventing the wheel. There are standardizations around these supporting functions that every microservice can easily reuse.
Among main benefits of microservices are:
- Faster time to market – as development cycles are shortened, a microservices architecture supports more agile deployment and updates.
- Highly scalable – as demand for certain services grows, you can easily scale them out across additional servers to meet the need.
- Resilient – the independent services, when built properly, do not impact each other. This means that if one service fails, the whole application does not go down.
- Freedom – developers have the freedom to choose the best language and technology for the necessary function.
Like with any other technology, nothing is a panacea and there are also some risks in the microservices world:
- Complexity – partitioning an application into independent services also means that there are more moving parts to maintain.
- Services communication – exchanging data over the network introduces latency and potential failures.
- Data consistency – as with any distributed architecture, ensuring data consistency is a challenge, both for the data store and for data in transit over the network.
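The communication risk in particular is usually mitigated in code: callers wrap network calls in timeouts and retries with exponential backoff. A generic sketch follows (real systems typically add jitter and circuit breakers; the flaky service is made up):

```python
import time

def call_with_retries(fn, retries=3, base_delay=0.1):
    """Call fn, retrying transient network failures with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise                              # give up after the last try
            time.sleep(base_delay * 2 ** attempt)  # 0.1s, 0.2s, ...

# A made-up service that fails twice before responding:
state = {"calls": 0}
def flaky_service():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("service temporarily unreachable")
    return "ok"

result = call_with_retries(flaky_service)  # succeeds on the third attempt
```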
What are containers? Containers are objects for holding or transporting something…
Containers are technologies that allow you to package and isolate applications with all the files necessary to run. They represent portable deployment and execution units.
Containers are a way to keep the individual services/deployments/applications isolated, while keeping the extra layers needed for that as thin as possible.
Before containers were introduced, the most typical approach was to use virtual machines. On top of a hardware platform, there was an operating system. The operating system ran a rather heavyweight hypervisor that allowed virtual machines to be created and executed. Each virtual machine then had its own operating system installed in it.
The target hardware platform then had to run many operating systems at once as well as the hypervisor itself.
With containers, there is only a thin layer of container engine on top of the operating system and each container uses the core services from that single operating system through the container engine. So there is always a single operating system on a single hardware platform. More memory and CPU power are then available to the application services. The container engine also keeps services securely separated.
The main benefits of containers are:
- It is possible to start, create, replicate or destroy containers in seconds.
- Improved developer productivity and development pipeline.
- Fast and smooth scaling.
- Effective isolation between containers.
- High resource efficiency and density.
- Platform independence.
Among the risks to keep an eye on are:
- Increased number of moving parts.
- Interdependencies between services.
- Complex infrastructure (we might need experts on various hardware platforms).
Distributed integration means that there are no shared data models/structures and no direct links between services and applications. All communication and data exchange happens in the form of either an API call, a message exchange or similar.
An Application Programming Interface (API) is a standardized interface to a software service or data that can be invoked at a distance over a communications network.
Of every service, only the tip of the iceberg, its API, is publicly available to other services. However, every API must be managed. This introduces new challenges in keeping every API well documented, secured, scalable, reliable, tested, monitored, and versioned. We also need to carefully control access and SLAs.
The core differentiators of distributed integration are a decentralized, self-service approach to application deployment, the use of open standards, and lighter data formats.
The right scenarios for distributed integration include microservice-to-microservice calls, calls to embedded or external devices or services, and bridging calls to existing monoliths.
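In miniature, distributed integration looks like this: service A depends only on service B’s documented wire format (a small JSON document), never on B’s internal data model, database, or technology stack. The inventory service and its response shape below are invented for illustration.

```python
import json

def is_available(raw_response: bytes) -> bool:
    """Service A's view of service B: just the documented JSON contract."""
    doc = json.loads(raw_response)
    return doc["in_stock"] and doc["quantity"] > 0

# What the (hypothetical) inventory service might return over its API:
response_body = json.dumps({"sku": "A-7", "in_stock": True, "quantity": 3})
available = is_available(response_body.encode())
```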
Infrastructure as Code
We want our infrastructure to be repeatable, declarative, and reproducible. We want to keep recipes for the infrastructure and be able to pull out a recipe and have the infrastructure ready within minutes in the exact same state as always.
This is why we call it Infrastructure as Code – we have scripts (recipes) to create the infrastructure.
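The principle can be sketched in a few lines of Python: the desired infrastructure is a declarative recipe kept under version control, and a reconciler drives the actual state toward it. Real tools (Terraform, Ansible, Kubernetes) work on the same principle; the resource names here are invented.

```python
# The "recipe": a declarative description of what should exist.
desired = {
    "web": {"replicas": 3, "image": "shop-web:1.4"},
    "db":  {"replicas": 1, "image": "postgres:15"},
}

def reconcile(actual, desired):
    """Return the state after applying the recipe.

    Additions, updates, and removals are all derived from the declaration;
    running it twice from any starting state yields the same result.
    """
    return {name: dict(spec) for name, spec in desired.items()}

actual = {"web": {"replicas": 1, "image": "shop-web:1.3"}}
actual = reconcile(actual, desired)  # infrastructure now matches the recipe
```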
Such an infrastructure enables two important principles: Continuous Integration (CI) and Continuous Delivery (CD).
Every single change to an application service is automatically tested and deployed. The goal is to demonstrate that the code is production ready. Due to the recipes for infrastructure, the CI/CD execution pipelines are able to create ephemeral (short-lived) testing environments very similar to the production environments.
Any release can then happen at the push of a button (as can a safe rollback, should anything go wrong).
An inability to deploy new code must not cause a production outage. We need all changes and artefacts to be versioned and reversible (including the infrastructure recipes).
A lack of automation turns servers into snowflakes: unique and fragile. Configuration drifts over time, engineers introduce manual hotfixes, and releases become more and more painful.
Our goal should be to automate all the things, so that the servers can be easily recovered from any failure instantly.
At the end of the CD pipeline a couple of things can happen. Just migrating all users to the newly deployed service version does not seem like the best idea. Therefore, we use several techniques to make the transition smooth and safe.
Canary deployments are used to verify that an update does not break anything. They work like real canaries in coal mines. An update is deployed to a limited (often single) container and only specific users (friends, family, partners, etc.) are directed to this deployment.
When nothing fails, we can gradually switch to a general deployment. And we can easily roll back in case of trouble.
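The routing logic behind a canary deployment can be sketched like this: a small, configurable share of traffic goes to the new version, and hashing the user id keeps each user on the same side across requests. The percentage and the function names are invented.

```python
import hashlib

def route(user_id: str, canary_percent: int = 5) -> str:
    """Deterministically send roughly canary_percent of users to the canary."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_percent else "stable"

# Rolling forward is just a configuration change (5 -> 25 -> 100),
# and rolling back means setting canary_percent to 0 while v1 still runs.
```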
Blue-green releases are used to take the last step of testing into real production. Let’s say all traffic runs on the older, blue environment. You deploy the new update to a green environment and perform all the testing there. Once everything is ready, all the traffic is directed to the new green environment, keeping the older blue environment intact and idle. Should anything fail, we can easily roll back to the blue environment. If everything works smoothly, we can safely remove the blue environment.
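In code, the blue-green cut-over is essentially a single pointer flip at the router level. A minimal sketch (the environment contents are invented):

```python
class Router:
    """Two complete environments; one router setting decides which is live."""

    def __init__(self):
        self.environments = {
            "blue":  "shop:1.3",   # currently serving all traffic
            "green": "shop:1.4",   # freshly deployed and fully tested
        }
        self.active = "blue"

    def switch(self, target):
        if target not in self.environments:
            raise ValueError("unknown environment: %s" % target)
        self.active = target       # instant cut-over; the old env stays intact

router = Router()
router.switch("green")             # release: all traffic now hits shop:1.4
# router.switch("blue")            # rollback is the same one-line operation
```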
A/B testing is used to experiment with new features. We may want to see how a test group of users (family, friends, partners, etc.) reacts to new changes before we introduce them to all users. We can see whether users get confused, behave as expected, trigger the needed actions, etc. After a final assessment of the results, the changes may be promoted globally or discarded.
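The assignment logic behind an A/B test can be sketched similarly: users are deterministically split into variants and a metric is recorded per group. The variant names and the metric are invented for illustration.

```python
import hashlib
from collections import Counter

def variant(user_id: str) -> str:
    """Deterministically assign a user to one of two checkout variants."""
    digest = int(hashlib.sha256(user_id.encode()).hexdigest(), 16)
    return "new_checkout" if digest % 2 == 0 else "old_checkout"

purchases = Counter()

def record_purchase(user_id: str) -> None:
    purchases[variant(user_id)] += 1   # compared across groups afterwards

for user in ["alice", "bob", "carol"]:
    record_purchase(user)
```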
New Development Culture
Are you Cloud Native? You have all the right tools in your toolbox, so you should be, right? Unfortunately, it is not only the technology that matters.
Applying the culture, practices, processes and technologies of the Internet-era to respond to people’s raised expectations.
– Tom Loosemore, Digital Leadership Consultant [TL]
Tools influence the culture, the culture influences the tools.
– Adam Jacob, CTO, Chef
It is crucial to create an organization that ignites passion and performance. Where people have their WHY and motivation in shared purpose and passion, where employees are heavily engaged. HOW is such an organization managed? Typically through meritocracy and by getting things done. WHAT does such an organization do at the top? The leaders typically set strategic direction based on decisions that were made inclusively together with the employees.
DevOps is a cultural and professional movement, focused on how we build and operate high velocity organizations, born from the experiences of its practitioners.
– Adam Jacob, CTO, Chef
It is important to bear that in mind. If we kept siloed organizations where problems are thrown over the fence, where people are motivated by threats, and where higher ranks are always right, we could never fully embrace Cloud Native Development.
That is because, according to Conway’s Law, any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.
To successfully perform the Cloud Native transformation, we highly recommend applying the so-called CALMS principles of DevOps:
- Culture. Before silos can be torn down, there needs to be a culture of shared responsibility, or at least a group of people devoted to establishing that culture in a grassroots type of way, with management approval and support.
- Automation. Similar to the technical practices centered around continuous delivery mentioned above, teams undertaking a DevOps transformation should be devoted to automating as many manual tasks as possible, especially with respect to continuous integration and test automation.
- Lean. Development teams are making use of lean principles to eliminate waste and optimize the value stream, such as minimizing WIP, making work visible, and reducing hand-off complexity and wait times.
- Measurement. The organization is devoted to collecting data on their processes, deployments, etc., in order to understand their current capabilities and where improvements could be achieved.
- Sharing. A culture of openness and sharing within and between teams (and enabled with the proper tools) keeps everyone working toward the same goals and eases friction with hand-offs when issues arise.
Core company values supporting these principles clearly define expected and model behavior worth following. Such values could be openness, transparency, responsibility, and similar.
Transparency can be very well achieved through collaborative work management like Lumeer. Especially when companies go through digital transformation, they seek tools that show the progress of work performed by cross-functional teams.
Overall, the combined benefits of Cloud Native Application Development lead to higher uptime and a better user experience. Compared to traditional methods and systems, applications can be built faster, problems are rectified sooner, and all-around tasks require less effort.
Teams are able to communicate problems faster and more easily. Microservices enable small and regular updates to applications through CI/CD. Containers allow for improved scalability in the cloud and faster resolutions to issues.
If you have not already started with Cloud Native Application Development, you should seriously consider it for your next project.
Cloud Native Development also allows companies to build and run their software in a public cloud and share the benefits with others (typically for small payments in the form of subscriptions). This is the so-called Software as a Service (SaaS) model, and you certainly know many such services, like Slack, GMail, Taskeo, etc.
This comprehensive guide to cloud native development was delivered to you by Lumeer.
Lumeer allows you to plan, organize, and track work in one visual platform.