Cloud-Native vs. DevOps

To map out the gap, and the similarities, between DevOps and Cloud-Native, we first need to know how each of them came about, what basic role each plays, and where each is making a difference in the tech domain.

The first revolution was the creation of the cloud; the second was the dawn of DevOps; the third was the coming of containers.

Together, these three revolutions form the new software world of Cloud-Native.

History

Instead of buying a computer, you buy compute. Instead of sinking large amounts of capital into physical machinery, which is hard to scale, breaks down mechanically, and rapidly becomes obsolete, you simply buy time on someone else’s computer and let them take care of the scaling, maintenance, and upgrading.

But cloud computing isn't only about renting raw resources to run your own software and save capital. Increasingly, you also rent cloud services: essentially, the use of someone else's software. This is sometimes called software as a service, or SaaS.

Infrastructure as a Service

When you use cloud infrastructure to run your own services, what you’re buying is infrastructure as a service (IaaS). It’s just a commodity, like electricity or water. Cloud computing is a revolution in the relationship between business and IT infrastructure.

The cloud also allows you to outsource the software that you don’t write: operating systems, databases, clustering, replication, networking, monitoring, high availability, queue and stream processing, and all the myriad layers of software and configuration that span the gap between your code and the CPU. Managed services can take care of almost all of this undifferentiated heavy lifting for you.

The Dawn of DevOps

Earlier, developing and operating software were essentially two separate jobs, performed by two different groups of people. Developers wrote software, and they passed it on to operations staff, who ran and maintained the software in production.

Developers tend to be focused on shipping new features quickly, while operations teams care about making services stable and reliable over the long term.

“The system” is no longer just your software: it comprises in-house software, cloud services, network resources, load balancers, monitoring, content distribution networks, firewalls, DNS, and so on. All these things are intimately interconnected and interdependent. The people who write the software have to understand how it relates to the rest of the system, and the people who operate the system have to understand how the software works—or fails.

The origins of the DevOps movement lie in attempts to bring these two groups together: to collaborate, to share understanding, to share responsibility for systems reliability and software correctness, and to improve the scalability of both the software systems and the teams of people who build them.

Business Advantage

DevOps has been described as “improving the quality of your software by speeding up release cycles with cloud automation and practices, with the added benefit of software that actually stays up in production”.

Speed, agility, collaboration, automation, and software quality are key goals of DevOps, and for many companies that means a major shift in mindset.

Infrastructure as Code

The DevOps movement brings software development skills to operations: tools and workflows for rapid, agile, collaborative building of complex systems. Inextricably entwined with DevOps is the notion of infrastructure as code.

Cloud infrastructure can be automatically provisioned by software. Instead of manually deploying and upgrading hardware, operations engineers have become the people who write the software that automates the cloud.
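As a concrete taste of infrastructure as code, here is a minimal sketch using Pulumi's Go SDK (one of several IaC tools; Terraform and CloudFormation are alternatives). The cloud provider and bucket name here are illustrative assumptions, not part of any particular setup:

```go
package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v5/go/aws/s3"
	"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Declare a piece of cloud infrastructure (an AWS S3 bucket) in plain Go.
		// Running `pulumi up` compares this desired state with reality and
		// provisions or updates the real resource to match.
		bucket, err := s3.NewBucket(ctx, "my-bucket", nil) // illustrative name
		if err != nil {
			return err
		}
		ctx.Export("bucketName", bucket.ID())
		return nil
	})
}
```

Because the definition is ordinary code, it lives in version control and can be reviewed, tested, and rolled back like any other software.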

The traffic isn’t just one-way. Developers are learning from operations teams how to anticipate the failures and problems inherent in distributed, cloud-based systems, how to mitigate their consequences, and how to design software that degrades gracefully and fails safely.

The massive scale of the cloud and the collaborative, code-centric nature of the DevOps movement have turned operations into a software problem. At the same time, they have also turned software into an operations problem.

Coming of Containers

To deploy a piece of software, you need not only the software itself but its dependencies.

Some languages provide their own packaging mechanism, like Java's JAR files, Python's eggs, or Ruby's gems. However, these are language-specific and don't entirely solve the dependency problem: you still need a Java runtime installed before you can run a JAR file, for example.

An omnibus package, as the name suggests, attempts to cram everything the application needs into a single file: the software itself, its configuration, its dependent software components, their configuration, their dependencies, and so on.

Some vendors have even gone a step further and included the entire computer system required to run it, as a virtual machine image, but these are large and unwieldy, time-consuming to build and maintain, fragile to operate, slow to download and deploy, and vastly inefficient in performance and resource footprint.

Putting Software in Containers

The container format contains everything the application needs to run, baked into an image file that can be executed by a container runtime.

Because a virtual machine image contains lots of unrelated programs, libraries, and things that the application will never use, most of its space is wasted. Transferring VM images across the network is also far slower than transferring optimized containers.

Even worse, virtual machines are virtual: the underlying physical CPU effectively implements an emulated CPU for the virtual machine to run on. This virtualization layer has a dramatic, negative effect on performance: in tests, virtualized workloads run about 30% slower than the equivalent containers. Containers, by contrast, run directly on the real CPU with no virtualization overhead, just as ordinary binary executables do.

If you have two containers, each derived from the same Debian Linux base image, the base image only needs to be downloaded once, and each container can simply reference it. The container runtime will assemble all the necessary layers and only download a layer if it’s not already cached locally. This makes very efficient use of disk space and network bandwidth.
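You can watch that layer caching from code, too. Here is a minimal sketch using Docker's official Go SDK, assuming a local Docker daemon is running; for layers already in the local cache, the progress stream simply reports "Already exists" instead of downloading them again:

```go
package main

import (
	"context"
	"io"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// Connect to the local Docker daemon using environment defaults.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}

	// Pull a Debian base image; cached layers are skipped, not re-downloaded.
	reader, err := cli.ImagePull(context.Background(), "docker.io/library/debian:stable", types.ImagePullOptions{})
	if err != nil {
		panic(err)
	}
	defer reader.Close()

	io.Copy(os.Stdout, reader) // stream pull progress to the terminal
}
```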

Plug and Play Applications

Not only is the container the unit of deployment and the unit of packaging; it is also the unit of reuse, the unit of scaling, and the unit of resource allocation (a container can run anywhere sufficient resources are available for its own specific needs).

Developers no longer have to worry about maintaining different versions of the software to run on different Linux distributions, against different libraries and language versions, and so on. The only thing the container depends on is the operating system kernel (Linux, for example). Simply supply your application in a container image, and it will run on any platform that supports the standard container format and has a compatible kernel.
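For example, here is the kind of minimal, self-contained service that containers suit so well; compiled as a static Go binary and baked into an image, it needs nothing from the host except a container runtime and a compatible kernel (the port and message are arbitrary choices for this sketch):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// A tiny HTTP service with no dependencies outside the standard library.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from a container!")
	})
	// Listen on port 8080 (arbitrary choice for this sketch).
	http.ListenAndServe(":8080", nil)
}
```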

Container Orchestrator

Operations teams, too, find their workload greatly simplified by containers. Instead of having to maintain a sprawling estate of machines of various kinds, architectures, and operating systems, all they have to do is run a container orchestrator: a piece of software designed to join many different machines into a cluster, a kind of unified compute substrate that appears to the user as a single very powerful computer on which containers can run.

  • Orchestration in this context means coordinating and sequencing different activities in service of a common goal (like the musicians in an orchestra).

  • Scheduling means managing the resources available and assigning workloads where they can most efficiently be run (a toy sketch of this idea follows this list).

  • Cluster management: joining multiple physical or virtual servers into a unified, reliable, fault-tolerant, apparently seamless group.

  • A container orchestrator usually refers to a single service that takes care of scheduling, orchestration, and cluster management.

  • Containerization (using containers as your standard method of deploying and running software) offered obvious advantages, and a de facto standard container format has made possible all kinds of economies of scale.
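To make the scheduling idea from the list above concrete, here is a toy sketch in Go. A real scheduler (like the one in Kubernetes) weighs far more factors, such as memory, affinity, and spread, but the core job of "find a machine with room for this workload" looks roughly like this:

```go
package main

import "fmt"

// node models a cluster machine with some free CPU capacity, in millicores.
type node struct {
	name    string
	freeCPU int
}

// schedule assigns each workload to the node with the most free CPU that can
// fit it, reserving that capacity as it goes. A toy version of what an
// orchestrator's scheduler does.
func schedule(nodes []node, workloads map[string]int) map[string]string {
	placement := make(map[string]string)
	for name, cpu := range workloads {
		best := -1
		for i := range nodes {
			if nodes[i].freeCPU >= cpu && (best == -1 || nodes[i].freeCPU > nodes[best].freeCPU) {
				best = i
			}
		}
		if best == -1 {
			placement[name] = "unschedulable" // no node has enough room
			continue
		}
		nodes[best].freeCPU -= cpu // reserve capacity on the chosen node
		placement[name] = nodes[best].name
	}
	return placement
}

func main() {
	nodes := []node{{"node-a", 2000}, {"node-b", 4000}}
	fmt.Println(schedule(nodes, map[string]int{"web": 500, "db": 1500}))
}
```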

From Borg to Kubernetes

To solve the problem of running a large number of services at a global scale on millions of servers, Google developed a private, internal container orchestration system called Borg.

Borg is essentially a centralized management system that allocates and schedules containers to run on a pool of servers. While very powerful, Borg is tightly coupled to Google’s own internal and proprietary technologies, difficult to extend, and impossible to release to the public.

In 2014, Google founded an open-source project named Kubernetes that would develop a container orchestrator that everyone could use, based on the lessons learned from Borg and its successor, Omega.
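If you are curious what talking to a Kubernetes cluster from Go looks like, here is a minimal sketch using the official client-go library. It assumes you already have a cluster and a kubeconfig in the default location (~/.kube/config), and it simply lists the Pods in the default namespace:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load cluster credentials from the default kubeconfig (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List Pods in the "default" namespace and print each name and phase.
	pods, err := clientset.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		fmt.Println(pod.Name, string(pod.Status.Phase))
	}
}
```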

If you want to learn more about Kubernetes specifically, its infrastructure, control plane, Pods, schedulers, and so on, have a look at my other blog:

Cloud Native

The term cloud native has become an increasingly popular shorthand way of talking about modern applications and services that take advantage of the cloud, containers, and orchestration, often based on open-source software.

The Cloud Native Computing Foundation (CNCF) was founded in 2015 to, in their words, “foster a community around a constellation of high-quality projects that orchestrate containers as part of a microservices architecture.”

Part of the Linux Foundation, the CNCF exists to bring together developers, end users, and vendors, including the major public cloud providers. The best-known project under the CNCF umbrella is Kubernetes itself, but the foundation also incubates and promotes other key components of the cloud native ecosystem: Prometheus, Envoy, Helm, Fluentd, gRPC, and many more.

Cloud-native applications run in the cloud; that’s not controversial. But just taking an existing application and running it on a cloud computing instance doesn’t make it cloud-native. Neither is it just about running in a container, or using cloud services such as Azure’s Cosmos DB or Google’s Pub/Sub, although those may well be important aspects of a cloud-native application.

Good cloud-native service design consists of making wise choices about how to separate the different parts of your architecture. However, even a well-designed cloud-native application is still a distributed system, which makes it inherently complex, difficult to observe and reason about, and prone to failure in surprising ways.

While cloud native systems tend to be distributed, it’s still possible to run monolithic applications in the cloud, using containers, and gain considerable business value from doing so.

Future of Operations

Networks and container orchestrators are complicated. Every team developing cloud-native applications will need operations skills and knowledge. Automation frees up staff from boring, repetitive manual work to deal with more complex, interesting, and fun problems that computers can’t yet solve for themselves.

In a software-defined world, the ability to write, understand, and maintain software becomes critical.

Distributed DevOps

Each development team will need at least one ops specialist, responsible for the health of the systems or services the team provides. She will be a developer, too, but she will also be the domain expert on networking, Kubernetes, performance, resilience, and the tools and systems that enable the other developers to deliver their code to the cloud.

Thanks to the DevOps revolution, there will no longer be room in most organizations for devs who can’t ops, or ops who don’t dev. The distinction between those two disciplines is obsolete and is rapidly being erased altogether. Developing and operating software are merely two aspects of the same thing.

Developer Productivity Engineering

Self-service has its limits, though, and the aim of DevOps is to speed up development teams, not slow them down with unnecessary and redundant work.

Instead of calling this team operations, we like the name developer productivity engineering (DPE). DPE teams do whatever’s necessary to help developers do their work better and faster: operating infrastructure, building tools, and busting problems.

While developer productivity engineering remains a specialist skill set, the engineers themselves may move outward into the organization to bring that expertise where it’s needed.

At this point, not every developer can be an infrastructure expert, just as a single team of infrastructure experts can’t service an ever-growing number of developers. For larger organizations, while a central infrastructure team is still needed, there’s also a case for embedding site reliability engineers (SREs) into each development or product team.

I hope you got what you came here for from this blog 😄. Soon I will be releasing more blogs explaining some really amazing tools to automate DevOps functions, to build and deploy applications, and to show how Golang connects and acts as a bridge in this Cloud-Native world 💻🚀.

If you liked my article, please react to it, and connect with me on Twitter if you are also a tech enthusiast. I would love to collaborate with people and share experiences in tech 😄😄.

My Twitter Profile:

Aryan_2407
