This article was published as a part of the Data Science Blogathon.
Let's go back in time and discuss the history of containers and virtualization. It's important to bear in mind that containers can exist because machines can be virtualized; I will reinforce this when we talk about the differences between the host in a virtualized versus a containerized environment. Both are valid approaches, each suited to different project types.
The history begins with the technology that introduced virtualization: the V-Server project, developed by Jacques Gélinas in 2002, which made it possible to run several Linux servers on one box with independence and security. In the same year, the Linux kernel received an important update that, years later, would make it possible to build containers as we know them today. This update introduced namespaces. With namespaces, a process can run isolated from others while still using global resources, and containers rely on them in their core functionality.
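Since namespaces are a kernel feature, you can inspect them on any modern Linux system. The listing below is a minimal sketch; the exact set of entries depends on your kernel version:

```shell
# Every Linux process runs inside a set of kernel namespaces,
# exposed as symlinks under /proc/<pid>/ns.
ls /proc/self/ns

# Typical entries include pid, net, mnt, uts, ipc, and user.
# Each one isolates a different class of global resource;
# container engines combine them to give each container its
# own view of the system.
```

This is the same mechanism a container engine uses under the hood: it starts a process in a fresh set of namespaces so the process sees its own process tree, network stack, and mount table.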
To understand containers, we will explain the basics of how this architecture works and how it differs from virtual machines.
A virtual machine is a layer between the hardware and an emulated operating system that runs applications. We will use web apps as an example; in the image below, we can see how this architecture works.
Hosting providers use this model to serve applications by sharing the resources of a single physical server among several virtualized operating systems. In the image above, we can see the key component responsible for installing different guest OSes on top of the host OS: the hypervisor.

This technology reduces infrastructure costs by optimizing hardware resource allocation, and it provides the environment that makes containers possible.
Containers, as seen above, run isolated processes on the same machine. They use fewer resources than virtual machines to bring up an application environment, because the container engine runs on top of the OS layer and isolates individual resources instead of emulating an entire OS. One example of this architecture is Docker, shown in the image below.
There is much to say about Docker, but the Docker image is its main component; images can be pushed to an artifact repository such as DockerHub or a private enterprise registry.
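As an illustration, a Dockerfile describes how an image is built. The sketch below is hypothetical (the base image, file names, and app are assumptions, not part of the original article):

```dockerfile
# Start from an official base image (illustrative choice).
FROM python:3.11-slim

# Copy the application code into the image.
WORKDIR /app
COPY . .

# Bake the dependencies into the image at build time.
RUN pip install --no-cache-dir -r requirements.txt

# Command the container runs when it starts.
CMD ["python", "app.py"]
```

Once built with `docker build -t myorg/myapp:1.0 .` (names here are placeholders), the resulting image can be pushed to DockerHub or a private registry with `docker push myorg/myapp:1.0`.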
Docker needs an orchestrator to manage images, networks, and many other features. Here we introduce Kubernetes, or K8s (eight letters between the K and the S). Kubernetes incorporates Docker into its architecture, providing an easy way to use the containers and services that containerization gives us. In the image above, we can see the structure of K8s.
We can see that K8s incorporates Docker in a layer responsible for running containers inside a Pod. A Pod is a set of one or more Linux containers and is the basic unit of K8s.
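A minimal Pod manifest sketch shows how a Pod wraps a container; the names and port below are illustrative assumptions:

```yaml
# pod.yaml -- a Pod with a single container (illustrative names).
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: web
      image: myorg/myapp:1.0   # the container image the Pod runs
      ports:
        - containerPort: 8080
```

Applied with `kubectl apply -f pod.yaml`, Kubernetes schedules the Pod onto a node, where the container engine pulls and runs the image.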
This tutorial aimed to introduce the concept of containerization, and we can conclude that the background of Kubernetes and Docker is not hard to understand. Now you can check the references, explore further, and learn how to migrate or build your systems on the containerized side, which will make them fast to scale and distribute to your customers.
In this tutorial, you learned: