The natural growth of this successful project, together with the dynamism and mobility that today’s information technology demands, has led us to move away from a monolithic model running on on-premises virtual machines toward cloud computing based on containerized microservices.
This is where Kubernetes comes in: a container orchestrator built on Google’s years of experience running production workloads, which has helped us manage the entire lifecycle of our platform.
At a very high level, Kubernetes (κυβερνήτης, Greek for “helmsman” or “captain”) is an open-source system that can run on practically any kind of machine, called a Node. Running Kubernetes across several machines gives you a so-called Cluster. Our Pods run on those Nodes and make up our applications. Finally, each Pod can be composed of one or more Containers, the services that form the application.
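These concepts map directly onto Kubernetes manifests. As a minimal sketch (the name, image, and port below are illustrative assumptions, not our real services), a Pod is declared like this:

```yaml
# A minimal Pod: the smallest deployable unit in Kubernetes.
# The scheduler places this Pod on one Node of the Cluster.
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod            # hypothetical name
spec:
  containers:                # one or more Containers per Pod
    - name: web
      image: nginx:1.25      # illustrative image
      ports:
        - containerPort: 80
```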
Some of its main features are:
Kubernetes encourages application developers to write code as decoupled microservices, avoiding the problems of large monolithic applications while keeping the overall functionality unchanged. Each microservice can then be developed independently by a team specialized in that service and its technologies, such as PHP or .NET.
Pods are capable of running multiple containers, which is particularly useful when taking the microservices philosophy to the extreme. In one of our applications, for example, we distinguish between web, PHP, queue, and logging services.
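A multi-container Pod along those lines might look like the sketch below. The split loosely mirrors the services mentioned above, but the names and images are assumptions for illustration, not our actual configuration:

```yaml
# Sketch of a multi-container Pod: containers in the same Pod share
# a network namespace (localhost) and can share volumes.
apiVersion: v1
kind: Pod
metadata:
  name: app-pod                       # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25               # serves HTTP, proxies to PHP-FPM
    - name: php
      image: php:8.2-fpm              # runs the application code
    - name: logging
      image: fluent/fluent-bit:2.2    # sidecar shipping container logs
```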
Kubernetes makes it easy to horizontally scale the number of Pods according to the application’s needs, and it efficiently decides which Node each Pod should run on based on the resources available across the Cluster.
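Horizontal scaling can even be automated with a HorizontalPodAutoscaler, which adjusts the Pod count of a workload based on observed load. A minimal sketch, assuming a hypothetical Deployment named `web` and CPU-based scaling:

```yaml
# Scales the "web" Deployment between 2 and 10 replicas,
# targeting an average CPU utilization of 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical target Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```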
A Kubernetes Deployment creates new Pods from a newer Docker image and ensures they are running and healthy before destroying the old ones. Should anything go wrong, we can always roll back and smoothly return to the previous state.
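This behavior is configured through the Deployment’s rolling-update strategy. A sketch, with illustrative names, image tags, and health-check path:

```yaml
# Rolling update: new Pods must pass their readiness probe before
# old Pods are removed, so capacity never drops during a release.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # hypothetical name
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never drop below the desired replica count
      maxSurge: 1            # create at most one extra Pod at a time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0   # newer image
          readinessProbe:
            httpGet:
              path: /healthz                    # assumed health endpoint
              port: 80
```

If the release misbehaves, `kubectl rollout undo deployment/web` returns to the previous revision.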
It works on my machine (and yours)
The resources in our cluster are declared in version-controlled code, so the same configuration files produce the same results every time they are applied. This immediately speeds up the process of building, testing, and releasing software. Our current CI/CD pipeline uses Docker registries and Helm charts to package our services for Kubernetes.
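As a rough sketch of how Helm fits into such a pipeline (chart layout, value names, and the registry URL are assumptions, not our actual setup), the image tag is typically parameterized in the chart and overridden per build:

```yaml
# values.yaml (excerpt): defaults that the CI pipeline overrides per build
image:
  repository: registry.example.com/web   # hypothetical registry
  tag: "1.0.0"

# templates/deployment.yaml would then reference these values, e.g.:
#   image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
#
# A pipeline step might release the freshly pushed image with:
#   helm upgrade --install web ./chart --set image.tag=$CI_COMMIT_SHA
```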
As a result, delivery cycles are shorter and safer, and repetitive manual tasks are reduced. The development team can focus on developing, which increases QA reliability while consuming fewer resources.
Pedagoo thus becomes a modern, modular, flexible, maintainable, extensible, stable, and efficient platform, fully prepared for future technologies and market challenges. Would you like to take a turn at the Kubernetes helm?
Are you interested in our project? Are you looking for a new challenge, and do you have the technical skills? Don’t hesitate to contact us! We are looking for people at different levels and positions.