
Introduction to Kubernetes

In the previous chapter, we studied serverless frameworks, created serverless applications using these frameworks, and deployed these applications to the major cloud providers.

As we have seen in the previous chapters, Kubernetes and serverless architectures started to gain traction in the industry at the same time. Kubernetes achieved a high level of adoption and became the de facto container management system, thanks to its design principles of scalability, high availability, and portability. For serverless applications, Kubernetes provides two essential benefits: removal of vendor lock-in and reuse of services.

Kubernetes creates a layer of abstraction over the infrastructure to remove vendor lock-in. Vendor lock-in is a situation where the transition from one service provider to another is very difficult or even infeasible. In the previous chapter, we studied how serverless frameworks make it easy to develop cloud-agnostic serverless applications. Let's assume you are running your serverless framework on an AWS EC2 instance and want to move to Google Cloud. Although your serverless framework creates a layer between the cloud provider and your serverless applications, you are still deeply tied to the cloud provider for the infrastructure. Kubernetes breaks this connection by creating an abstraction between the applications and the cloud provider's infrastructure. In other words, serverless frameworks running on Kubernetes are unaware of the underlying infrastructure. If your serverless framework runs on Kubernetes in AWS, it is expected to run unchanged on Google Cloud Platform (GCP) or Azure.
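
As a minimal sketch of this portability, the commands below assume you have two kubectl contexts pointing at clusters on different providers; the context names and the manifest file are hypothetical and used only for illustration. The point is that the same manifest is applied to both clusters without any changes:

    # Hypothetical context names and manifest file, for illustration only.
    kubectl config use-context aws-cluster          # switch to the cluster running on AWS
    kubectl apply -f serverless-framework.yaml      # deploy the serverless framework

    kubectl config use-context gcp-cluster          # switch to the cluster running on GCP
    kubectl apply -f serverless-framework.yaml      # identical manifest, no changes needed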

As the de facto container management system, Kubernetes manages most microservice applications in the cloud and in on-premises systems. Let's assume you have already converted your big monolithic application into cloud-native microservices and you are running them on Kubernetes. Now you have started developing serverless applications, or turning some of your microservices into serverless nanoservices. At this stage, your serverless applications will need to access your data and the other services. If you can run your serverless applications in your Kubernetes clusters, you will have the chance to reuse these services and stay close to your data. Besides, it will be easier to manage and operate both microservices and serverless applications together.

Since Kubernetes offers a solution to vendor lock-in and enables the reuse of data and services, it is crucial to learn how to run serverless architectures on it. This chapter starts with a recap of the origin and design of Kubernetes. Following that, we will install a local Kubernetes cluster, and you will access it by using the dashboard or a client tool such as kubectl. We will then discuss the building blocks of Kubernetes applications, and finally, we will deploy a real-life application to the cluster.
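
As a quick preview, assuming a local cluster is already running and your kubeconfig points at it, a few kubectl commands are enough to verify that you can reach the cluster; the installation and these commands are covered in detail later in the chapter:

    kubectl cluster-info                  # address of the control plane and cluster services
    kubectl get nodes                     # list the nodes that form the cluster
    kubectl get pods --all-namespaces     # Pods running in the cluster, including system Pods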
