Evolution of application deployment over the past 20 years.
Configure your local and remote lab environments.
Covers the resource types that are included with Kubernetes.
• Pod
• Job
Using Helm to manage Kubernetes resources.
Example microservice application.
Kubernetes manifests to deploy the demo application.
Explore how custom resources can add functionality.
Install additional software to enhance the deployment.
Improving the DevX when working with Kubernetes.
How to safely upgrade your clusters and nodes.
Implement CI/CD for your applications (with GitOps!).
So far, we've mostly been using capabilities that are built into Kubernetes itself, with some additional functionality provided by drivers for those common container interfaces. Now let's take a look at the ways in which you can extend the Kubernetes API to adapt the system to the needs of your particular application.

When people talk about Kubernetes, the first thing that comes to mind is that it is a container orchestrator: it allows you to take containerized workloads and deploy and schedule them across a number of different compute resources. However, it's not just a container orchestrator. Kubernetes effectively provides an API for declaring a set of resources. You send those resources to the Kubernetes API server, it stores them in etcd, and it makes sure that the state of those resources is shared across all the control plane nodes and accessible via the API. The second piece that Kubernetes provides is the concept of a control loop that is continuously observing and acting upon those resources.

Let's take some of the built-in resources that we learned about in module 4 and think about how this plays out in those cases. If I create a deployment, I have my deployment.yaml file defining a whole bunch of configuration for that deployment. I send it to the Kubernetes API with the kubectl apply command, it gets stored in etcd, and then that second part kicks in. A Kubernetes controller looks at that Deployment and says, "This is a Deployment, so I need to create a corresponding ReplicaSet," and it does so. Then another controller looks at that ReplicaSet and says, "This is a ReplicaSet, so I know what I need to do: create a number of Pods to go along with it." The behaviors the system should take for those built-in resources are all provided by controllers that come with Kubernetes out of the box.

However, you can define your own custom resources with whatever schema you want, tell Kubernetes how to interpret that schema, and it will happily accept those custom resources, store them in etcd, and maintain their state. You can then write your own applications (in this context we call them controllers) which query the Kubernetes API to find out which of those custom resources you have deployed into your cluster and take whatever action you want.

This may sound a little bit abstract, so let's break it down like we did for the built-in resources, with a couple of use cases. One example is a project called CloudNativePG, which provides a system for deploying PostgreSQL databases onto Kubernetes; we're going to take a look at it in the following section and actually deploy it into our cluster. What they've done is create a set of custom resources associated with Postgres clusters, backups of Postgres, and so on, and then encode the logic that a human database admin would otherwise perform into a custom controller, such that you can execute many common workflows in a declarative fashion by deploying these custom resources.
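To make that concrete, here's roughly what a minimal CloudNativePG Cluster resource looks like. This sketch follows the shape shown in the project's documentation, but treat the exact names and values as illustrative:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db        # hypothetical cluster name
spec:
  instances: 3            # the controller creates and manages three Postgres instances
  storage:
    size: 1Gi             # persistent storage requested for each instance
```

You apply this like any other resource, and the CloudNativePG controller takes care of the work a database admin would otherwise do by hand: bootstrapping the cluster, configuring replication, handling failover, and so on.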
Another great example is the management of TLS certificates. There's a project called cert-manager, which I'm not covering in this course, but I would certainly suggest you take a look at it, and it uses custom resources for managing certificates. They have one for certificates, one for, let's say, an HTTP challenge, and one for a certificate issuer that is pointed at Let's Encrypt, for example. Alongside those custom resources, they have also implemented a controller such that when you deploy the resources to your cluster, that controller is able to go off and take action, such as provisioning a new certificate and storing those credentials in a Kubernetes Secret (there's a sketch of one of these resources after the definition example below).

Another great example is a project called Crossplane. It enables you to deploy infrastructure simply by creating custom resources in Kubernetes. They have what are known as managed resources, which map one-to-one with resources at an infrastructure provider. If you're deploying something on AWS, maybe you have an EC2 instance custom resource that lives in your cluster. The Crossplane controller sees that, interfaces with the AWS API, and actually creates the corresponding instance in your AWS account with all the configuration you've provided (a sketch of that follows below as well).

As you can see, these three use cases are incredibly varied, and hopefully that sparks the idea that you can take this pattern, defining a set of resources and then writing an application or controller to observe and act upon those resources, and apply it to all sorts of different scenarios. There are many different projects that provide the tooling required to build these types of controllers. Some of the most popular ones are Kubebuilder, Operator SDK, and Metacontroller, or you can use the Kubernetes client libraries provided for many popular languages to bootstrap your own. If you want to get started building your own Kubernetes operators, Kubebuilder has an excellent tutorial in which you literally go through the process of writing a replacement for the built-in CronJob controller. You're effectively solving the same types of problems that the people building Kubernetes were, and learning the underpinnings of how this operator model works, by building out your own implementation of a cron job on top of Kubernetes.

To make this a little more concrete, I want to show what a custom resource definition looks like. It is an OpenAPI schema that defines all the fields that are allowed, required, and/or optional for your custom resource. The example here is IngressRoute, which we actually used in the previous module when we were deploying our ingress for the application: we deployed Traefik, and it defined these custom resources, which it then used to define its routing configuration. If we look at the schema, we can see all the different properties. It tells us what the resource is, and that in order to create an IngressRoute object we need to define an apiVersion, a kind, metadata, and a spec; within that spec, we need to have routes, and we can have a TLS configuration. These projects, or you, define a custom resource definition like this, and it gets stored in your cluster just like any other resource.
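Here is a trimmed-down sketch of what such a definition looks like. It's loosely modeled on Traefik's IngressRoute CRD, but the group, version, and schema details are simplified for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.io   # must be <plural>.<group>
spec:
  group: traefik.io
  names:
    kind: IngressRoute
    plural: ingressroutes
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:           # the OpenAPI schema described above
          type: object
          properties:
            spec:
              type: object
              required: [routes]   # every IngressRoute must define routes
              properties:
                routes:
                  type: array
                  items:
                    type: object
                tls:
                  type: object
```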
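Once a definition like that is installed, you create instances of it just like built-in resources. Coming back to the cert-manager example from earlier, a Certificate resource might look roughly like this; the field names follow cert-manager's v1 API, but the specific values are hypothetical:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-tls
spec:
  secretName: example-tls      # Secret where the controller stores the issued certificate
  dnsNames:
    - example.com              # hypothetical domain
  issuerRef:
    name: letsencrypt-prod     # hypothetical issuer pointed at Let's Encrypt
    kind: ClusterIssuer
```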
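And for the Crossplane example, a managed resource for an EC2 instance might look something like the following. The exact group, version, and field names vary across Crossplane providers and versions, so treat this purely as an illustration of the pattern:

```yaml
apiVersion: ec2.aws.upbound.io/v1beta1   # assumed provider group/version
kind: Instance
metadata:
  name: example-instance
spec:
  forProvider:
    region: us-east-1
    ami: ami-0123456789abcdef0           # hypothetical AMI ID
    instanceType: t3.micro
  providerConfigRef:
    name: default                        # credentials for reaching the AWS API
```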
So if I do k get crd, I can see all the custom resource definitions installed in my cluster. You can see that Traefik deployed a number of custom resources; that Kong, which we used in the Gateway API section of the built-in resources module, deployed a number as well; and that, because this is a GKE cluster, Google has also deployed a number of custom resources that it uses on the back end.

Hopefully this description of a variety of different operators, and how they tie into the control loop model that Kubernetes provides, will get you thinking about how you could apply this pattern to your own application scenarios. Building operators is a relatively advanced topic and falls outside the scope of this course, but we will be deploying and using some of these operators in the following section to get a feel for how they work and how you should interact with these custom resources.