12.2: Kustomize

Transcript:

Okay, let's navigate into the module 12 subdirectory. Within this directory, we've got a Helm directory, a Kluctl directory, and a Kustomize directory. We're going to start with Kustomize. Looking at the structure here, you can see I've built out a configuration for all of my services. I haven't included, for example, the Helm installation for Postgres or the Helm installation for Traefik. However, this should give you an idea of what a Kustomize overlay model would look like for your first-party services.

Within the base directory, I have one subdirectory for each of my services. Within each of those, I have the resource YAML files. These are going to be very similar to what we saw in module 7, where we deployed onto Kubernetes in a single environment. Then, in addition to my base, I have a production subdirectory and a staging subdirectory. Each of these contains only the subset of resource definitions that needs to be modified from the base.

So let's take a look at our Go-based API. If we go into the base for our Golang application, this deployment.yaml file is actually identical to what we saw in module 7. We're defining a set of containers, including the one that we built and pushed. We're specifying the port, giving it some resource limits, and specifying the security context. This is all stuff that we've seen before. However, this image tag is something that's going to differ between environments. So how can we write an overlay that will allow us to modify just this subset of the YAML file?

If we go into our production overlay for the Golang API, we have a few files. We start out with this kustomization.yaml, which is how you tell Kustomize which files it should reference. In my base kustomization.yaml, I list each of my files under the resources section; these are the base definitions of my Kubernetes resources. In the overlay's kustomization.yaml, however, the resources section contains a path to the base folder, and that pulls in all of the resources from that base. I then have a number of patches. The first patch is my deployment.yaml, so let's take a look at what that looks like.

In this patch, I'm changing two fields from my deployment.yaml. Instead of one replica, I want two replicas for production. And instead of the placeholder tag foobarbaz, I want the tag production-version. When we apply this to the cluster, Kustomize is going to take that base and replace those two particular fields, such that the resulting deployed specification has these values. If we look at our staging overlay, in staging we're going to run a single replica with the staging-version tag. This shows that we can share most of the configuration in the common base and then modify only the fields we care about via these overlays.

The other patch I'm making replaces one of the routes in my ingress rule. For production, the host is going to be kubernetes-course.devopsdirective.com, and for staging, I'm going to append -staging to it. You'll notice that this patch looks a little different, which brings up the limitation I mentioned around Kustomize not having great capabilities for merging configurations that use YAML arrays. Within the IngressRoute, the routes are a YAML array.
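Before digging into that array patch, here's a minimal sketch of the base and overlay kustomization.yaml files we just walked through. The file names, paths, and the api-golang service name are assumptions for illustration; the actual repo layout may differ:

```yaml
# base/api-golang/kustomization.yaml (paths assumed for illustration)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
  - ingress-route.yaml
```

```yaml
# production/api-golang/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base/api-golang        # pull in every resource from the base
patches:
  - path: deployment.yaml        # strategic merge patch (a YAML map)
  - path: ingress-route-patch.yaml
    target:                      # target is required for the JSON 6902 patch below
      kind: IngressRoute
      name: api-golang
```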
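The production deployment patch described above might look roughly like this; the deployment name and registry are assumptions, but the two overridden fields mirror what we saw in the lesson:

```yaml
# production/api-golang/deployment.yaml -- strategic merge patch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-golang
spec:
  replicas: 2   # production runs two replicas
  template:
    spec:
      containers:
        - name: api-golang
          # placeholder tag; a real pipeline would inject the image version here
          image: registry.example.com/api-golang:production-version
```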
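And the staging counterpart is identical in shape, just with staging values:

```yaml
# staging/api-golang/deployment.yaml -- strategic merge patch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-golang
spec:
  replicas: 1   # staging runs a single replica
  template:
    spec:
      containers:
        - name: api-golang
          image: registry.example.com/api-golang:staging-version
```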
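That different-looking ingress patch is a JSON 6902 patch: a list of operations, each naming an explicit path into the document, including the numeric index of the array entry to change. A hedged sketch, assuming a Traefik IngressRoute where the host lives inside a match expression:

```yaml
# production/api-golang/ingress-route-patch.yaml -- JSON 6902 style patch
# (field names assumed for a Traefik IngressRoute; /0/ targets the first route)
- op: replace
  path: /spec/routes/0/match
  value: Host(`kubernetes-course.devopsdirective.com`)
```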
And so in order to figure out which rule we want to modify, we have to use this style of patch, where we specify an index within that array. Kustomize is not able to automatically align our patch with a particular entry in the array, which is why we had to use this style of patch for this particular use case. Because we didn't specify patches for any of the other resources, such as the secret or the service, those values are going to be deployed as-is from the base into all of our environments.

The other services look quite similar: we're patching in a new image tag and maybe modifying the resource requests or the replica counts. I'll let you review those on your own time.

Let's look at how we would interact with these setups. If we want to render out the values without applying them to the cluster, we can call kubectl kustomize and pass it the path to the overlay environment we want to render; I'm piping that to yq to get nice syntax highlighting. Let's render out the production values and scroll down to the Golang API. We've got the host that got patched in, and if we find the deployment, we can see we've got two replicas and the production-version tag is the one being used.

Let's instead render out staging and see the difference. For staging, our deployment has a single replica and uses staging-version. But if we look at the ingress route... it looks like I have a bug where I didn't patch it properly in one of my overlays. For node I did, for Golang I did, and for client-react-nginx I did not. So let's fix that. If we go into our staging overlay for client-react-nginx, this ingress route value was incorrect and should be staging. Let's re-render it, and now our ingress route for the staging configuration is correct.

If we wanted to apply these to the cluster, we could do kubectl apply -k and pass it the path to the corresponding overlay directory. Because I don't have real image version tags (I used production-version and staging-version as placeholders), I'm not going to do that right now, but that is how you would.

And so, as you can see, getting started with Kustomize is quite easy, and I think it's a great place for people using Kubernetes to get started with deploying into multiple environments.
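To recap, assuming the overlay directories sketched above, the commands used in this lesson look roughly like this:

```bash
# Render an overlay without applying it, piping through yq for highlighting
kubectl kustomize production | yq
kubectl kustomize staging | yq

# Apply an overlay directly to the cluster
kubectl apply -k production
```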