12.4: Kluctl (Single Service)

Transcript:

Now let's use Kluctl to do the same thing. With Kluctl, we're actually going to build out a full version of our application, including the third-party dependencies. We'll deploy a staging environment to our Civo cluster, and we'll deploy a production environment to our GKE cluster. Before we build out the entire project, though, let's start with a single service and see how we would use Kluctl to define the templates and environments we're going to deploy our Golang application into. From there, we'll extend it and add all of the other services, including our third-party dependency.

I'll navigate into the Kluctl single-service directory and open up the .kluctl.yaml file, which is where you define all of the different environments. As you can see, I've defined two targets. A target is the term Kluctl uses for an environment you're going to deploy into. Often this corresponds one-to-one with a Kubernetes cluster, so a single target may represent a single cluster. That's not required, though; you can have multiple targets that deploy into the same cluster, in different namespaces, for example. In all of the examples here, I'm going to keep that one-to-one mapping from target to cluster.

A really nice feature of targets is that you can specify a context, which refers to a context within your kubeconfig file. By doing so, you avoid the possibility of being authenticated against the wrong cluster and applying an incorrect configuration, which could be catastrophic if you did so against the production environment. By adding a context here, you're able to eliminate that whole class of bugs entirely.

In addition to targets, you can define arguments (args). These are top-level global variables that can be used throughout the entire Kluctl project. In this case, I'm passing a single argument: the name of the environment, which will be either production or staging. Using that argument, I'll be able to load additional configuration that gets injected into my templates.

The discriminator field is how Kluctl keeps track of which resources it is managing, so you need some sort of discriminator here that allows the tool to uniquely distinguish between different targets. If you're deploying into separate clusters, this is less critical, but it's still best practice. If you're deploying into the same cluster, it's the only way Kluctl can know which resources it is managing for a particular configuration, and that's what allows it to do things like prune and delete resources safely. Here I'm templating the name of the target itself into the discriminator, so it will be something like kluctl-staging and kluctl-production.
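To make that concrete, a .kluctl.yaml along the lines of what's described here might look something like the sketch below. The context names are hypothetical, and the exact arg values and discriminator template in the course repo may differ:

```yaml
# .kluctl.yaml (illustrative sketch -- names and values are approximations)
discriminator: kluctl-{{ target.name }}   # uniquely identifies the resources of each target

targets:
  - name: staging
    context: civo-staging        # kubeconfig context; hypothetical name
    args:
      environment: staging
  - name: production
    context: gke-production      # kubeconfig context; hypothetical name
    args:
      environment: production

args:
  - name: environment            # top-level arg available throughout the project
```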
The next important concept and file to look at is the deployment.yaml file. This is separate from the concept of a Kubernetes Deployment, and you'll see these deployment.yaml files throughout a Kluctl project. Within my repo, I use lowercase deployment to mean a Kluctl deployment and, as I have throughout the course, uppercase-D Deployment to refer to a Kubernetes Deployment.

This file does a few things. First, it loads in additional variables. As I said, there's a single argument at the top level; you could put more arguments there, but it can quickly get quite noisy if you put all of your templating configuration in the .kluctl.yaml file. So instead, I've opted to have just the environment name there, and then I use that environment name to load an additional configuration file from config/staging or config/production. If I go into my config subdirectory, you can see I have a production file and a staging file. The configuration at this top level holds variables that are shared across different services. For example, I have my hostname, which is shared between my two backend services as well as my React client. By putting it at this location, I'm able to reference that shared hostname across all of those different manifests. Depending on which target I choose, I'll either load in kubernetescourse.devopsdirective.com or postfix that with -staging, just like we saw with some of the other tools.

Also within this deployment.yaml, I have further deployments listed. You're able to reference other deployments to recursively traverse your directories and define how you want all of the resources to be applied. In this case, I have one path called namespaces, which is this subdirectory, and another path called services. Between them I have a barrier. A barrier ensures that items that come before it in the list are applied first, so my namespaces will be created before my services try to spin up. Finally, you can set common labels. These are Kubernetes labels that get applied to all of the resources provisioned by this deployment.

We can then take a look at the sub-deployments. If we look in namespaces, it's just a single Namespace object. Because that directory just has manifests at the top level, Kluctl will pick them up automatically. In the services subdirectory, I'm going to have multiple services eventually; for now I have my api-golang folder, and the deployment references that subdirectory. Within there, I have one more deployment.yaml. Here I'm passing it a path to the manifests directory, which is where all of my Kubernetes manifests live, and I have another config subdirectory containing configuration specific to this application. If I load that up, it contains a version and a number of replicas that get used in my Deployment manifest. If I go into that Deployment, I can see I'm referencing the api-golang replica count, which gets loaded in from this config: it will be two for production or one for staging. The version from this api-golang-specific configuration gets used as well. But if I look at the IngressRoute, there I'm using the shared hostname, which comes from those top-level configuration files.

So while this is a relatively minimal example, it shows a lot of the different ways you can handle templating and manage configuration for different environments, including sharing some of that configuration across services while keeping other parts service-specific.
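Putting those pieces together, the top-level deployment.yaml described above might look roughly like this sketch. The config file names, label, and templating details are approximations of what the lesson describes rather than the exact repo contents:

```yaml
# deployment.yaml (top level -- illustrative sketch)
vars:
  - file: config/{{ args.environment }}.yaml   # shared, environment-specific variables (e.g. hostname)

deployments:
  - path: namespaces       # applied first
  - barrier: true          # wait for everything above before continuing
  - path: services         # contains the api-golang sub-deployment

commonLabels:
  app.kubernetes.io/part-of: kubernetes-course   # hypothetical label applied to every resource
```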
Similar to helm template or kubectl kustomize, Kluctl has a render command. We'll pass it the -t flag to specify the target; most Kluctl commands require this flag, because Kluctl needs to know which target you're trying to execute the command against. So here we'll render out the production configuration, print it all to the console, and pipe that to yq to get syntax highlighting.

We can see we get all of the resources associated with our Go-based API, and we also get the Namespace at the top of the list, because that's what gets deployed first. Kluctl is adding some convenient labels, and you can see the common label we specified: if you recall, in our top-level deployment.yaml we said we want this Kubernetes course label applied to all of our resources, and here it is on each one. You can also see that our number of replicas was injected, as well as the version we specified.

Let's now render out the staging values. You can see our hostname indeed has the -staging postfix we'd want, and the Deployment has a single replica and uses a different version, picked up from the staging-specific config. And that's how we can configure a single service.
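To tie that rendered output back to the templates, here is roughly what the templated fields in the api-golang manifests could look like. Kluctl renders manifests with Jinja2, but the variable names, image reference, and IngressRoute details below are illustrative guesses rather than the exact contents of the course repo:

```yaml
# services/api-golang/manifests/deployment.yaml (illustrative excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-golang
spec:
  replicas: {{ api_golang.replicas }}        # 2 in production, 1 in staging (from the service config)
  selector:
    matchLabels:
      app: api-golang
  template:
    metadata:
      labels:
        app: api-golang
    spec:
      containers:
        - name: api-golang
          image: api-golang:{{ api_golang.version }}   # version also comes from the service config
---
# services/api-golang/manifests/ingress-route.yaml (illustrative excerpt)
# The route's host match uses the shared hostname from the top-level config,
# e.g. Host(`{{ shared_vars.hostname }}`), which renders with the -staging
# postfix for the staging target.
```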