Evolution of application deployment over the past 20 years.
Configure your local and remote lab environments.
Covers the resource types that are included with Kubernetes.
•Pod
•Job
Using Helm to manage Kubernetes resources.
Example microservice application.
Kubernetes manifests to deploy the demo application.
Explore how custom resources can add functionality.
Install additional software to enhance the deployment.
Improving the DevX when working with Kubernetes.
How to safely upgrade your clusters and nodes.
Implement CI/CD for your applications (with GitOps!)
Okay, we've reached the final module of the course. This is the one that takes the project from a one-off hobbyist setup, where you're manually deploying things, into a more production-ready system with automation: you push your code to Git, those changes are automatically built into container images, and those images are automatically deployed into your clusters. That's what we'll be looking at in this continuous integration and continuous delivery section. Specifically, it allows us to reach higher levels of capability. Early in the course, I showed you how to use the kubectl create command from the command line; we never want to do that in production, as it's really only useful for learning. The next level, which we reached in module 7, was a manual kubectl apply: we got our configurations ready and applied them. In module 12, we defined our configurations so they could apply to multiple environments, but applying them was still manual. The next level beyond that is to run kubectl apply from an automated pipeline like GitHub Actions or CircleCI. And the level beyond that, which has become a de facto standard for companies managing Kubernetes resources, is something called GitOps. GitOps is the idea that you keep all your manifests in Git, which we already do, but the state of that repo is automatically synced into the cluster and the manifests are applied so that your cluster state and your repo state maintain parity automatically.

For continuous integration, we'll be using GitHub Actions. Generally, the kinds of things you want to run in continuous integration pipelines are your test suite (when people try to merge new code from a pull request into main, you execute all of your tests against it to make sure nothing breaks), along with linting and validation. You would also generally build your container images and push them to a registry for your Kubernetes clusters to consume. On the continuous delivery side, we're thinking about how we get those changes into our Kubernetes manifests, how we apply those manifests to the clusters, and how we validate that those deployments are working as expected. In this case, we're going to use Kluctl and its built-in GitOps capabilities, combined with a GitHub Actions workflow that automatically updates our manifests within the Git repo, to achieve this automation. You could use something like Renovate Bot for this as well; I've implemented a somewhat hacky GitHub Actions step that does a find and replace on the specific versions we care about in the manifests.

So let's jump over to our code editor and get this working. GitHub Actions for a repo live in a top-level .github folder; within that, there's a workflows folder containing any number of YAML files. In this case, I have a single workflow that I'm naming image-ci. For a workflow, you can specify triggers. Here I'm saying that every time someone pushes to the main branch in this repo, I want this workflow to run, or any time someone pushes a tag that matches this number.number.number format. Those tags will be my production releases, whereas staging will be deployed with each push to main. Then I'm specifying this paths key, which says to only rebuild images when the applications themselves change.
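As a rough sketch, the trigger portion of a workflow like this could look as follows; the paths entry and the exact tag pattern are illustrative assumptions rather than the repo's exact contents:

```yaml
# .github/workflows/image-ci.yaml -- trigger section (illustrative sketch)
name: image-ci

on:
  push:
    branches:
      - main                      # every push to main deploys to staging
    tags:
      - "[0-9]+.[0-9]+.[0-9]+"    # semver-style tags are production releases
    paths:
      - "06-demo-application/**"  # only rebuild when the application code changes
```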
Because I've got so much stuff in this repo, I didn't want to rerun this workflow every single time I changed something in module 4, for example. This says to only run this workflow when the module 6 files change, because that's where the applications and their Dockerfiles live.

Within a workflow, you can specify any number of jobs. We start with a job that generates the image tag; this is the tag we'll use for the container images, and the job has just one output: the image tag itself. The second job takes that image tag as an input and builds, tags, and pushes all of our container images. The reason I split this into a separate job is so that the second job can run in parallel using a matrix strategy: we'll have five jobs running in parallel, building all of our container images at once, whereas the image tag generation only needs to happen a single time. If you had separate image tags for each of your services, because, for example, you release them independently, you might need to generate an image tag specific to each service, and in that case tag generation could live in the same job as the build. The third job, which I've named update-tags, goes into the repo, finds all of the Kubernetes manifests that use those tags, and updates them before creating a pull request, so that I can go in as a human, review the tags that have been modified, and merge that pull request to deploy the changes.

Let's look at the specifics of what's happening here. The very first step is to check out my code, and because I'm using the Git tags as the mechanism to generate the image tag, I need to set fetch-depth: 0. This ensures that the GitHub Actions workflow has the full history and all of the Git tags available to it rather than just the latest commit. I'm then installing Task, the task runner I've been using throughout; I want it available in the runner because I've defined the commands executed by this workflow as tasks. Then I generate the image tag. Here I'm specifying a working directory, my module 14 directory, and specifically I'm calling the generateVersionTag task, so let's take a look at what that actually is.

Let me navigate to module 14, and within that to the GitHub Actions directory, and you can see I've got this generateVersionTag command. Let's execute it. The output is 0.5.0-44-g followed by a commit hash. The command it's running is git describe, looking at all the tags, following only the first-parent history, and matching tags against this pattern. If I run git tag, you can see all the tags I've applied in this repo, ranging up to 0.5.0. The most recent one is that tag, so when I run this git describe command, it starts with that most recent tag, tells me I've made 44 commits since that tag, and then appends my latest commit hash, 51b089. This command gives me a mechanism to have my image tags increment with each new commit on main, and any time there's a release, the prefix changes, which makes it very easy to see from an image tag what the latest version on production is and how many commits ahead my staging version is. If I were on a commit with a specific tag, this would instead just return 0.5.0.
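For reference, a minimal Taskfile version of a generateVersionTag task along those lines could look something like this; the exact match pattern is an assumption based on the tags described above:

```yaml
# Taskfile.yaml -- version tag generation (illustrative sketch)
version: "3"

tasks:
  generateVersionTag:
    desc: Generate an image tag such as 0.5.0-44-g51b089 from git history
    cmds:
      # --tags: consider all tags, --first-parent: follow only first-parent history,
      # --match: only tags that look like a semver release (e.g. 0.5.0)
      - git describe --tags --first-parent --match "[0-9]*.[0-9]*.[0-9]*"
```

On a commit that is itself tagged, git describe prints just the tag (for example 0.5.0); otherwise it appends the number of commits since the tag and the abbreviated commit hash.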
Okay, so we generated our image tag, stored it in a variable, and echoed it into the GitHub output under the image tag key, because we specified this output, specifically calling out the step to grab it from. That value of 0.5.0-44-g51… then gets passed into the next job. This needs key ensures that this job won't run until the previous job has completed, and then we'll have one copy of this job for each of these paths. Let's take a look at what steps are going to run. Again, we start by checking out, which gives me the code associated with the event that triggered the workflow. I'm installing Task. I'm setting up QEMU, a CPU architecture emulation capability, so that I'll be able to build both amd64 and arm64 versions of my container images. I'm then setting up Docker Buildx. These are third-party GitHub Actions that you can pull in with just a couple of lines. There is a GitHub Actions Marketplace with all sorts of third-party actions you can look through to see if they meet your needs; if an existing action fits your particular workflow, you can leverage those open-source projects and avoid writing a bunch of custom scripts yourself.

So we set up QEMU, we set up Docker Buildx, and we then log into Docker Hub, passing it a username and token. If this copy of the job is associated with our Golang API, we need to set up Go in order to build it, and we also need to set up ko, the tool we're using to build that container image; both of those steps run only for the Golang API. Then finally, we run the build image command. We pull in the image tag from the previous job, we specify a working directory matching the path for the particular copy of this job that's running, and then we call out to Task, running our multi-architecture build task from that module. We're leveraging the work we already did in those earlier projects to execute this task, and because we have the same task name across each of our services, we're able to run this one command and have it work across all of those different projects simply by specifying a different working directory. One thing we're not optimizing here is saving and restoring the Docker cache across workflow runs; that's something you could do if you wanted to speed up build times by caching those Docker layers between runs. If you recall, when we issued these Buildx multi-architecture build commands, they included a push to the registry, which is why we don't have an additional push step: these tasks build and push the images.

The final job that I talked about is the update-tags job. This is where we take the tag that we used for building the images and update the corresponding Kubernetes manifests within our repo. Here I'm saying that both of the previous jobs need to complete before this one will execute. Then, in the following steps, we check out the code and install Task. The bulk of the work is done in this update image tags step: we get our tag from that initial job and run two tasks. The first updates our staging tags, and it runs every single time this workflow executes, whether from a push to main or from a release. Then, if this workflow was triggered by a tag, GITHUB_REF will look like refs/tags/ followed by the tag number.
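Putting those pieces together, the job structure could be sketched roughly like this; the job names, secret names, service paths, action versions, and build task name are assumptions for illustration, and the conditional Go and ko setup steps are omitted for brevity:

```yaml
# Continuation of the workflow -- jobs section (illustrative sketch)
jobs:
  generate-image-tag:
    runs-on: ubuntu-latest
    outputs:
      image_tag: ${{ steps.tag.outputs.image_tag }}
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0                     # full history so git describe sees all tags
      - uses: arduino/setup-task@v2
      - id: tag
        run: echo "image_tag=$(task generateVersionTag)" >> "$GITHUB_OUTPUT"

  build-images:
    needs: generate-image-tag                # wait for the tag before building
    runs-on: ubuntu-latest
    strategy:
      matrix:
        path:                                # one parallel copy of this job per service
          - 06-demo-application/api-golang
          - 06-demo-application/api-node
          - 06-demo-application/client-react
    steps:
      - uses: actions/checkout@v4
      - uses: arduino/setup-task@v2
      - uses: docker/setup-qemu-action@v3    # emulation for multi-architecture builds
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        working-directory: ${{ matrix.path }}
        env:
          IMAGE_TAG: ${{ needs.generate-image-tag.outputs.image_tag }}
        run: task build-multi-arch           # the same task name exists in every service
```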
And if that's true, we'll also run the update production tags task with the new tag. Let's go look at these tasks and see how they work. I've got a task called update image tags which, as its description says, recursively updates tags in files marked with the specified comment. It takes a starting path as input, then recursively goes through every matching file in the repo looking for a specific comment that identifies which lines to modify, and replaces those with the version specified. I've listed this particular Taskfile as excluded, because otherwise the identifier comments I'm using in it would cause the file to modify itself; the exclusion is just a way to prevent that from happening.

Then I run a series of commands. The first one is just error handling, saying that you have to specify an identifier comment and a starting path, otherwise it won't work; it's there to provide a helpful message for anyone using this command. I then echo out some information, and then comes the actual execution of the replacement. I use the find command with the starting path as the entry point, looking for any YAML file, then use grep to search for the identifier comment, take that output, and loop over it. For each of those files, as long as it isn't in my excluded files list, I update it using sed, the stream editor: I find the existing version tag alongside the identifier comment and replace the version with the new tag. Once I've looped through everything, those files have been updated in place, because I use the -i flag, and we can proceed.

I then have two additional tasks that call this top-level task. First they check that a new tag has been specified, since it's required, and then they call update image tags, passing the two inputs. In this case, the identifier for staging is staging image tag. Looking for those across our repo, we can see within module 12, in my Kluctl services directory, specifically in the staging.yaml config, that I have a version with a tag that looks like the ones I've been generating, followed by that identifier comment. When I issue this command, it will find all of these similar definitions and update them accordingly. Also, for my starting path, I'm using the git rev-parse command to give me the absolute path of the root of my Git repo. This allows it to work regardless of whether it's on my system with the repo cloned into one location, on the GitHub Actions runner where it's cloned somewhere else, or on your system; it will still give a proper starting point for traversing the repo and finding all of these versions.

So let's try this locally. We'll start just by running it. The task failed: new tag is required. Because I didn't specify a new tag, it didn't know what to update to, so the error checking we added did its job. Let's rerun it, but this time set the new tag to foo-bar-baz. It found all the YAML files, looped through them, and we can see it tried to update all of those files. Let's go into this one, and we can see that instead of the previous version, foo-bar-baz is now specified across all of those staging configuration files. If we look at the production ones, because they use a different identifier comment, they would not have changed. We can instead run update production tags with a different value, and it will loop through.
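A simplified Taskfile sketch of that find/grep/sed approach might look like the following; the identifier comment text, the tag line format matched by sed, and the task and variable names are assumptions based on the description rather than the exact course code:

```yaml
# Taskfile.yaml -- tag update tasks (illustrative sketch)
version: "3"

tasks:
  update-image-tags:
    desc: Recursively update image tags on lines marked with an identifier comment
    vars:
      EXCLUDED_FILE: Taskfile.yaml             # don't let the task rewrite itself
    cmds:
      - |
        # Error handling / help text: all three inputs are required
        if [ -z "{{.IDENTIFIER_COMMENT}}" ] || [ -z "{{.STARTING_PATH}}" ] || [ -z "{{.NEW_TAG}}" ]; then
          echo "IDENTIFIER_COMMENT, STARTING_PATH, and NEW_TAG must be set" && exit 1
        fi
      - |
        # Find YAML files under the starting path that contain the identifier comment,
        # then rewrite the marked tag lines in place (sed -i).
        find "{{.STARTING_PATH}}" -name '*.yaml' -exec grep -l "{{.IDENTIFIER_COMMENT}}" {} + |
        while read -r file; do
          case "$file" in *"{{.EXCLUDED_FILE}}") continue ;; esac
          sed -i "s|\(tag: \).* \(# {{.IDENTIFIER_COMMENT}}\)|\1{{.NEW_TAG}} \2|" "$file"
          echo "updated $file"
        done

  update-staging-tags:
    vars:
      REPO_ROOT:
        sh: git rev-parse --show-toplevel      # absolute repo root, wherever it was cloned
    cmds:
      - task: update-image-tags
        vars:
          IDENTIFIER_COMMENT: staging-image-tag
          STARTING_PATH: "{{.REPO_ROOT}}"
          NEW_TAG: "{{.NEW_TAG}}"
```

With a layout like this, the local test above would be run as task update-staging-tags NEW_TAG=foo-bar-baz.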
And now the production versions match that new value, while the staging versions are still on foo-bar-baz. This works perfectly fine, but it does make a few assumptions about how you're storing things; as long as you follow the convention of using these identifier comments, it's a reasonable approach for updating tags.

If we now jump back to our GitHub Actions workflow: on every single run of this workflow, we update the staging tags to match, and on runs triggered by release tags, we update both the staging tags and the production tags. At this point, those tags have been updated, but only locally within the Git repo on the runner, so now we need to get those changes pushed back up to GitHub so that we can merge them into main and get them deployed. This final step uses a third-party action called create pull request. It does require a personal access token; you can't just use the token associated with the workflow. It opens a pull request against whatever base branch you specify, in this case main, containing those changes.

So why don't I go ahead and make a commit to the repo, and we can watch this workflow happen. You'll recall that I have this paths filter, so it's only going to rebuild these images when I modify something in that path. I'll go into module 6 and make a trivial change to the readme, just adding a period. Now we can commit that and push it. If we go to the repo now, under Actions, you can see a new workflow run has been created. We start by generating that image tag, and then our five build jobs run in parallel. If we click into one, we can see the tag it received was 0.5.0-45 because it's now 45 commits since that latest release. Two of our applications have built and pushed successfully; let's go over to Docker Hub and confirm. We see this 0.5.0-45 version was pushed just a minute ago. Awesome.

Our build jobs complete after a few minutes, and now we're in that update tags job. We can see that it updated our staging configuration files like we would expect and then created a pull request. We should be able to go under pull requests and see it, with the correct tag in the title. Because it uses my personal access token, it shows my name as having pushed the latest commit to that branch. And if we look at the changes, it contains changes to all of those staging configs, just like we would expect. Because this was a push to main and not a push of a release tag, it only updated the staging ones.

And so that showcases an end-to-end CI workflow for generating useful image tags, building all of our container images across a parallel set of jobs, and then automatically updating those tags within our Kubernetes manifests. You'd obviously want additional workflows for things like running your unit tests and integration tests, but I wanted to focus here on how we get code changes into container images. Now we can shift focus to our GitOps installation to get the versions that are now represented in that pull request into our cluster.
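Before moving on, here is a rough sketch of what that final update-tags job with its create-pull-request step could look like; the secret name, branch name, and the specific action (peter-evans/create-pull-request is a common choice) are assumptions rather than the exact course configuration:

```yaml
# Final workflow job -- update manifests and open a PR (illustrative sketch)
  update-tags:
    needs: [generate-image-tag, build-images]   # both previous jobs must finish first
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: arduino/setup-task@v2
      - name: Update image tags
        env:
          NEW_TAG: ${{ needs.generate-image-tag.outputs.image_tag }}
        run: |
          task update-staging-tags NEW_TAG="$NEW_TAG"
          # Production tags are only updated when a release tag triggered the workflow
          if [[ "$GITHUB_REF" == refs/tags/* ]]; then
            task update-production-tags NEW_TAG="$NEW_TAG"
          fi
      - uses: peter-evans/create-pull-request@v6
        with:
          token: ${{ secrets.REPO_PAT }}          # personal access token, not the default GITHUB_TOKEN
          base: main
          branch: update-image-tags
          title: "Update image tags to ${{ needs.generate-image-tag.outputs.image_tag }}"
```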