Evolution of application deployment over the past 20 years.
Configure your local and remote lab environments.
Covers the resource types that are included with Kubernetes, such as:
• Pod
• Job
Using Helm to manage Kubernetes resources.
Example microservice application.
Kubernetes manifests to deploy the demo application.
Explore how custom resources can add functionality.
Install additional software to enhance the deployment.
Improving the DevX when working with Kubernetes.
How to safely upgrade your clusters and nodes.
Implement CI/CD for your applications (with GitOps!).
The next topic we need to cover is how you can grant applications or users access to the Kubernetes API. For each of the resource types we've been talking about, you can define permissions that allow or deny access, controlling whether your applications can make calls to the Kubernetes API and interact with those resources.
You can grant this access on a per-namespace basis or on a cluster-wide basis, and we're going to show how to do both. As an example, I've written a very simple job (in the bottom right here) that runs a container with the kubectl command line and issues a kubectl get pods command in the default namespace, and then I have a variant that tries to get pods across all namespaces.
If you didn't include any additional information about a service account, the default permissions would deny this access and this job would fail. However, we can grant this access by creating a service account—in this case, namespace-pod-reader—creating a role that gives access to get, list, and watch pods in the default namespace, and then binding that role and service account together via a role binding.
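To make that concrete, here is a rough sketch of what those three objects could look like as manifests. The names namespace-pod-reader and pod-reader come from the transcript; the binding name and everything else are assumptions, not the course's actual files.

```yaml
# Sketch of the namespace-scoped RBAC objects described above (not the exact course manifests).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: namespace-pod-reader
  namespace: default            # the slide example operates in the default namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
  - apiGroups: [""]             # "" is the core API group, which is where Pods live
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods               # assumed name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: namespace-pod-reader
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
```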
Then, within the template in our job definition, we can use that service account and tell it to auto-mount the service account token so that our kubectl command can use it and succeed.
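In the Job itself, that ends up looking roughly like this; the job name and container image are assumptions:

```yaml
# Sketch of the Job from the slide: it references the service account and explicitly
# opts in to mounting the service account token so kubectl can authenticate.
apiVersion: batch/v1
kind: Job
metadata:
  name: list-pods                       # assumed name
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: namespace-pod-reader
      automountServiceAccountToken: true
      restartPolicy: Never
      containers:
        - name: kubectl
          image: bitnami/kubectl:1.29   # assumed image/tag
          command: ["kubectl", "get", "pods"]
```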
I think this will make more sense as we create these roles and see how they behave when they succeed and when they fail in an actual cluster environment. I'll navigate to the RBAC subdirectory and create my namespace.
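Creating the namespace itself is just one more manifest (or a one-line kubectl create namespace command); a minimal sketch, assuming the namespace is called 04-rbac:

```yaml
# Minimal namespace manifest; the name is assumed from the demo.
apiVersion: v1
kind: Namespace
metadata:
  name: 04-rbac
```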
First, let's create that job with no service account and therefore no permissions to query the Kubernetes API. If we take a look at what that job looks like, it's like the one from the slide, but we have not specified a service account and we've not specified anything about that service account token.
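In other words, compared with the sketch above, the pod template simply drops the service account lines, something like:

```yaml
# Pod template with no service account specified: the pod falls back to the
# namespace's "default" service account, which has no RBAC grants.
template:
  spec:
    restartPolicy: Never
    containers:
      - name: kubectl
        image: bitnami/kubectl:1.29   # assumed image/tag
        command: ["kubectl", "get", "pods"]
```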
Now if I do a kubectl get pods, we can see it errored, tried again, and errored again. If we look at the logs, the error message tells us that the service account being used—in this case, the default service account in my namespace—is not allowed to run the get pods command.
If we want to grant access to run that command, let's first do it at the namespace level. I'll start with this service account called namespace-pod-reader, and I'm also going to create a role called pod-reader.
A role applies only within a namespace, whereas a cluster role applies across the entire cluster. Then I'm going to bind that pod-reader role to my namespace-pod-reader service account using this role binding.
Finally, within my job specification, I'm now using the namespace-pod-reader service account. I'm specifying that it should auto-mount the service account token. This was a change that came into Kubernetes a few versions ago, where historically it would automatically mount a token associated with the service account just by specifying it. Now you have to set this to true if you want that to happen.
And then I'm issuing the get pods command specifically within the 04-rbac namespace that I'm operating in.
So if I do T03, we've created the service account, the role, and the role binding. We've created this job, and then we also created another job.
Here, I'm modifying the command that I'm running. So instead of only trying to get pods in the current namespace, I'm also trying to get pods across all namespaces.
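The only meaningful difference from the namespace-scoped job is an extra flag on the command, along these lines (again a sketch, with an assumed image):

```yaml
# Same container as before, but asking for pods across all namespaces.
containers:
  - name: kubectl
    image: bitnami/kubectl:1.29   # assumed image/tag
    command: ["kubectl", "get", "pods", "--all-namespaces"]
```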
So if I do a get pods now, this was my first job that failed; let me delete that just to clean it up. We have the first one, which operated only in the 04-rbac namespace, and then we have the second one, which failed, retried, and failed again.
The reason we only see two copies is that we set the backoffLimit to one. That means it fails once, tries one more time, and if that fails, it stops.
Let's look at the logs from each of these. The logs from the successful pod show the pods that exist within this namespace, because that's where the command was running. And if we look at the failed pod's logs, we can see it is not authorized to get pods at the cluster scope, because the role that we specified only had access within the namespace.
If we did need access at the cluster level, we could use a cluster role and a cluster role binding. So now I have another service account that I'm calling cluster-pod-reader, a cluster role, which has get, list, and watch access to pods, and this will apply across all namespaces.
You can specify any number of rules here within a cluster role. Here, I'm granting access to pods, but these could be any Kubernetes resource or any custom resource that is defined down the road.
Also, for these verbs, I'm specifying read access here, but if you needed write access, you would add the corresponding verbs (for example, create, update, patch, or delete) in the verbs section.
We then have our cluster role binding, tying together the service account with the cluster role. And finally, we have our job specification, where we're referencing the cluster-pod-reader and accessing pods across all namespaces.
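A hedged sketch of that cluster-scoped variant follows. The name cluster-pod-reader comes from the transcript; the namespace, binding and job names, and the container image are assumptions, and the real course manifests may differ.

```yaml
# Sketch of the cluster-scoped RBAC objects plus the job that uses them.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-pod-reader
  namespace: 04-rbac                  # assumed namespace; the service account itself is still namespaced
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-pod-reader            # assumed name
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only; add create/update/patch/delete for write access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-pod-reader            # assumed name
subjects:
  - kind: ServiceAccount
    name: cluster-pod-reader
    namespace: 04-rbac                # assumed namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-pod-reader
---
apiVersion: batch/v1
kind: Job
metadata:
  name: list-pods-all-namespaces      # assumed name
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: cluster-pod-reader
      automountServiceAccountToken: true
      restartPolicy: Never
      containers:
        - name: kubectl
          image: bitnami/kubectl:1.29 # assumed image/tag
          command: ["kubectl", "get", "pods", "--all-namespaces"]
```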
So I'll go ahead and apply that. We can see it is running, and then it completes successfully.
Let's look at the logs. Here you see pods from across a bunch of namespaces—including one I hadn't cleaned up yet, the current namespace, and the kube-system namespace. So it succeeded in getting pods across all the namespaces.
So if you ever need your workloads to access the Kubernetes API and resources within the Kubernetes API, RBAC and service accounts are going to be the mechanism to do that.
Also, while this course is not focused on administering Kubernetes clusters, as you grant access to users of the cluster, RBAC works the same way: you specify which resources any individual user or group of users is allowed to access, and which actions (verbs) they're allowed to take against those resources.
The specific mechanism for how you create a user account and map it to a role or a cluster role will differ across managed clusters, but under the hood you will be using this same RBAC system.
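To illustrate, a binding for human users looks almost identical to the service account bindings above; only the subject changes. The group name below is hypothetical, and where that group comes from depends on your cluster's authentication setup:

```yaml
# Hypothetical example: grant a group of users read access cluster-wide
# by binding them to the built-in "view" ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: developers-view               # hypothetical name
subjects:
  - kind: Group
    name: developers                  # hypothetical group, provided by your cluster's auth layer
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view                          # "view" is one of the default ClusterRoles
```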
And now that actually brings us to the end of the built-in Kubernetes resource types that I wanted to cover. Hopefully, that gives you a lay of the land of the different types of resources that are available and how you would use them to build out your application architectures.