Evolution of application deployment over the past 20 years.
Configure your local and remote lab environments.
Covers the resource types that are included with Kubernetes, such as:
• Pod
• Job
Using Helm to manage Kubernetes resources.
Example microservice application.
Kubernetes manifests to deploy the demo application.
Explore how custom resources can add functionality.
Install additional software to enhance the deployment.
Improving the DevX when working with Kubernetes.
How to safely upgrade your clusters and nodes.
Implement CI/CD for your applications (with GitOps!)
Before signing off, I do want to call out a few additional topics that could serve as logical next steps as you continue to build your Kubernetes knowledge.

First, networking. We learned about Kubernetes networking, how to get traffic into your cluster, and how to communicate between services, but there's a lot more depth you could explore there. I'd look at the various CNI plugins and the trade-offs between them, as well as how to handle networking across multiple clusters and how to optimize your network for scalability. You could also look at network policies, which help you secure your cluster by defining the specific ingress and egress paths that should be allowed between your services. And thirdly, service meshes: tools like Istio and Linkerd provide a ton of networking capabilities, including mutual TLS, automated retries, and additional observability, with no application code changes.

On the workload optimization side, you should learn how to tune your services with the appropriate level of resources. There's a balance between optimizing resource utilization and cost efficiency while still achieving application stability. To strike that balance, you'll need to understand the resources your applications consume under different load patterns. Tools like Goldilocks and KRR can help here: they monitor your applications and provide recommendations about the resources you should be requesting.

I would also look at autoscaling. Throughout the course we had a static cluster size and a fixed number of replicas for all of our workloads, but Kubernetes lets you scale at both the pod and the cluster layer. The Horizontal Pod Autoscaler scales the number of replicas in a workload up or down based on CPU usage or other custom metrics, while the Cluster Autoscaler, or the open source project Karpenter, scales your cluster by adding and removing nodes based on pod scheduling demand.

Speaking of scheduling, we mostly let the default scheduler do its thing, with the exception of module 13, where we shifted some workloads from the old to the new node pool while performing an upgrade. You could look at using node affinities, taints and tolerations, or even custom schedulers to influence where pods get scheduled within your cluster, so that you're designing your systems for high availability, resource efficiency, and whatever specific application requirements you may have.

As you start running Kubernetes in production, you'll also need to understand and implement some sort of disaster recovery plan in case something goes wrong. Tools like Velero and Kasten K10 can help with this. If you're using GitOps like we did in the course, your cluster state should already be stored in version control, but any stateful applications you're running will need an appropriate backup and recovery solution, and you should test that solution periodically to make sure it still works.

Finally, we talked about operators in module 8 and how you can extend the Kubernetes API, but we didn't dive deep into actually doing so within the course. Taking those ideas and building out a custom operator of your own is another great way to take your Kubernetes skills to the next level.
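To make the network-policy idea above concrete, here is a minimal sketch (the `demo` namespace and the `api`/`db` labels are hypothetical placeholders, not names from the course). It allows only pods labeled `app: api` to reach the db pods on port 5432 and denies all other ingress to them; note that enforcement requires a CNI plugin that supports NetworkPolicy.

```yaml
# Hypothetical example: only pods labeled app=api may reach app=db on port 5432.
# Once this policy selects the db pods, all other ingress to them is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 5432
```

Egress works the same way: add `Egress` to `policyTypes` and define an `egress` section listing the destinations your pods are allowed to reach.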
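On the resource-tuning point, the recommendations from a tool like Goldilocks or KRR ultimately end up as requests and limits on your containers. Here's a hedged sketch of what that looks like in a Deployment (the names, image, and numbers are placeholders, not values from the course):

```yaml
# Hypothetical Deployment snippet: the request/limit values are placeholders;
# tools like Goldilocks or KRR suggest values based on observed usage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: ghcr.io/example/api:1.0.0   # placeholder image
          resources:
            requests:
              cpu: 100m        # the scheduler reserves this much per pod
              memory: 128Mi
            limits:
              memory: 256Mi    # the container is OOM-killed above this
```

The requests drive scheduling and utilization math, while the memory limit caps how far a container can grow, which is where the stability-versus-cost trade-off shows up in practice.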
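For pod-level autoscaling, a HorizontalPodAutoscaler references an existing workload and a target metric. This is a minimal sketch targeting the hypothetical `api` Deployment above; it assumes a metrics source such as metrics-server is running in the cluster:

```yaml
# Hypothetical HorizontalPodAutoscaler: scales the api Deployment between
# 2 and 10 replicas, targeting ~70% average CPU utilization of the requests.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Utilization here is measured against the CPU requests on the pods, which is one more reason to get those request values right before turning on autoscaling.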
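On the scheduling side, node affinity and tolerations both live in the pod template spec. This fragment is a hypothetical sketch (the `workload-type` label and `dedicated` taint are made-up examples) that pins a workload to nodes labeled for batch work and tolerates the taint reserving those nodes:

```yaml
# Hypothetical fragment of a pod template spec: require nodes labeled
# workload-type=batch and tolerate the taint that keeps other pods off them.
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: workload-type
                operator: In
                values:
                  - batch
  tolerations:
    - key: dedicated
      operator: Equal
      value: batch
      effect: NoSchedule
```

The taint keeps general workloads off the dedicated nodes, while the affinity rule makes sure this workload actually lands there rather than merely being allowed to.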
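And for disaster recovery, Velero can be driven declaratively. As a rough sketch (the namespace and retention values are placeholders, so check the Velero docs for the exact fields in your version), a Schedule resource like this would take a nightly backup of one namespace:

```yaml
# Hypothetical Velero Schedule: nightly backup of the demo namespace,
# retained for 30 days. Field names follow the velero.io/v1 Schedule CRD.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: demo-nightly
  namespace: velero
spec:
  schedule: "0 2 * * *"        # cron: every day at 02:00
  template:
    includedNamespaces:
      - demo
    ttl: 720h0m0s              # keep each backup for 30 days
```

Whatever tool you pick, the backups only count if you periodically restore from them into a scratch cluster and confirm the result.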
And with that, you've reached the end of the course. If you made it this far, my hope is that you feel ready to deploy and operate your applications on Kubernetes. To briefly recap: we started by building our foundational knowledge of Kubernetes, learning about the history and motivations for the system, exploring the built-in capabilities, and learning how to use Helm to deploy applications. We then took that knowledge and deployed a representative demo application, along with a variety of useful tooling, into a Kubernetes cluster. Finally, we explored what happens after your app is deployed: how to debug, how to deploy to multiple environments, and how to automate the process of getting code into your cluster with automated pipelines and GitOps.

My goal is for this course to become the go-to resource for people who want to learn Kubernetes effectively. If you found value in the course, consider sharing it with your colleagues at work or with your network on social media. If you do so, please tag me: I'm @sidpalas on Twitter, or you can search for Sid Palas on LinkedIn. Also, if you want to connect with others who have completed the course, come join my Discord community. There's a link in the description, and we can continue to talk about all things Kubernetes.

Remember, Kubernetes is a vast and evolving ecosystem. Hopefully this course has given you a solid foundation, but there's always more to learn.
And remember: Just. Keep. Building. 🚀