Evolution of application deployment over the past 20 years.
Configure your local and remote lab environments.
Covers the resource types that are included with Kubernetes.
• Pod
• Job
Using Helm to manage Kubernetes resources.
Example microservice application.
Kubernetes manifests to deploy the demo application.
Explore how custom resources can add functionality.
Install additional software to enhance the deployment.
Improving the developer experience (DevX) when working with Kubernetes.
How to safely upgrade your clusters and nodes.
Implement CI/CD for your applications (with GitOps!)
Keeping your Kubernetes cluster updated ensures you receive security patches and can use the latest features. Below is a common approach to upgrade both the control plane and the worker nodes.
Before upgrading, verify that none of your deployed resources rely on API versions that will be removed in your target Kubernetes version. The kubent tool scans your cluster and warns about deprecated APIs.
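If kubent isn't installed yet, one common option (assuming you use Homebrew; prebuilt binaries are also published on the project's GitHub releases page) is:
# install kube-no-trouble (kubent); assumes Homebrew is available
brew install kubent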
# run the check
kubent
If the tool reports deprecated API usage, update those manifests first.
Kubernetes lets worker nodes run an older minor version than the control plane (up to two minor versions behind, extended to three as of Kubernetes 1.28), so upgrade the control plane first using your cloud provider's CLI. In Google Kubernetes Engine (GKE) you can select the rapid release channel and upgrade to a specific version:
# list available versions
gcloud container get-server-config --format "yaml(channels)"
# switch the cluster to the rapid channel
gcloud container clusters update $CLUSTER_NAME --release-channel rapid
# upgrade the control plane
gcloud container clusters upgrade $CLUSTER_NAME \
  --zone $GCP_ZONE \
  --master \
  --cluster-version 1.30.1-gke.1329003
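Once the upgrade completes, it's worth confirming that the control plane reports the expected version before touching the nodes; the currentMasterVersion field below is how GKE exposes it in the describe output:
# confirm the control plane version reported by GKE
gcloud container clusters describe $CLUSTER_NAME \
  --zone $GCP_ZONE \
  --format "value(currentMasterVersion)"
# the server version reported by kubectl should match
kubectl version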
Rather than upgrading nodes in place, create a new pool running the updated version. This "blue‑green" strategy lets you test the new nodes before removing the old ones and gives you an easy rollback option.
# create a new node pool on the latest version
gcloud container node-pools create updated-node-pool \
  --cluster $CLUSTER_NAME \
  --zone $GCP_ZONE \
  --machine-type e2-standard-2 \
  --num-nodes 2
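With the pool created, nodes from both pools should appear in the cluster, with the new ones reporting the upgraded kubelet version:
# list all nodes and their kubelet versions
kubectl get nodes -o wide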
Use Kubernetes scheduling features to move workloads from the old nodes to the new ones:
# mark old nodes unschedulable
kubectl cordon <node-name>
# evict workloads from the old nodes
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --force
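Doing this node by node works, but you can also iterate over every node in the old pool at once. The sketch below assumes the old pool is still named default-pool and relies on the cloud.google.com/gke-nodepool label that GKE adds to its nodes:
# cordon and drain every node in the old pool (assumes GKE's node pool label and the default-pool name)
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=default-pool -o name); do
  kubectl cordon "$node"
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data --force
done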
After draining, trigger a rolling restart of any Deployments or StatefulSets that require zero downtime so their replacement pods schedule onto the updated node pool.
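A rolling restart is one command per workload; my-app below is a hypothetical Deployment name standing in for your own:
# restart a workload so replacement pods come up on the new pool (my-app is a placeholder)
kubectl rollout restart deployment my-app
# wait until the restarted pods are ready
kubectl rollout status deployment my-app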
Once all workloads run successfully on the new nodes, remove the outdated pool:
gcloud container node-pools delete default-pool \
  --cluster $CLUSTER_NAME \
  --zone $GCP_ZONE
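You can then confirm that only the updated pool remains attached to the cluster:
# list the node pools that remain on the cluster
gcloud container node-pools list \
  --cluster $CLUSTER_NAME \
  --zone $GCP_ZONE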
This procedure minimizes the risk of downtime during upgrades. On‑premises environments may upgrade nodes in place due to hardware constraints, but when running in the cloud, creating a fresh node pool provides the safest path forward.