Ingress and Gateway API are two Kubernetes resources for routing external network traffic into a cluster and directing it to services.
Ingress allows routing HTTP/HTTPS traffic from a single external load balancer to multiple services inside the cluster. It is widely used and supports layer 7 (HTTP/HTTPS) routing. Many implementations exist, such as Ingress NGINX, HAProxy, and Traefik. While Ingress doesn’t natively handle layer 4 traffic (TCP/UDP), some controllers offer it through custom configurations via annotations.
A basic Ingress configuration routes traffic based on paths or hosts. The Ingress controller reads the configuration and sets up the routing rules, sending traffic to the appropriate services in the cluster.
Official docs: https://kubernetes.io/docs/concepts/services-networking/ingress/
The Gateway API is a more advanced alternative that natively supports both layer 7 and layer 4 routing. It introduces additional resources like Gateways and Routes, providing more flexibility and cleaner configurations without relying on annotations. Gateway API has built-in support for more complex routing scenarios and is designed to replace Ingress over time.
Official docs: https://gateway-api.sigs.k8s.io/
For new projects, the Gateway API is ideal due to its modern features, while Ingress remains a solid option for existing setups.
Let's deploy some Ingress resources and see how they behave.
First we will use the built-in GKE ingress controller (requires a GKE cluster), and then we will deploy ingress-nginx, which will work on any cluster.
# task 01-create-namespace
# - Create a namespace for these examples and set it as default.
kubectl apply -f Namespace.yaml
kubens 04--ingress
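If you don't have the course's Namespace.yaml handy, you can generate an equivalent manifest with a client-side dry run (assuming the namespace name matches the kubens command above):
# Print a Namespace manifest without creating anything
kubectl create namespace 04--ingress --dry-run=client -o yaml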
We need an application to route traffic to. In this case we will use a simple NGINX Deployment.
# task 02-apply-deployment
# - Apply the Deployment configuration.
kubectl apply -f Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-minimal
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-pod-label
  template:
    metadata:
      labels:
        app: nginx-pod-label
    spec:
      containers:
        - name: nginx
          image: nginx:1.26.0
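Before wiring up any Ingress, it's worth confirming the Deployment rolled out. A quick check, using the label from the manifest above:
# Wait for the rollout to finish
kubectl rollout status deployment nginx-minimal
# List the pods backing the Deployment
kubectl get pods -l app=nginx-pod-label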
Once those pods are running, we will route traffic to them with a NodePort Service and a GKE Ingress.
It is important to note that GKE Ingress for Application Load Balancers requires a NodePort type Service as the backend.
# task 03-apply-service-and-minimal-gke-ingress
# - Apply the NodePort Service and minimal GKE Ingress.
kubectl apply -f Service.nginx-nodeport.yaml
kubectl apply -f Ingress.minimal-gke.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort # For GKE load balancer this has to be a NodePort
  selector:
    app: nginx-pod-label
  ports:
    - protocol: TCP
      port: 80 # Port the service is listening on
      targetPort: 80 # Port the container is listening on (if unset, defaults to equal port value)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-gke
  annotations:
    kubernetes.io/ingress.class: "gce"
    # kubernetes.io/ingress.class: "gce-internal" (for traffic external to cluster but internal to VPC)
spec:
  # NOTE: You can't use spec.ingressClassName for GKE ingress
  rules:
    - host: "ingress-example-gke.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-nodeport
                port:
                  number: 80
💡 You could retrieve the IP address of the Google load balancer and set up a DNS record for it, or add it to your /etc/hosts file to route traffic to it locally.
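As a rough sketch of one way to test (assuming the load balancer has finished provisioning, which can take several minutes, and with <LB_IP> standing in for the actual address), you can read the address from the Ingress and use curl's --resolve flag instead of touching DNS:
# The ADDRESS column shows the load balancer IP once it is provisioned
kubectl get ingress minimal-gke
# Replace <LB_IP> with that address
curl --resolve ingress-example-gke.com:80:<LB_IP> http://ingress-example-gke.com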
If you don't want your implementation tied to a particular cloud provider, you can use a third-party ingress controller like ingress-nginx.
First, install it using helm:
# task 04-install-nginx-ingress-controller
# - Install nginx ingress controller using Helm.
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx \
--create-namespace \
--version 4.10.1
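Once the chart is installed, the controller runs in its own namespace and receives traffic via a LoadBalancer Service. A couple of quick checks (the Service name below assumes the chart's default naming for a release called ingress-nginx):
# Confirm the controller pod is running
kubectl get pods -n ingress-nginx
# The EXTERNAL-IP of this Service is where traffic enters the cluster
kubectl get service -n ingress-nginx ingress-nginx-controller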
Because the NGINX ingress controller runs as a pod inside the cluster, it can use a ClusterIP type Service as a backend.
# task 05-apply-service-and-minimal-nginx-ingress
# - Apply the ClusterIP Service and minimal NGINX Ingress.
kubectl apply -f Service.nginx-clusterip.yaml
kubectl apply -f Ingress.minimal-nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP # This is the default value
  selector:
    app: nginx-pod-label
  ports:
    - protocol: TCP
      port: 80 # Port the service is listening on
      targetPort: 80 # Port the container is listening on (if unset, defaults to equal port value)
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-nginx
  # Can use
  # annotations:
  #   kubernetes.io/ingress.class: "nginx"
spec:
  ingressClassName: nginx
  rules:
    - host: "ingress-example-nginx.com"
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: nginx-clusterip
                port:
                  number: 80
Once again, you can either create a DNS entry for the load balancer IP or use /etc/hosts to test.
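For a quick test without DNS, the same curl --resolve trick works here too (with <LB_IP> standing in for the EXTERNAL-IP of the ingress-nginx-controller Service):
# Send a request for the configured host through the NGINX ingress controller
curl --resolve ingress-example-nginx.com:80:<LB_IP> http://ingress-example-nginx.com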
Finally, clean up by deleting the namespace, which will also delete all resources within it.
# task 06-delete-namespace
# - Delete the namespace(s) to clean up.
kubectl delete -f Namespace.yaml
kubectl delete namespace ingress-nginx
We can now show how to accomplish the same task (bringing traffic from outside the cluster to our workloads) using the newer Gateway API.
As always, create a namespace to isolate the resources.
# task 01-create-namespace
# - Create a namespace for these examples and set it as default.
kubectl apply -f Namespace.yaml
kubens 04--gatewayapi
We can use the same test Deployment, ClusterIP, and NodePort configurations as before, but we need to recreate them in the new namespace.
# task 02-apply-deployment
# - Apply the Deployment configuration and services.
kubectl apply -f Deployment.yaml
kubectl apply -f Service.nginx-clusterip.yaml
kubectl apply -f Service.nginx-nodeport.yaml
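A quick sanity check that everything landed in the new namespace:
# Pods and Services for the Gateway API examples
kubectl get pods,services -n 04--gatewayapi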
GKE comes with a Gateway API implementation out of the box (it must be enabled in the cluster config).
We can use that to bring traffic to our cluster and eventually our test deployment.
# task 03-apply-gateway-route-gke
# - Apply the GKE Gateway and HTTPRoute resources.
kubectl apply -f Gateway.gke.yaml
kubectl apply -f HTTPRoute.gke.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: gke
spec:
  gatewayClassName: gke-l7-global-external-managed
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        kinds:
          - kind: HTTPRoute
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: gke
spec:
  parentRefs:
    - name: gke
  hostnames:
    - "gateway-example-gke.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: nginx-nodeport
          kind: Service
          port: 80
💡 As with Ingress, you can retrieve the load balancer IP and either set up a DNS record or modify your /etc/hosts file to route traffic locally.
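As a sketch (assuming GKE has finished provisioning the load balancer and populated the Gateway status), you can pull the address straight from the Gateway and test with curl, using <GW_IP> as a stand-in for the real address:
# The assigned address appears in the Gateway status once provisioning completes
kubectl get gateway gke -o jsonpath='{.status.addresses[0].value}'
# Replace <GW_IP> with that address
curl --resolve gateway-example-gke.com:80:<GW_IP> http://gateway-example-gke.com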
The Kong Ingress Controller is a third-party option for using the Gateway API. We can install it via Helm.
Note: In preparing for this course, I tried a handful of controllers that claimed varying degrees of support for the Gateway API. Many required significant work to get running, but Kong worked on the first try. Hopefully the available options continue to improve and mature as time goes on!
# task 04-install-kong-ingress-controller
# - Install Kong Ingress Controller using Helm.
helm upgrade --install kong ingress \
--repo https://charts.konghq.com \
-n kong \
--create-namespace \
--version 0.12.0
The Kong controller relies on a slightly newer version of the Gateway API specification than the one GKE installs by default. I upgraded the CRDs using:
# Apply necessary Gateway API CustomResourceDefinitions (CRDs)
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/experimental-install.yaml
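To confirm which Gateway API resources are now available on the cluster, you can list everything in the gateway.networking.k8s.io API group:
# GatewayClass, Gateway, HTTPRoute, etc. (plus experimental kinds like TCPRoute)
kubectl api-resources --api-group=gateway.networking.k8s.io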
With the controller installed, we can now create the corresponding resources. In this case we have to create a GatewayClass as well (this was created for us in the case of the GKE controller).
# task 05-apply-gatewayclass-gateway-route
# - Apply the GatewayClass, Gateway, and HTTPRoute using Kong.
kubectl apply -f GatewayClass.kong.yaml
kubectl apply -f Gateway.kong.yaml
kubectl apply -f HTTPRoute.kong.yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: kong
  annotations:
    konghq.com/gatewayclass-unmanaged: "true"
spec:
  controllerName: konghq.com/kic-gateway-controller
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: kong
spec:
  gatewayClassName: kong
  listeners:
    - name: proxy
      port: 80
      protocol: HTTP
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: kong
spec:
  parentRefs:
    - name: kong
  hostnames:
    - "gateway-example-kong.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: nginx-clusterip
          kind: Service
          port: 80
💡 NOTE: To enable TCPRoutes for Kong, refer to the Kong documentation.
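Testing follows the same pattern as the GKE Gateway (assuming Kong has published an address in the Gateway status; <GW_IP> is a stand-in for the real address):
# Address assigned to the Kong-managed Gateway
kubectl get gateway kong -o jsonpath='{.status.addresses[0].value}'
# Replace <GW_IP> with that address
curl --resolve gateway-example-kong.com:80:<GW_IP> http://gateway-example-kong.com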
Finally, clean up by deleting the namespace, which will also remove all resources.
# task 06-delete-namespace
# - Delete the namespace(s) to clean up.
kubectl delete -f Namespace.yaml
kubectl delete namespace kong