Getting Started with Kubernetes Kit
- Requirements
- Add Kubernetes Kit Dependency
- Session Replication Backend
- Build & Deploy the Application
- Gateway
- Scaling the Application
This tutorial guides you through setting up and deploying an application with Kubernetes Kit in a local Kubernetes cluster.
Requirements
This tutorial assumes that you have the following software installed on your local machine:
- Java JDK and Maven, for building the application
- Docker, for building the container image
- A local Kubernetes cluster, together with kubectl and Helm for deploying to it
Additionally, you’ll need to download a new Vaadin project from start.vaadin.com.
Add Kubernetes Kit Dependency
To get started, add Kubernetes Kit as a dependency to the project:
Source code
pom.xml
<dependency>
    <groupId>com.vaadin</groupId>
    <artifactId>kubernetes-kit-starter</artifactId>
</dependency>
Then add the following to the application configuration file:
Source code
application.properties
vaadin.devmode.sessionSerialization.enabled=true
vaadin.serialization.transients.include-packages=com.example.application
The first property enables the session serialization debug tool during development. The second defines the classes which should be inspected for transient fields during session serialization; in this case, inspection is limited to classes within the starter project. For more information, see Session Replication.
Session Replication Backend
You don’t need to enable session replication if you only need rolling updates.
High availability and the possibility to scale applications up and down in a cluster are enabled by storing session data in a backend that is accessible to the cluster. This tutorial uses Hazelcast for this purpose. However, Redis is also supported.
You’ll need to add the Hazelcast dependency to the project:
Source code
pom.xml
<dependency>
    <groupId>com.hazelcast</groupId>
    <artifactId>hazelcast</artifactId>
</dependency>
Then add the following property to the application configuration file:
Source code
application.properties
vaadin.kubernetes.hazelcast.service-name=hazelcast-service
Next, deploy the Hazelcast service to the cluster by running the following command:
Source code
terminal
kubectl apply -f https://raw.githubusercontent.com/hazelcast/hazelcast/master/kubernetes-rbac.yaml
Note
Deploying to Another Namespace
If you want to deploy to another namespace than the default one, adjust the namespace in the RBAC manifest before applying it.
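The RBAC manifest grants read access to cluster state so that Hazelcast pods can discover each other. As a quick sanity check (a sketch, assuming you deployed to the default namespace and the manifest covers the endpoints resource), you can ask Kubernetes whether the permission is in place:

```shell
# Check that the default service account can read endpoints,
# which Hazelcast's Kubernetes discovery relies on (assumes the default namespace)
kubectl auth can-i get endpoints --as=system:serviceaccount:default:default
```

The command prints `yes` when the permission has been granted.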
Deploy a load balancer service to your cluster. Create the following Kubernetes manifest file:
Source code
hazelcast.yaml
apiVersion: v1
kind: Service
metadata:
  name: hazelcast-service
spec:
  selector:
    app: my-app
  ports:
    - name: hazelcast
      port: 5701
  type: LoadBalancer
Then deploy the manifest to your cluster:
Source code
terminal
kubectl apply -f hazelcast.yaml
Run the following command to see that the load balancer service is running:
Source code
terminal
kubectl get svc hazelcast-service
You should see the following output (the IP numbers may differ):
Source code
NAME                TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
hazelcast-service   LoadBalancer   10.96.178.190   <pending>     5701:31516/TCP   18h
Build & Deploy the Application
The next step is to build a container image of the application and deploy it to your Kubernetes cluster.
To do this, clean the project and create a production build of the application:
Source code
terminal
mvn clean package
Next, create the following Dockerfile in the project directory:
Source code
Dockerfile
FROM openjdk:17-jdk-slim
COPY target/*.jar /usr/app/app.jar
RUN useradd -m myuser
USER myuser
EXPOSE 8080
CMD java -jar /usr/app/app.jar
Open a terminal in the project directory and use Docker to build a container image for the application. Tag it with version 1.0.0. Note the required period (.) at the end of the command:
Source code
terminal
docker build -t my-app:1.0.0 .
Note
Image Not Found by Cluster
Depending on the Kubernetes cluster you’re using, you may need to publish the image to a local registry or push the image to the cluster. Otherwise, the image won’t be found. Refer to your cluster documentation for more information. In a production environment, you can publish the image to a registry that is accessible by the cluster.
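For example, with a local kind or minikube cluster (an assumption; your tooling may differ), the locally built image can be made available to the cluster like this:

```shell
# kind: copy the locally built image into the cluster nodes
kind load docker-image my-app:1.0.0

# minikube: load the image into the cluster's container runtime
minikube image load my-app:1.0.0
```

Run whichever command matches your cluster tooling.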
Create a deployment manifest for the application:
Source code
app-v1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
      version: 1.0.0
  template:
    metadata:
      labels:
        app: my-app
        version: 1.0.0
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0
          # Sets the APP_VERSION environment variable for the container, which is
          # used during the version update to compare with the new version
          env:
            - name: APP_VERSION
              value: 1.0.0
          ports:
            - name: http
              containerPort: 8080
            - name: multicast
              containerPort: 5701
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-v1
spec:
  selector:
    app: my-app
    version: 1.0.0
  ports:
    - name: http
      port: 80
      targetPort: http
The multicast port 5701 is only used for session replication using Hazelcast.
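Before applying the manifest, you can optionally validate it client-side; the `--dry-run=client` flag renders the objects without creating any resources:

```shell
# Validate the manifest locally without creating anything in the cluster
kubectl apply --dry-run=client -f app-v1.yaml
```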
Now deploy the manifest to your cluster:
Source code
terminal
kubectl apply -f app-v1.yaml
Run the following command to verify that you have four pods running:
Source code
terminal
kubectl get pods
You should see output similar to the following:
Source code
NAME                        READY   STATUS    RESTARTS   AGE
my-app-v1-f87bfcbb4-5qjml   1/1     Running   0          22s
my-app-v1-f87bfcbb4-czkzr   1/1     Running   0          22s
my-app-v1-f87bfcbb4-gjqw6   1/1     Running   0          22s
my-app-v1-f87bfcbb4-rxvjb   1/1     Running   0          22s
Gateway
To access the application from outside the cluster, you need to set up a gateway. This tutorial uses Envoy Gateway as the Gateway API implementation, which provides built-in support for session persistence — a requirement for Vaadin applications running with multiple replicas.
Important
Ingress NGINX Retirement
The Kubernetes community has announced that the Ingress NGINX controller is being retired, with best-effort maintenance only until March 2026. The Gateway API is the recommended replacement. This tutorial uses Envoy Gateway as the Gateway API implementation, but other implementations that support session persistence should also work.
Install Envoy Gateway in your cluster using Helm:
Source code
terminal
helm install eg oci://docker.io/envoyproxy/gateway-helm --version v1.6.0 -n envoy-gateway-system --create-namespace
Then create a gateway manifest:
Source code
gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: eg
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: public-gateway
spec:
  gatewayClassName: eg
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      allowedRoutes:
        namespaces:
          from: All
Deploy it to your cluster:
Source code
terminal
kubectl apply -f gateway.yaml
Next, create an HTTP route manifest to direct traffic to the application with cookie-based session persistence:
Source code
route-v1.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-app
spec:
  parentRefs:
    - name: public-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: my-app-v1
          port: 80
      sessionPersistence:
        sessionName: vaadin-session
        type: Cookie
The sessionName field sets the name of the session persistence cookie. Kubernetes Kit needs to know this name so it can remove the cookie when the user switches to a new version (see Rolling Updates). If sessionName is omitted, the gateway implementation generates an implementation-specific cookie name.
Then add the following property to the application configuration to match the session cookie name:
Source code
application.properties
vaadin.kubernetes.sticky-session-cookie-name=vaadin-session
Deploy the manifest to your cluster:
Source code
terminal
kubectl apply -f route-v1.yaml
The application should now be available at localhost.
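If the application doesn’t respond, you can inspect the status of the gateway resources; the gateway should report an address and a Programmed condition, and the route should list the gateway as an accepted parent:

```shell
# The gateway should have an assigned address and a Programmed condition
kubectl get gateway public-gateway

# The route should reference public-gateway and be accepted
kubectl get httproute my-app
```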
Note
Accessing Application Locally
To access the application from your local machine, it may be necessary to forward a local port to the gateway service. The application should then be available at the forwarded local address.
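As a sketch (the name of the Envoy service created for the gateway is generated and varies by installation), port forwarding would look something like this:

```shell
# Find the Envoy service that Envoy Gateway created for the gateway
kubectl get svc -n envoy-gateway-system

# Forward a local port to it; substitute the actual service name
kubectl port-forward -n envoy-gateway-system svc/<envoy-service-name> 8080:80
```

The application would then be reachable at localhost:8080.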
Scaling the Application
You can use kubectl commands to increase or reduce the number of pods used by the deployment. For example, the following command increases the number of pods to five:
Source code
terminal
kubectl scale deployment/my-app-v1 --replicas=5
You can also simulate the failure of a specific pod by deleting it by name, like so:
Source code
terminal
kubectl delete pod/<pod-name>
Remember to substitute the name with your application pod’s name. You can see the names of all pods with the kubectl get pods command.
If you’ve enabled session replication, this can be used to check that it’s performing as expected. If you open the application and then delete the pod to which it’s connected, you shouldn’t lose session data after the next user interaction.
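If session replication with Hazelcast is enabled, you can also confirm that the pods have formed a single Hazelcast cluster by inspecting the application logs. With default Hazelcast logging (an assumption about your logging configuration), the member list is printed on startup:

```shell
# Pick one application pod and search its log for the Hazelcast member list;
# with four replicas, the cluster should report four members
kubectl logs deploy/my-app-v1 | grep -A 6 "Members {"
```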