Deploying a Spring Boot app as a native GraalVM image with Docker
In this final part of the Building an AI chatbot in Java series, we will deploy the Spring Boot AI chatbot application we've built as a GraalVM native image.
Requirements
The instructions in this article assume you are working with the Hilla Spring Boot application we've built in the series, but the general steps will work for any Spring Boot 3.0+ application. You can find the source of the chatbot application below.
You will also need a GraalVM JDK. You can use SDKMAN to install one if needed.
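If you use SDKMAN, installing and activating a GraalVM JDK looks something like the following. The exact version identifier below is an example and changes over time; run the list command first to see what's currently available:

```shell
# List available Java distributions and pick a GraalVM build from the output
sdk list java

# Install and activate it (identifier is an example; use one from the list)
sdk install java 22.3.r17-grl
sdk use java 22.3.r17-grl

# Confirm the active JDK is a GraalVM build
java -version
```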
Source code for the completed application
You can find the completed source code for the application on my GitHub, https://github.com/marcushellberg/docs-assistant.
Registering runtime hints for native compilation
Hilla supports native compilation out of the box; we only need to configure the parts that are specific to our application.
In our case, we need to make the compiler aware of two things:
- Resources that should be available to load at runtime.
- Classes that we'll use for reflection.
Registering runtime resources
We can register the token library encoding files with a RuntimeHintsRegistrar in Application.java:
@SpringBootApplication
@ImportRuntimeHints(Application.Hints.class)
public class Application implements AppShellConfigurator {

    // Register runtime hints for the token library
    public static class Hints implements RuntimeHintsRegistrar {
        @Override
        public void registerHints(RuntimeHints hints, ClassLoader classLoader) {
            hints.resources().registerPattern("com/knuddels/jtokkit/*.tiktoken");
        }
    }

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
Registering classes for reflection
Both OpenAIService and PineconeService use reflection to map JSON responses to Java objects. We can inform the compiler that we'll need these classes available at runtime with @RegisterReflectionForBinding annotations. I prefer to add these annotations to the methods that use reflection, to make them easier to track.
Add the annotations to the following methods:
PineconeService.java
@RegisterReflectionForBinding({QueryResponse.class})
public Mono<List<String>> findSimilarDocuments(List<Double> embedding, int maxResults, String namespace) {
    ...
}
OpenAIService.java
@RegisterReflectionForBinding({ModerationRequest.class, ModerationResponse.class})
private Mono<ModerationResponse> sendModerationRequest(ChatCompletionMessage message) {
    ...
}

@RegisterReflectionForBinding(EmbeddingResponse.class)
public Mono<List<Double>> createEmbedding(String text) {
    ...
}

@RegisterReflectionForBinding({ChatCompletionChunkResponse.class})
public Flux<String> generateCompletionStream(List<ChatCompletionMessage> messages) {
    ...
}
Compiling a GraalVM native image locally
Make sure the compilation works by running the following Maven command:
mvn -Pproduction -Pnative native:compile
The production profile tells Hilla to generate optimized frontend bundles, and the native profile enables the AOT compilation.
Once the compilation finishes (it'll take a few minutes), you can run the application binary in the target folder:
target/docs-assistant
If all goes well, the app should start in a fraction of a second, and you should be able to access it on localhost:8080.
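From another terminal, you can sanity-check that the server is up. A minimal check, assuming the app serves its UI at the root path:

```shell
# Should print 200 if the app is up and serving the frontend
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/
```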
Compiling a GraalVM native image in Docker
The end goal is to create a Docker image that we can deploy in the cloud. For this, let's create a Dockerfile:
# First stage: JDK with GraalVM
FROM ghcr.io/graalvm/jdk:22.3.2 AS build
# Update package lists and Install Maven
RUN microdnf update -y && \
    microdnf install -y maven gcc glibc-devel zlib-devel libstdc++-devel gcc-c++ && \
    microdnf clean all
WORKDIR /usr/src/app
# Copy pom.xml and download dependencies
COPY pom.xml .
RUN mvn dependency:go-offline
COPY . .
RUN mvn -Pnative -Pproduction native:compile
# Second stage: Lightweight debian-slim image
FROM debian:bookworm-slim
WORKDIR /app
# Copy the native binary from the build stage
COPY --from=build /usr/src/app/target/docs-assistant /app/docs-assistant
# Run the application
CMD ["/app/docs-assistant"]
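Before wiring up a cloud deployment, it's worth building and running the image locally. A sketch, assuming the Dockerfile sits in the project root and the API keys are exported in your shell:

```shell
# Build the image; the native compilation runs inside the build stage,
# so expect this to take several minutes
docker build -t docs-assistant .

# Run it, forwarding port 8080 and passing the secrets the app expects
docker run --rm -p 8080:8080 \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  -e PINECONE_API_KEY="$PINECONE_API_KEY" \
  -e PINECONE_API_URL="$PINECONE_API_URL" \
  docs-assistant
```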
The Dockerfile has two stages:
- A build stage that sets up the environment needed to compile the native binary.
- A second stage that creates a slim container with only the native binary.
Deploying the Docker container to Fly.io
Now that you have a Docker image, you can deploy it to virtually any cloud provider.
If you already have a preferred way of deploying your containers: congrats, you are done!
In my case, I'm going to deploy to Fly.io because it is super simple. We don't need to worry about setting up container registries, services, etc. It will use the Dockerfile to build and deploy the image for us.
Begin by signing up, then install the flyctl CLI tool.
Next, create a new app with the following command, answering the prompts as you go:
fly launch
Update the fly.toml file with some additional config:
app = "YOUR_APP_NAME"
primary_region = "YOUR_PRIMARY_REGION"
[env]
PORT = "8080"
[http_service]
internal_port = 8080
force_https = true
auto_stop_machines = true
auto_start_machines = true
min_machines_running = 0
Set the needed environment variables as secrets:
flyctl secrets set PINECONE_API_URL=XXX
flyctl secrets set PINECONE_API_KEY=XXX
flyctl secrets set OPENAI_API_KEY=XXX
Finally, deploy the app:
fly deploy
You can open the app in your browser with:
fly open
Extra: build and deploy using GitHub Actions
If you want to automate the build and deployment, you can use GitHub Actions. Here is my sample workflow file, which runs whenever a new release is published (the build is time-consuming enough that I don't want to run it for every commit).
Create a Fly.io access token and set it as the FLY_API_TOKEN secret in GitHub.
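If you haven't created a token before, the flyctl CLI can print one for you; at the time of writing, something like this works (check the Fly.io docs if the command has changed):

```shell
# Print an access token for CI use; copy the output into the
# FLY_API_TOKEN repository secret (Settings > Secrets and variables > Actions)
flyctl auth token
```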
name: Deploy to Fly.io
on:
  release:
    types: [published]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3
      - name: Install Fly
        uses: superfly/flyctl-actions/setup-flyctl@master
      - name: Deploy to Fly
        run: flyctl deploy --remote-only
        env:
          FLY_API_TOKEN: ${{ secrets.FLY_API_TOKEN }}
Conclusion
You've now learned how to build and deploy a ChatGPT-based AI bot with custom documentation using Hilla, Spring Boot, and React.
Missed an article? Read the entire Building an AI chatbot in Java series.