In my recent blog post, where I showed you how to estimate or measure the memory consumption of your Java application, I promised to provide some simple tips for optimizing your application’s memory usage. Some of the following tips are Vaadin-specific, while others apply to JVM services in general. As discussed in the previous article, despite Vaadin Flow’s stateful nature, you can host a ton of users even on modest hardware. However, there are a couple of pitfalls that can lead to excessive resource consumption.
Before diving into these optimizations, I urge you to pause and think. For the vast majority of applications, engaging in any of these optimizations is simply premature optimization. The first step is to determine whether you’re facing any issues at all, using profilers or observability tooling. If you are, try to pinpoint their exact source. Are your caches too large, do you have memory leaks, or are your sessions genuinely too big? Identify any clear sweet spots you could optimize, or consider whether investing in additional server memory might be the most cost-effective solution.
#1 Lazy loading of large data sets
Probably the most common mistake leading to excessive memory consumption is binding large in-memory collections of data to components. Using the setItems(List) method in Grid or ComboBox is super simple and efficient, even with relatively large datasets. These components automatically load only a fraction of the data to the browser side. However, with most backends, you may end up loading a separate copy of your entire database table into JVM memory for each and every user of your application.
In most cases, I encourage you to use the setItems(List) method, because it is the simplest approach. However, if you are expecting a combination of relatively large data sets and a substantial number of users, that should raise a red flag. Luckily, the recently improved APIs make it dead simple to lazily load only the currently visible portion of the data from your backend. Our updated documentation provides simple examples of how to bind data lazily, including between your server and the database.
// Trivial one-liner, but stores all contacts in memory
grid.setItems(repo.findAll()); // findAll() returns a List

// Slightly more complex, but only stores the currently
// needed page in memory
grid.setItems(q -> {
    PageRequest pr = VaadinSpringDataHelpers.toSpringPageRequest(q);
    return repo.findAll(pr).stream();
});
Code example: Two ways to bind domain objects to Vaadin Grid; the first one is easier, while the second one uses much less memory when dealing with large amounts of data.
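The same lazy pattern works for ComboBox, where the query also carries the filter string the user has typed. Here is a minimal sketch, assuming a Spring Data repository with a hypothetical findByNameContainingIgnoreCase finder and a Person entity:

import com.vaadin.flow.component.combobox.ComboBox;
import com.vaadin.flow.spring.data.VaadinSpringDataHelpers;

// Hypothetical repository with a Spring Data derived finder:
// Page<Person> findByNameContainingIgnoreCase(String name, Pageable pageable)
ComboBox<Person> comboBox = new ComboBox<>("Person");
comboBox.setItems(query -> {
    // The filter is whatever the user has typed into the field
    String filter = query.getFilter().orElse("");
    // Only the currently needed page is ever held in memory
    return repo.findByNameContainingIgnoreCase(
            filter, VaadinSpringDataHelpers.toSpringPageRequest(query))
            .stream();
});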
#2 Release resources earlier
Another common sweet spot for reducing memory pressure is to release resources earlier. When a user navigates away from your Vaadin app, the view they left behind is held in server memory until the session expires (usually after 30 minutes, the default in most servlet containers). If the user leaves the window open without any activity, a feature called heartbeating can extend the session’s lifetime even further. For applications with short-lived UI interactions, such as my hobby app with roughly 1-minute sessions, a significant portion of your server memory might be spent holding references to already orphaned UIs.
The de facto fix for this has been to adjust the session timeout and the heartbeat interval. However, it’s important to consider that sessions may involve different types of users, and a very short session timeout may, for example, force your users to log in to your system more often.
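If you decide to shorten the timeout, one option (a sketch, not the only way) is to set it programmatically when each Vaadin session is created; the heartbeat behavior itself is controlled by the vaadin.heartbeatInterval and vaadin.closeIdleSessions deployment properties rather than by Java code. The listener class name below is hypothetical:

import com.vaadin.flow.server.ServiceInitEvent;
import com.vaadin.flow.server.VaadinServiceInitListener;

// Register e.g. as a Spring bean or via
// META-INF/services/com.vaadin.flow.server.VaadinServiceInitListener
public class SessionTimeoutInitListener implements VaadinServiceInitListener {

    @Override
    public void serviceInit(ServiceInitEvent event) {
        event.getSource().addSessionInitListener(sessionInit ->
                // Expire idle sessions after 10 minutes instead of the
                // servlet container default (typically 30 minutes)
                sessionInit.getSession().getSession()
                        .setMaxInactiveInterval(600));
    }
}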
The upcoming Vaadin 24.1 release includes a major improvement (my contribution 😎) addressing this issue: most orphaned UIs will be closed eagerly using the so-called Beacon API. If you need this behavior in an older Vaadin version, there is an add-on available, along with a JS hack that lets you release resources immediately after a user leaves your app.
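Regardless of the Vaadin version, you can also release a session’s memory at a natural end point of the workflow instead of waiting for the timeout. Here is a minimal sketch of a logout action (the target URL is just a placeholder), which is a plain session invalidation, not the Beacon mechanism mentioned above:

import com.vaadin.flow.component.UI;
import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.server.VaadinSession;

// Closing the session frees all UIs and views held for this user
// immediately instead of after the session timeout
Button logout = new Button("Log out", click -> {
    // Point the browser somewhere outside the Vaadin app first...
    UI.getCurrent().getPage().setLocation("/logged-out.html");
    // ...then invalidate the underlying HTTP session to release its memory
    VaadinSession.getCurrent().getSession().invalidate();
});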
#3 Don’t reference large amounts of read-only data
Vaadin is built for writing highly interactive UIs for data-centric business apps. It is not specifically optimized for displaying large amounts of read-only data, such as static web pages or data visualizations. However, the ability to build web UIs in pure Java, and the fact that Vaadin is already used for the dynamic sections of an application, often leads to it being used for the static parts of your website or application as well. In these cases, it is not uncommon for the read-only part of your application, which displays a large chunk of text or graphics, to consume a significant portion of your JVM heap.
Although the built-in components in Vaadin are not optimized for read-only data, there are some easy tricks you can apply in your custom components or by using add-ons. Previous versions of Vaadin had “native” features supporting this type of usage. With Vaadin Flow, the core idea is to use Element.executeJs (a fire-and-forget style method) to pass data to the client side instead of Element.setAttribute/setProperty, which cache the values on the server side. It’s also important to avoid referencing heavy objects elsewhere within your component class.
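As an illustration of the idea (a hypothetical sketch, not an actual add-on), the following component pushes a large HTML string to the browser with executeJs and keeps no server-side reference to it:

import com.vaadin.flow.component.html.Div;

public class StaticHtmlView extends Div {

    public StaticHtmlView(String largeHtml) {
        // Fire-and-forget: the markup is sent to the client once and is
        // not stored in the server-side state tree
        getElement().executeJs("this.innerHTML = $0", largeHtml);
        // Deliberately NOT kept in a field and NOT set with
        // getElement().setProperty(...), which would cache it per UI
    }
}

The trade-off is that the markup is not part of the server-side state, so it will not be restored automatically if the browser needs to resynchronize the UI.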
One practical example of this “hack” is the LightChart add-on, an extended version of Vaadin Charts that uses this method to set the data actually plotted in the browser and clears its references to the dataset once the component has been rendered. Although you may lose support for certain interactive features, this approach can lead to game-changing memory savings when visualizing large amounts of read-only data. If you are instead showing a large amount of text content in your UI, check out the RichText component in the Viritin add-on, which takes a similar approach by moving the HTML markup to the client side.
#4 Use properly sized nodes
As discussed in my previous post, it is probably not wise to run a large number of small nodes in a typical JVM setup. If you add Vaadin, or any other approach that keeps per-user state in server memory, this becomes even less efficient: the OS, the default JVM, and a modern software stack already require hundreds of megabytes just to serve the first user.
Figure: an example of how using excessively small nodes in a cluster can result in inefficient memory usage.
Memory is so cheap these days that it rarely makes sense to work with nodes that have very limited memory, such as 512 MB, unless you know you will only have a handful of concurrent users. If, say, the base stack needs around 400 MB, a 512 MB node leaves only about 100 MB for actual users, while a 4 GB node leaves roughly 3.6 GB. With larger nodes, a much larger share of the memory can be used to serve actual users or for caching, which also improves CPU and database performance.
#5 Optimize the required base memory
As a last effort, you could look into slimming down the memory needed to serve the very first user of the system. This means finding an optimized operating system, shutting down irrelevant services (GUI or other servers), using a more lightweight software stack, and optimizing the JVM itself (e.g., using the module system, tuning garbage collection, considering OpenJ9 instead of HotSpot, or even opting for native compilation).
These types of optimizations have their limits. They may have negative consequences for other aspects of scalability and can consume a significant amount of time. In addition, they might hurt your developer experience or slow down your deployment pipeline.
Therefore, I recommend considering these tunings only as a final effort, particularly if you are looking into extreme memory savings in larger deployments or if you are constrained by hardware limitations, such as in an embedded system. Otherwise, you will most likely serve your customers better by fixing a bug or implementing a new feature!
New to Vaadin Flow? Learn to build and deploy a modern web app 100% in Java with Vaadin Quick Start ->