13.4.2 GraalVM and Micronaut FAQ

How does Micronaut manage to run on GraalVM?

Micronaut features a Dependency Injection and Aspect-Oriented Programming runtime that uses no reflection. This makes it easier for Micronaut applications to run on GraalVM, since SubstrateVM has limitations, particularly around reflection.
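
As a rough illustration of what this means in practice (the class names below are made up; older Micronaut versions use javax.inject instead of jakarta.inject), the wiring in the following sketch is resolved by Micronaut’s annotation processor at compile time, so no reflective lookup happens when the native image runs:

  import jakarta.inject.Singleton; // javax.inject.Singleton on older Micronaut versions

  @Singleton
  class GreetingService {
      String greet(String name) {
          return "Hello, " + name;
      }
  }

  @Singleton
  class GreetingClient {
      private final GreetingService service;

      // Single-constructor injection: the bean definition is generated at
      // compile time by the annotation processor, not discovered via reflection.
      GreetingClient(GreetingService service) {
          this.service = service;
      }
  }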

How can I make a Micronaut application that uses picocli run on GraalVM?

Picocli provides a picocli-codegen module with a tool for generating a GraalVM reflection configuration file. The tool can be run manually or automatically as part of the build. The module’s README has usage instructions with code snippets for configuring Gradle and Maven to generate a cli-reflect.json file during the build. Pass the generated file to the native-image tool via the -H:ReflectionConfigurationFiles option.
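
As a minimal sketch of that last step (the jar name, main class, and output path of cli-reflect.json below are assumptions; adjust them to your build), the generated file is passed to native-image like this:

  native-image -cp build/libs/my-cli-app-all.jar \
      -H:ReflectionConfigurationFiles=build/cli-reflect.json \
      com.example.MyCommand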

What about other Third-Party Libraries?

Micronaut cannot guarantee that third-party libraries work on GraalVM SubstrateVM; it is up to each individual library to implement support.

I Get a “Class XXX is instantiated reflectively…” Exception. What do I do?

If you get an error such as:

  Class myclass.Foo[] is instantiated reflectively but was never registered. Register the class by using org.graalvm.nativeimage.RuntimeReflection

You may need to manually tweak the generated reflect.json file. For regular classes you need to add an entry into the array:

  [
    {
      "name" : "myclass.Foo",
      "allDeclaredConstructors" : true
    },
    ...
  ]

For arrays, use the JVM’s internal array representation. For example:

  [
    {
      "name" : "[Lmyclass.Foo;",
      "allDeclaredConstructors" : true
    },
    ...
  ]

What if I want to set the heap’s maximum size with -Xmx, but I get an OutOfMemoryError?

If you set the heap’s maximum size in the Dockerfile that you use to build your native image, you will probably get a runtime error like this:

  java.lang.OutOfMemoryError: Direct buffer memory

The problem is that Netty is trying to allocate 16MiB of memory per chunk with its default settings for io.netty.allocator.pageSize and io.netty.allocator.maxOrder:

  int defaultChunkSize = DEFAULT_PAGE_SIZE << DEFAULT_MAX_ORDER; // 8192 << 11 = 16MiB

The simplest solution is to specify io.netty.allocator.maxOrder explicitly in your Dockerfile’s entrypoint. A working example with -Xmx64m:

  ENTRYPOINT ["/app/application", "-Xmx64m", "-Dio.netty.allocator.maxOrder=8"]
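
To see why maxOrder=8 is enough, here is the same chunk-size calculation with the overridden value (a back-of-the-envelope check, assuming the default 8 KiB page size is left unchanged):

  int chunkSize = 8192 << 8; // 2 MiB per chunk, which fits comfortably in a 64 MiB heap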

If you want to go further, you can also experiment with io.netty.allocator.numHeapArenas or io.netty.allocator.numDirectArenas. You can find more information about Netty’s PooledByteBufAllocator in the official documentation.