Running LLVM on GraalVM

GraalVM provides an implementation of the lli tool to directly execute programs in LLVM bitcode form.

In contrast to static compilation that is normally used for LLVM-based languages, LLI first interprets the bitcode and then dynamically compiles the hot parts of the program using the GraalVM compiler. This allows seamless interoperability with the dynamic languages supported by GraalVM.

Run programs in LLVM bitcode format:

  lli [LLI Options] [GraalVM Options] [Polyglot Options] filename [program args]

Where filename is a single executable that contains LLVM bitcode.

Note: LLVM bitcode is platform-dependent. The program must be compiled to bitcode for the appropriate platform.

Compiling to LLVM Bitcode

GraalVM can execute C/C++, Rust, and other languages that can be compiled to LLVM bitcode. As a first step, you have to compile the program to LLVM bitcode using an LLVM frontend such as clang. C/C++ code can be compiled to LLVM bitcode using the clang shipped with GraalVM.

To download a pre-built LLVM toolchain for GraalVM, execute the following commands:

  $ gu install llvm-toolchain
  $ export LLVM_TOOLCHAIN=$(lli --print-toolchain-path)

Here is some example C code named hello.c:

  #include <stdio.h>

  int main() {
      printf("Hello from GraalVM!\n");
      return 0;
  }

You can compile hello.c to an executable with embedded LLVM bitcode as follows:

  $ $LLVM_TOOLCHAIN/clang hello.c -o hello

You can then run hello on GraalVM like this:

  $ lli hello
  Hello from GraalVM!

External Library Dependencies

If the bitcode file depends on external libraries, GraalVM will automatically pick up the dependencies from the binary headers.

For example:

  #include <unistd.h>
  #include <ncurses.h>

  int main() {
      initscr();
      printw("Hello, Curses!");
      refresh();
      sleep(1);
      endwin();
      return 0;
  }

This can be compiled and run with:

  $ $LLVM_TOOLCHAIN/clang hello-curses.c -lncurses -o hello-curses
  $ lli hello-curses

Running C++

For running C++ code, the GraalVM LLVM runtime requires the libc++ standard library from the LLVM project. The LLVM toolchain shipped with GraalVM automatically links against libc++.

  #include <iostream>

  int main() {
      std::cout << "Hello, C++ World!" << std::endl;
  }

Compile the code with clang++ and run it:

  $ $LLVM_TOOLCHAIN/clang++ hello-c++.cpp -o hello-c++
  $ lli hello-c++
  Hello, C++ World!

Running Rust

The LLVM toolchain that is bundled with GraalVM does not come with the Rust compiler. To install Rust, run the following in your terminal, then follow the on-screen instructions:

  curl https://sh.rustup.rs -sSf | sh

Here is an example Rust program:

  fn main() {
      println!("Hello Rust!");
  }

This can be compiled to bitcode with the --emit=llvm-bc flag:

  $ rustc --emit=llvm-bc hello-rust.rs

To run the Rust program, we have to tell GraalVM where to find the Rust standard libraries.

  $ lli --lib $(rustc --print sysroot)/lib/libstd-* hello-rust.bc
  Hello Rust!

Interoperability

GraalVM supports several other programming languages, including JavaScript, Python, Ruby, and R. While LLI is designed to run LLVM bitcode, it also provides an API for programming language interoperability that lets you execute code from any other language that GraalVM supports.

Dynamic languages like JavaScript usually access object members by name. Since names are normally not preserved in LLVM bitcode, it must be compiled with debug info enabled (the LLVM toolchain shipped with GraalVM will automatically enable debug info).

The following example demonstrates how you can use the API for interoperabilitywith other programming languages.

Let us define a C struct for points and implement allocation functions:

  // cpart.c
  #include <polyglot.h>
  #include <stdlib.h>
  #include <stdio.h>

  struct Point {
      double x;
      double y;
  };

  POLYGLOT_DECLARE_STRUCT(Point)

  void *allocNativePoint() {
      struct Point *ret = malloc(sizeof(*ret));
      return polyglot_from_Point(ret);
  }

  void *allocNativePointArray(int length) {
      struct Point *ret = calloc(length, sizeof(*ret));
      return polyglot_from_Point_array(ret, length);
  }

  void freeNativePoint(struct Point *p) {
      free(p);
  }

  void printPoint(struct Point *p) {
      printf("Point<%f,%f>\n", p->x, p->y);
  }

Make sure LLVM_TOOLCHAIN resolves to the GraalVM LLVM toolchain (lli --print-toolchain-path), then compile cpart.c with the following command (the polyglot-mock library defines the polyglot API functions used in the example):

  $ $LLVM_TOOLCHAIN/clang -shared cpart.c -lpolyglot-mock -o cpart.so

You can access your C/C++ code from other languages like JavaScript:

  // jspart.js

  // Load and parse the LLVM bitcode into GraalVM
  var cpart = Polyglot.evalFile("llvm", "cpart.so");

  // Allocate a light-weight C struct
  var point = cpart.allocNativePoint();

  // Access it as if it were a JS object
  point.x = 5;
  point.y = 7;

  // Pass it back to a native function
  cpart.printPoint(point);

  // We can also allocate an array of structs
  var pointArray = cpart.allocNativePointArray(15);

  // We can access this array like it was a JS array
  for (var i = 0; i < pointArray.length; i++) {
      var p = pointArray[i];
      p.x = i;
      p.y = 2 * i;
  }
  cpart.printPoint(pointArray[3]);

  // We can also pass a JS object to a native function
  cpart.printPoint({x: 17, y: 42});

  // Don't forget to free the unmanaged data objects
  cpart.freeNativePoint(point);
  cpart.freeNativePoint(pointArray);

Run this JavaScript file with:

  $ js --polyglot jspart.js
  Point<5.000000,7.000000>
  Point<3.000000,6.000000>
  Point<17.000000,42.000000>

Polyglot C API

There are also lower-level API functions for directly accessing polyglot values from C. See the Polyglot Reference and the documentation comments in polyglot.h for more details.

For example, this program allocates and accesses a Java array from C:

  #include <stdio.h>
  #include <polyglot.h>

  int main() {
      void *arrayType = polyglot_java_type("int[]");
      void *array = polyglot_new_instance(arrayType, 4);
      polyglot_set_array_element(array, 2, 24);
      int element = polyglot_as_i32(polyglot_get_array_element(array, 2));
      printf("%d\n", element);
      return element;
  }

Compile it to LLVM bitcode:

  $ $LLVM_TOOLCHAIN/clang polyglot.c -lpolyglot-mock -o polyglot

And run it, using the --jvm argument to run GraalVM in the JVM mode, since we are using a Java type:

  $ lli --jvm polyglot
  24

Embedding in Java

GraalVM can also be used to embed LLVM bitcode in Java host programs.

For example, let us write a Java class Polyglot.java that embeds GraalVM to run the previous example:

  import java.io.*;
  import org.graalvm.polyglot.*;

  class Polyglot {
      public static void main(String[] args) throws IOException {
          Context polyglot = Context.newBuilder().allowAllAccess(true).build();
          File file = new File("polyglot");
          Source source = Source.newBuilder("llvm", file).build();
          Value cpart = polyglot.eval(source);
          cpart.execute();
      }
  }

Compiling and running it:

  $ javac Polyglot.java
  $ java Polyglot
  24

See the Embedding documentation for more information.

Source-Level Debugging

You can use GraalVM’s Debugger to debug the program you compiled to LLVM bitcode. To use this feature, make sure to compile your program with debug information by specifying the -g argument when compiling with clang (the LLVM toolchain shipped with GraalVM will automatically enable debug info). This gives you the ability to step through the program’s source code and set breakpoints in it. To also be able to inspect the local and global variables of your program, you may pass --llvm.enableLVI=true as an argument to lli. This option is not enabled by default as it can significantly decrease your program’s run-time performance.
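For illustration, a minimal sketch of such a session could look like the following (hello-debug.c is a hypothetical file name; only the flags mentioned above are used):

  $ $LLVM_TOOLCHAIN/clang -g hello-debug.c -o hello-debug
  $ lli --llvm.enableLVI=true hello-debug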

LLVM Compatibility

GraalVM works with LLVM bitcode versions 3.8 to 9.0. We recommend using the version of LLVM that is shipped with GraalVM.

Optimization Flags

In contrast to the static compilation model of LLVM languages, in GraalVM the machine code is not directly produced from the LLVM bitcode, but there is an additional dynamic compilation step by the GraalVM compiler.

In this scenario, first the LLVM frontend (e.g. clang) does optimizations on the bitcode level, and then the GraalVM compiler does its own optimizations on top of that during dynamic compilation. Some optimizations are better when done ahead-of-time on the bitcode, while other optimizations are better left for the dynamic compilation of the GraalVM compiler, when profiling information is available.

The LLVM toolchain that is shipped with GraalVM automatically selects the recommended flags by default.

In principle, all optimization levels should work, but for best results we suggest compiling the bitcode with optimization level -O1.

Cross-language interoperability will only work when the bitcode is compiled with debug information enabled (-g), and the -mem2reg optimization is performed on the bitcode (compiled with at least -O1, or explicitly using the opt tool).
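As a rough sketch (example.c is a hypothetical file name, and whether opt is included in the bundled toolchain directory may vary), these requirements can be met either by compiling with -g and at least -O1, or by running -mem2reg explicitly on a bitcode file:

  $ $LLVM_TOOLCHAIN/clang -g -O1 example.c -o example

  # alternatively, produce bitcode at -O0 and run mem2reg explicitly
  $ $LLVM_TOOLCHAIN/clang -g -O0 -c -emit-llvm -o example.bc example.c
  $ opt -mem2reg example.bc -o example-opt.bc
  $ lli example-opt.bc

The first variant follows the -O1 recommendation above; the second is only needed if you want to keep the bitcode itself unoptimized.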

LLI Command Options

--print-toolchain-path: print the path of the LLVM toolchain bundled with GraalVM. This directory contains compilers and tools that can be used to compile C/C++ programs to LLVM bitcode for execution with GraalVM.

-L <path> / --llvm.libraryPath=<path>: a list of paths where GraalVM will search for library dependencies. Paths are delimited by :.

--lib <libs> / --llvm.libraries=<libs>: a list of libraries to load. The list can contain precompiled native libraries (*.so/*.dylib) and bitcode libraries (*.bc). Files with a relative path are looked up relative to llvm.libraryPath. Entries are delimited by :.
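For example (the paths and file names here are purely illustrative), a bitcode program depending on libraries in a non-standard location could be started like this:

  $ lli -L /opt/mylibs --lib libhelper.so:helper.bc program

The relative entry helper.bc is resolved against the -L path, as described above.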

--llvm.enableLVI=<true/false>: enable source-level symbol inspection in the debugger. This defaults to false as it can decrease run-time performance.

--llvm.managed: enable a managed execution mode for LLVM IR code, which means memory allocations from LLVM bitcode are done on the managed heap. Managed execution is described in more detail in the section on GraalVM EE below.

--version: print the version and exit.

--version:graalvm: print GraalVM version information and exit.

Expert and Diagnostic Options

Use --help and --help:<topic> to get a full list of options.

Limitations and Differences to Native Execution

LLVM code interpreted or compiled with the default configuration of GraalVM Community or Enterprise editions will not have the same characteristics as the same code interpreted or compiled in a managed environment, enabled with the --llvm.managed option on top of GraalVM Enterprise. The behavior of the lli interpreter tool used to directly execute programs in LLVM bitcode format differs between native and managed modes. The difference lies in safety guarantees and cross-language interoperability.

In the default configuration, cross-language interoperability requires bitcode to be compiled with debug information enabled (-g), and the -mem2reg optimization to be performed on the bitcode (compiled with at least -O1, or explicitly using the opt tool). These requirements can be overcome in the managed environment of GraalVM EE, which allows native code to participate in polyglot programs, passing and receiving data from any other supported language. In terms of security, executing native code in a managed environment adds safety features such as catching illegal pointer accesses and out-of-bounds array accesses.

There are certain limitations and differences to native execution depending on the GraalVM edition. They are described below for each edition.

Limitations and Differences to Native Execution on Top of GraalVM CE

The LLVM interpreter in the GraalVM Community Edition environment allows executing LLVM bitcode within a multilingual context. Even though it aspires to be a generic LLVM runtime, there are certain fundamental and/or implementational limitations that users need to be aware of.

The following restrictions and differences to native execution (i.e., bitcode compiled down to native code) exist when LLVM bitcode is executed with the LLVM interpreter on top of GraalVM CE:

  • The GraalVM LLVM interpreter assumes that bitcode was generated to target the x86_64 architecture.
  • Bitcode should be the result of compiling C/C++ code using clang version 7; other compilers/languages, e.g., Rust, might have specific requirements that are not supported.
  • Unsupported functionality – it is not possible to call any of the following functions:
    • clone()
    • fork()
    • vfork()
    • setjmp(), sigsetjmp(), longjmp(), siglongjmp()
    • Functions of the exec() function family
    • Pthread functions
    • Code running in the LLVM interpreter needs to be aware that a JVM is running in the same process, so many syscalls such as fork, brk, sbrk, futex, mmap, rt_sigaction, rt_sigprocmask, etc. might not work as expected or cause the JVM to crash.
    • Calling unsupported syscalls or unsupported functionality (listed above) via native code libraries can cause unexpected side effects and crashes.
  • Thread local variables
    • Thread local variables from bitcode are not compatible with thread local variables from native code.
  • Cannot rely on memory layout
    • Pointers to thread local variables are not stored in specific locations, e.g., the FS segment.
    • The order of globals in memory might be different, consequently no assumptions about their relative locations can be made.
    • Stack frames cannot be inspected or modified using pointer arithmetic (overwrite return address, etc.).
    • Walking the stack is only possible using the Truffle APIs.
    • There is a strict separation between code and data, so that reads, writes and pointer arithmetic on function pointers or pointers to code will lead to undefined behavior.
  • Signal handlers
    • Installing signal handlers is not supported.
  • The stack
    • The default stack size is not set by the operating system but by the option --llvm.stackSize.
  • Dynamic linking
    • Interacting with the LLVM bitcode dynamic linker is not supported, e.g., dlsym/dlopen can only be used for native libraries.
    • The dynamic linking order is undefined if native libraries and LLVM bitcode libraries are mixed.
    • Native libraries cannot import symbols from bitcode libraries.
  • x86_64 inline assembly is not supported.
  • Undefined behavior according to C spec
    • While most C compilers map undefined behavior to CPU semantics, the GraalVM LLVM interpreter might map some of this undefined behavior to Java or other semantics. Examples include: signed integer overflow (mapped to the Java semantics of an arithmetic overflow), integer division by zero (will throw an ArithmeticException), oversized shift amounts (mapped to the Java behavior).
  • Floating point arithmetic
    • Some floating point operations and math functions will use more precise operations and cast the result to a lower precision (instead of performing the operation at a lower precision).
    • Only the rounding mode FE_TONEAREST is supported.
    • Floating point exceptions are not supported.
  • NFI limitations (calling real native functions)
    • Structs, complex numbers, or fp80 values are not supported as by-value arguments or by-value return values.
    • The same limitation applies to calls back from native code into interpreted LLVM bitcode.
  • Limitations of polyglot interoperability (working with values from other GraalVM languages)
    • Foreign objects cannot be stored in native memory locations. Native memory locations include:
      • globals (except the specific case of a global holding exactly one pointer value);
      • malloc’ed memory (including c++ new, etc.);
      • stack (e.g. escaping automatic variables).
  • LLVM instruction set support (based on LLVM 7.0.1)
    • A set of rarely-used bitcode instructions are not available (va_arg, catchpad, cleanuppad, catchswitch, catchret, cleanupret, fneg, callbr).
    • The instructions with limited support:
      • atomicrmw (only supports sub, add, and, nand, or, xor, xchg);
      • extractvalue and insertvalue (only supports a single indexing operand);
      • cast (missing support for certain rarely-used kinds);
      • atomic ordering and address space attributes of load and store instructions are ignored.
    • Values – assembly constants are not supported (module-level assembly and any assembly strings).
    • Types:
      • There is no support for 128-bit floating point types (fp128 and ppc_fp128), x86_mmx, half-precision floats (fp16) and any vectors of unsupported primitive types.
      • The support for fp80 is limited (not all intrinsics are supported for fp80, some intrinsics or instructions might silently fall back to fp64).
  • A number of rarely-used or experimental intrinsics based on LLVM 7.0.1 are not supported because of implementational limitations or because they are out of scope:
    • experimental intrinsics: llvm.experimental.*, llvm.launder.invariant.group, llvm.strip.invariant.group;
    • trampoline intrinsics: llvm.init.trampoline, llvm.adjust.trampoline;
    • general intrinsics: llvm.var.annotation, llvm.ptr.annotation, llvm.annotation, llvm.codeview.annotation, llvm.trap, llvm.debugtrap, llvm.stackprotector, llvm.stackguard, llvm.ssa_copy, llvm.type.test, llvm.type.checked.load, llvm.load.relative, llvm.sideeffect;
    • specialised arithmetic intrinsics: llvm.canonicalize, llvm.fmuladd;
    • standard c library intrinsics: llvm.fma, llvm.trunc, llvm.nearbyint, llvm.round;
    • code generator intrinsics: llvm.returnaddress, llvm.addressofreturnaddress, llvm.frameaddress, llvm.localescape, llvm.localrecover, llvm.read_register, llvm.write_register, llvm.stacksave, llvm.stackrestore, llvm.get.dynamic.area.offset, llvm.prefetch, llvm.pcmarker, llvm.readcyclecounter, llvm.clear_cache, llvm.instrprof*, llvm.thread.pointer;
    • exact gc intrinsics: llvm.gcroot, llvm.gcread, llvm.gcwrite;
    • element wise atomic memory intrinsics: llvm.*.element.unordered.atomic;
    • masked vector intrinsics: llvm.masked.*;
    • bit manipulation intrinsics: llvm.bitreverse, llvm.fshl, llvm.fshr.

Limitations and Differences to Managed Execution on Top of GraalVM EE

A managed execution mode for LLVM intermediate representation code is a GraalVM Enterprise Edition feature and can be enabled with the --llvm.managed command line option. In managed mode, GraalVM LLVM prevents access to unmanaged memory and uncontrolled calls to native code and operating system functionality. Allocations are performed on the managed Java heap, and accesses to the surrounding system are routed through proper Truffle API and Java API calls.
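As a minimal sketch, managed mode is selected purely via the launcher flag; here program stands for a hypothetical bitcode executable that already satisfies the platform requirements listed below:

  $ lli --llvm.managed program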

All the restrictions from the default native LLVM execution on GraalVM apply to the managed execution, but with the following differences/changes:

  • Platform independent
    • Bitcode must be compiled for a generic linux_x86_64 target, using the provided musl libc library, on all platforms, regardless of the actual underlying operating system.
  • C++
    • C++ is currently not supported in managed mode.
  • Native memory and code
    • Calls to native functions are not possible, thus only the functionality provided in the supplied musl libc and by the GraalVM LLVM interface is available.
    • Loading native libraries is not possible.
    • Native memory access is not possible.
  • System calls
    • The only system calls available, with limited support, are read, readv, write, writev, open, close, dup, dup2, lseek, stat, fstat, lstat, chmod, fchmod, ioctl, fcntl, unlink, rmdir, utimensat, uname, set_tid_address, gettid, getppid, getpid, getcwd, exit, exit_group, clock_gettime, arch_prctl.
    • The functionality is limited to common terminal IO, process control and file system operations.
    • Some syscalls are implemented as a no-op and/or return errors warning that they are not available, e.g. chown, lchown, fchown, brk, rt_sigaction, sigprocmask, futex.
  • Musl libc
    • The musl libc library behaves differently than the more common glibc in some cases.
  • The stack
    • Accessing the stack pointer directly is not possible.
    • The stack is not contiguous, and accessing memory that is out of the bounds of a stack allocation (e.g., accessing a neighboring stack value using pointer arithmetic) is not possible.
  • Pointers into the managed heap
    • Reading parts of a managed pointer is not possible.
    • Overwriting parts of a managed pointer (e.g., using bits for pointer tagging) and subsequently dereferencing the destroyed managed pointer is not possible.
    • Undefined behavior in C pointer arithmetic applies.
    • Complex pointer arithmetic (e.g., multiplying pointers) can convert a managed pointer to an i64 value; the i64 value can be used in pointer comparisons but cannot be dereferenced.
  • Floating point arithmetic
    • 80-bit floating points only use 64-bit floating point precision.
  • Dynamic linking
    • The interaction with the LLVM bitcode dynamic linker is not supported, e.g., dlsym/dlopen cannot be used. Consequently, native code cannot be loaded.