Numerous language workbenches have been proposed over the past decade to ease the
definition of Domain-Specific Languages (DSLs). Language workbenches enable DSL designers
to specify DSLs using high-level metalanguages, and to automatically generate their
implementation (e.g., parsers, interpreters) and tool support (e.g., editors, debuggers).
However, little attention has been given to the performance of the resulting interpreters.
In many domains where performance is key (e.g., scientific and high-performance computing),
this forces DSL designers either to handcraft ad hoc optimizations in their interpreter implementations
or to lose compatibility with tool support. In this paper, we propose to systematically exploit the
domain-specific information contained in DSL specifications to derive optimized Truffle-based language
interpreters running on GraalVM. These optimizations come at no extra cost to the DSL designer.
They are of course not as effective as handcrafted optimizations, but they require neither extra time nor
expertise from the DSL designer (expertise that industrial DSL designers often lack).
We implement our approach on top of the Eclipse Modeling Framework (EMF) by complementing its
existing compilation chain with Truffle-specific information, which enables GraalVM to perform more
effective just-in-time compilation. A key benefit of our approach is that it leverages existing
DSL specifications and requires no additional information from DSL designers, who remain shielded from
Truffle’s low-level intricacies and from JIT optimizations in general, while staying compatible with tool
support. We evaluate our approach using a representative set of four DSLs and eight conforming programs.
Compared to the standard interpreters generated by EMF running on GraalVM, we observe an average speed-up
of x1.14, ranging from x1.07 to x1.26. Although the benefits vary slightly from one DSL or program to another,
we conclude that our approach yields substantial performance gains while remaining non-intrusive with respect to EMF abstractions.
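
To illustrate the kind of Truffle-specific information mentioned above, the following is a minimal, hypothetical sketch of what a generated interpreter node could look like in Java. It is not taken from the paper; the class names EmfExpressionNode and PlusNode and the integer-only semantics are assumptions made purely for illustration.

import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.Node;

// Hypothetical base class for interpreter nodes derived from an EMF-based DSL specification.
abstract class EmfExpressionNode extends Node {
    // GraalVM partially evaluates and just-in-time compiles this method.
    abstract Object execute(VirtualFrame frame);
}

// Hypothetical node for an addition operation of some DSL.
final class PlusNode extends EmfExpressionNode {
    // @Child marks these fields as sub-nodes of this node's tree,
    // allowing Truffle to inline and rewrite them during execution.
    @Child private EmfExpressionNode left;
    @Child private EmfExpressionNode right;

    PlusNode(EmfExpressionNode left, EmfExpressionNode right) {
        this.left = left;
        this.right = right;
    }

    @Override
    Object execute(VirtualFrame frame) {
        // A real generated node would specialize on operand types;
        // this sketch assumes integer operands for brevity.
        return (int) left.execute(frame) + (int) right.execute(frame);
    }
}

The point of such annotations, as stated above, is to expose additional structure that GraalVM can exploit during just-in-time compilation, without the DSL designer having to write or even see this code.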