Erlang on Xen

Erlang on Xen uses a new highly compatible Erlang VM – codenamed LING – capable of running as a Xen guest OS.

LING VM runs directly on top of the Xen hypervisor. It does not require Linux or any other operating system underneath. This takes away numerous administrative, scalability, and performance issues.

To give you hands-on experience with Erlang on Xen, we created a public Amazon Machine Image (AMI).

The Erlang on Xen Build Service lets you generate a bootable Xen image for your Erlang application. The build service website contains instructions on how to create a sample Xen image in under 5 minutes.

Rediscovering a cloud of the future

Today’s computing clouds, often advertised as elastic, are rather rigid. When compared to the stiffness of iron, they achieve the elasticity of wood. Rubber-like clouds are still on drawing boards.

Let us have a peek at the sketch of the Erlang on Xen cloud. There is a list of statements in the center. The top three say:

  • Smaller cheap-to-create OS-less instances provisioned on demand
  • Reduced cloud stack, sharing infrastructure with user applications
  • System administrators not necessary

There are a few more items on the list. Some are trivial; others, we believe, are too valuable to spoil. So I am clipping the rest, including the one that mentions ‘robotics’.


OS-less instances are what Erlang on Xen is all about. Such instances start so fast that you do not need to have anything pre-started. When your running application wants to use, for example, a message queue, one of the following happens:

  • no message queue started — start it, then use it
  • message queue is available — use it
  • message queue is busy — spawn a copy and use it
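The decision above can be sketched in Erlang. This is a minimal illustration, not the actual Erlang on Xen API: the `cloud:spawn_instance/1` call and the busy threshold are hypothetical, and only the shape of the logic follows the text.

```erlang
%% Sketch of on-demand provisioning. cloud:spawn_instance/1 is a
%% hypothetical call that boots a fresh OS-less instance and returns
%% its pid; the busy check below is equally illustrative.
-module(mq_client).
-export([get_queue/0]).

get_queue() ->
    case global:whereis_name(message_queue) of
        undefined ->
            %% no message queue started - start it, then use it
            cloud:spawn_instance(message_queue);
        Pid ->
            case is_busy(Pid) of
                false -> Pid;   %% message queue is available - use it
                true  ->        %% message queue is busy - spawn a copy
                    cloud:spawn_instance(message_queue)
            end
    end.

%% Hypothetical load check: treat a long mailbox as busy.
is_busy(Pid) ->
    {message_queue_len, Len} = process_info(Pid, message_queue_len),
    Len > 1000.
```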

Note that instances are spawned only by other instances, just as Unix processes are forked from existing processes – on an as-needed basis.

The startling outcome of on-demand provisioning is that an application that does no useful work consumes no resources. Ten physical servers may now host a million client applications. A single Facebook-scale infrastructure may host a Facebook-like application for each human on Earth.


The bottom of the drawing board is all about how to make clouds a welcoming home for a database. Databases need a finer grain of control over their instances. First, a database may request that two of its instances never share a single physical node. These instances may contain replicas of an in-memory database, and hosting them side by side negates their purpose. Second, a database may stumble upon a query that requires scanning almost all of its data scattered over several physical disks. The best strategy for such a query is to spawn instances on the nodes that have these disks attached and skim the data locally, without shoveling everything through the network.


Virtualization features prominently throughout our vision. The same goes for OpenFlow-aware network switches. Everything else is taken with a grain of salt – is it there to replicate the homely computing world of the ’90s, or does it truly help to weave the fabric of the future cloud?


On the left of the drawing board is a mockup of the cloud’s GUI. Frankly speaking, it resembles a Visual Studio-style IDE more than anything else. The bulk of it is about editing source code. It also allows selection of the services/components the cloud application needs. The trick is that all these services are ephemeral. None of them exists yet; they get created when first used.

The dark machinery behind the IDE bakes instance images and deploys them to the cloud the moment the user clicks the ‘Run’ button. The running application can be paused, variable values inspected, breakpoints set. All the usual debugging stuff is possible.

The remarkable observation is that there is no separate ‘administrator’ GUI, and no mention of Chef or Puppet either. Instances are provisioned and configured by the application code. Monitoring is done by a logging/monitoring component added to the application. What other tasks justify a separate interface for an administrator? While you are inventing such a task, we will move on without one.

On runtime code compilation

It seems every modern Erlang package wants to compile code at runtime and load it dynamically. A good example is the lager application that recompiles modules when the logging level changes.

For the time being, we discouraged the practice in Erlang on Xen. The tiny performance benefit of a recompiled logging function comes at the expense of loading the multiple heavyweight modules that comprise the compiler. We have gone to great lengths to avoid importing the compiler at runtime. For instance, when porting ejabberd to Erlang on Xen, we refactored its logging facility not to use dynamic compilation.
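To see why this pulls in so much machinery, here is what compiling a module from source at runtime looks like in standard Erlang/OTP. Every step below drags in parts of the compiler and parser applications; the module name and source string are purely illustrative.

```erlang
%% Compile and load an Erlang module from a source string at runtime,
%% using only standard OTP calls: erl_scan, erl_parse, compile, code.
-module(hot_compile).
-export([load_from_source/1]).

%% Src is a string containing a complete module, e.g.
%% "-module(greeter). -export([hi/0]). hi() -> hello."
load_from_source(Src) ->
    {ok, Tokens, _End} = erl_scan:string(Src),
    Forms = parse_forms(Tokens),
    {ok, Mod, Bin} = compile:forms(Forms, [binary]),
    {module, Mod} = code:load_binary(Mod, "nofile", Bin),
    Mod.

%% Split the token stream at each 'dot' token and parse form by form.
parse_forms(Tokens) -> parse_forms(Tokens, [], []).

parse_forms([], [], Acc) ->
    lists:reverse(Acc);
parse_forms([{dot, _} = Dot | Rest], Cur, Acc) ->
    {ok, Form} = erl_parse:parse_form(lists:reverse([Dot | Cur])),
    parse_forms(Rest, [], [Form | Acc]);
parse_forms([Tok | Rest], Cur, Acc) ->
    parse_forms(Rest, [Tok | Cur], Acc).
```

After `hot_compile:load_from_source(...)` returns, the new module is callable like any other – which is exactly the convenience that tempts packages like lager into shipping the compiler into production.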

There was another issue that hindered compilation of code at runtime. LING uses a different bytecode format. The LING bytecode found in .ling files is similar to the ‘true’ BEAM bytecode documented here. .beam files contain a ‘generic’ version of the bytecode that requires a (complex) transformation step during code loading. Thus ‘c(module)’ typed in the Erlang on Xen shell used to return a not_a_beam error. It was still possible to load the compiled code, but it required a remote call to the build service to perform the .beam-to-.ling transformation.

We decided to add the full runtime compilation/loading capability to Erlang on Xen. Now erlang:load_module() checks the magic number inside the binary, and if it detects .beam code, it performs the transformation automatically. This increases the startup latency somewhat. A new command-line option -nobeam removes the capability and the related startup latency hit.
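The magic-number check can be sketched as a binary pattern match. A compiled .beam file is an IFF container: the bytes "FOR1", a 32-bit size, then the "BEAM" tag. The .ling branch below is a placeholder – we are not showing the actual .ling magic value here.

```erlang
%% Sketch of distinguishing bytecode formats by magic number.
%% "FOR1"/"BEAM" is the real .beam container header; the .ling
%% clause is illustrative only.
-module(code_format).
-export([detect/1]).

detect(<<"FOR1", _Size:32, "BEAM", _/binary>>) -> beam;
detect(<<"LING", _/binary>>)                   -> ling;  %% placeholder magic
detect(_Other)                                 -> unknown.
```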

A glimpse of a truly elastic cloud

We see that the migration of a large part of the existing IT infrastructure to clouds is almost finished. What’s next?

From our point of view, the next wave of cloud services will use newly introduced (not inherited) features of modern platforms. In particular, resource management in datacenters will start to avoid multi-tenancy and reduce the number of pre-started instances.

To make this real, new instances must be started, and resources released, as fast as possible. A small demo we prepared – called Zerg – shows that OS-less technologies such as Erlang on Xen are a step in the right direction.

Upon receiving an HTTP request, the demo spawns a new Xen domain with the LING VM and a web application written in Erlang. After serving a single request, the domain simply shuts itself down and frees all its resources. The whole process takes 1.5–2 seconds.

The demo uses a libvirt connector library called verx – the excellent work of Michael Santos. Our special thanks go to Michael. The library let us launch 25k+ demo instances flawlessly.

libvirt itself did not behave like software stolen from the future. Its all-encompassing API to virtualization capabilities on average took 3x more time than the total of starting Xen, booting the Erlang VM, and running the web application, including all preparatory and configuration phases. Weird, but true.

Our experiments show that libvirt is not prepared to spawn instances in parallel. The limitation seems to be introduced by libvirt itself, not by the underlying hypervisor. We have to dig deeper down the stack to spawn instances faster.

Surely, the demo itself has little practical value. Yet it gives a glimpse of tomorrow: think of incremental dynamic map/reduce that can survive any combinatorial explosion, or scalable web servers that can absorb the harshest spikes of load. Application design will have to be radically different too, to embrace such rapid resource allocation/deallocation.