Zebra – The ZeroVM and Swift Playground


The ZeroVM community faces a variety of challenges unique to open source infrastructure projects.  Two of the biggest challenges are providing developers a way to easily experiment with ZeroVM, and finding a way to effectively test the technology at scale.  Other open source communities, such as OpenStack, have faced similar challenges and addressed these issues by creating projects like TryStack.

We are very excited to announce that the ZeroVM team has built a playground environment called Zebra that will serve a similar purpose for the ZeroVM community.  Zebra is a ZeroCloud deployment; that is, an OpenStack Swift cluster with the ZeroCloud middleware installed on the proxy nodes and ZeroVM installed on the storage nodes.  But before we get into more detail about Zebra, let’s dig into some of the key reasons we decided to build it in the first place.

Developer Enablement

Zebra is the perfect platform for developers to get exposed to the ZeroVM technology without having to go through the trouble of standing up their own deployment.  We realized that there are really two distinct groups of developers who have expressed interest in ZeroVM: those who are interested in tinkering with the core ZeroVM platform, and those who are interested in building applications on top of ZeroVM.  From what we have seen, the second group is much larger than the first.  For these developers, Zebra is an environment where they can deploy and test their ZeroVM applications without having to stand up or manage their own infrastructure.  We believe that lowering the barrier to entry is going to be a key enabler in continuing to grow the ZeroVM community, and Zebra is a great step in that direction.

Testing at Scale

Distinguished Google Fellow Urs Hölzle is quoted as saying “at scale, everything breaks”.  In our experience, this is particularly true when you are building a distributed system.  Prior to building Zebra, the ZeroVM community was limited to running virtual clusters deployed in the public cloud for testing.  While there is a lot of testing and validation to be done in these types of environments, there is no substitute for testing on a physical cluster.  The variability of performance in the public cloud (specifically network and I/O performance) makes testing a technology like ZeroVM there almost impossible.  In addition, without being able to stress a cluster to the point of breaking it, it is very hard to understand where the true limits of the software really lie.  Zebra gives us a chance to do meaningful performance testing and to really understand the limits of the technology.

More About Zebra

So what exactly is Zebra?  As mentioned earlier, Zebra is a ZeroCloud deployment, which means it is an OpenStack Swift cluster with ZeroVM installed.  For those of you not familiar with OpenStack Swift, it is the object storage offering that is part of OpenStack, and it is the technology that powers services such as Rackspace’s Cloud Files product.  Normally with object storage services you are limited to storing and retrieving static objects.  The addition of ZeroVM allows the execution of arbitrary code on your data without extracting it from the storage system.  This combination of technologies creates a converged storage and computing platform that allows you to do everything from searching your objects in-situ to defining and executing complex map-reduce operations without a dedicated cluster.
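To make the "run code on your data in place" idea concrete, here is a minimal sketch of what driving a ZeroCloud cluster looks like from a client.  ZeroCloud jobs are described as JSON documents POSTed to the Swift proxy; the field names below follow the ZeroCloud job-description format as we understand it, and the object paths (`scripts/grep.nexe`, `logs/access.log`) are purely hypothetical examples, so treat this as an illustration rather than a copy-paste recipe.

```python
import json

# A sketch of a ZeroCloud job description: run a program that is itself
# stored in Swift against an object in Swift, on the storage node where
# the data lives, with results streamed back on stdout.
# Paths below are hypothetical; "name", "exec", and "file_list" follow
# the ZeroCloud job format.
job = [
    {
        "name": "search",
        "exec": {
            "path": "swift://~/scripts/grep.nexe",  # program to execute
            "args": "needle",                       # argument passed to it
        },
        "file_list": [
            # the object to search, mapped to the program's stdin
            {"device": "stdin", "path": "swift://~/logs/access.log"},
            # results come back in the HTTP response body
            {"device": "stdout"},
        ],
    }
]

# The job would then be POSTed to the Swift account URL with an
# execute header, e.g. (not executed here):
#
#   requests.post(account_url, data=json.dumps(job),
#                 headers={"X-Auth-Token": token,
#                          "X-Zerovm-Execute": "1.0",
#                          "Content-Type": "application/json"})

print(json.dumps(job, indent=2))
```

The key point is that the request carries only the small job description; the program and the data never leave the storage system, which is what makes in-situ search and map-reduce style fan-out practical.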

In terms of hardware, Zebra consists of four racks of gear located in a datacenter just outside of Washington DC.  Each of these racks is set up as a separate Swift “zone”, and each of these zones has 5 storage nodes.  Each storage node has a 45-disk JBOD, for a grand total of 900 disks in the environment.  The storage nodes are intentionally heterogeneous so we can experiment with different CPU/storage ratios.  Some are quad-processor machines while others are dual-processor; some have faster memory, while others have faster CPUs.  This diversity in configurations will allow us to observe different workloads on different hardware so we can begin to understand where the real bottlenecks are in this type of system.

Next week we will publish a follow-up post where we will go into more detail about how Zebra works and what types of workloads tend to be well suited to it.  In the meantime, if you are interested in experimenting with ZeroVM, please send us a note at zebra-requests@zerovm.org.  Include some information about what you are interested in trying out on Zebra, as we will be sending out invitation codes over the next few weeks.