Limulus FAQ (Update March 21, 2012)


Is this a real cluster? Yes, it works just like a real cluster. The basic system has four motherboards, which are cluster nodes, in a standard off-the-shelf case with a single power supply. The main motherboard is always powered and functions just like a workstation. The three compute nodes can be powered on when needed. The software is identical to that running on large clusters.

Can I do real HPC work on such a system? Yes. Surveys have shown that around 40% of HPC users run on fewer than 16 cores (and over 50% on fewer than 32 cores), so it should be a very usable system.

How fast is it? The latest Limulus version achieved 200 GFLOPS (58% of peak). Note that benchmarking results in HPC are very application specific. While a GPU might match the performance of a small cluster for some applications, it is not a general-purpose computing device and therefore is not as flexible as a cluster. Take a look at The Norbert Limulus Cluster and The Commercial Limulus Cluster for more details.
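The 58% figure can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch; the 2.7 GHz clock and 8 double-precision FLOPs/cycle (AVX) are assumptions for a 16-core Sandy Bridge-class system, not official Limulus specs:

```shell
#!/bin/sh
# Theoretical peak = cores x clock (GHz) x FLOPs per cycle.
# Assumed figures: 16 cores, 2.7 GHz, 8 DP FLOPs/cycle (AVX).
CORES=16
GHZ=2.7
FLOPS_PER_CYCLE=8
PEAK=$(awk -v c=$CORES -v g=$GHZ -v f=$FLOPS_PER_CYCLE 'BEGIN { printf "%.1f", c*g*f }')
EFF=$(awk -v p=$PEAK 'BEGIN { printf "%.0f", 200/p*100 }')
echo "theoretical peak: $PEAK GFLOPS"
echo "HPL efficiency at 200 GFLOPS: $EFF%"
```

With these assumed numbers the peak works out to about 345.6 GFLOPS, which puts the measured 200 GFLOPS right at the quoted 58%.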

What Does Limulus mean? Limulus is an acronym for LInux MULti-core Unified Supercomputer.

What is the difference between a Limulus and 6/8/12/16 core workstation? In terms of core count, there is no difference. In terms of performance there can be a big difference. A multi-core SMP system (such as a dual socket workstation or server motherboard) can provide many cores, but depending on the workload, you may not be able to get effective use from all the cores due to memory contention (See Benchmarking A Multi-Core Processor For HPC and Exercising Multi-core). In a cluster design, like Limulus, each node has one processor socket with exclusive access to the local memory. In addition, unused nodes can be powered off.

What is the intended market? There are several areas where a personal cluster can be useful (i.e., where you own the reset switch):

  • System administrators - a cluster sandbox to try new things, test software packages
  • Software developers - a private software development environment
  • Academic projects - instructional hardware, student projects, learn to run real HPC codes
  • Cloud staging - stage and develop cloud HPC software before launching it to the cloud
  • Small scale production work - test ideas, run applications under your control
  • Small and medium business HPC - explore how HPC can help manufacturing without a huge investment
  • Big data/Hadoop - try and test big data projects (up to 15 TB) without the overhead of a full-size cluster

Obtaining A Limulus

Can I buy one? Check out Basement Supercomputing.

Can I build one myself? Yes, you have two options: use individual cases (less functionality, but it still works) or use the same case we use and one of our soon-to-be-announced kits (Q1 2012). Note, we will provide kits if there is enough interest, please let us know.

What does it cost? It depends on what you put in it! It is possible to build/buy a rather hefty personal HPC system for under $5000.

Why is the price point so low? The price is low because Limulus is designed to use as much commodity (high-volume) hardware as possible. While this level of hardware (desktop components) has lower performance than its large server counterparts, the base technology is identical. Thus, Limulus maximizes price/performance/power for personal HPC usage.

How do I keep informed on the Limulus Project? Submit Questions or join the Limulus Announce List. Or, join the Twitter feed (see above).

Hardware

How many cores can you fit in one case? Currently, a single Limulus system can provide at least 16 cores. As technology progresses we expect that number to increase.

What kind of processors do you use? Our current designs include processors from AMD and Intel (Sandy Bridge). For the nodes we use low-power (65 Watt) quad-core x86_64 processors.

What kind of motherboards do you use? We can use almost any standard Micro-ATX motherboard. However, we prefer to test them before we recommend any specific motherboard. There are geometry and component issues (i.e. Gigabit Ethernet chipset) that may make some boards more desirable than others.

How do you fit those extra motherboards in a standard case? We designed some custom parts to hold the motherboards, switches, etc. We tried to keep the cost of the custom parts as low as possible and at the same time not require any case modifications or special tools. We also took the time to create a clean design to keep the cabling neat. With our parts kit, a Limulus can be built with a screwdriver, just like any other home-built system.

Can I attach a keyboard and monitor to the node motherboards? Yes. There is a front panel that provides video, USB, and a power switch for each motherboard.

Why don't you pack a bunch of 12-core processors into the case? Because Limulus is designed with a heat/power/performance/noise envelope. An HPC server can pack in cores because in a data center, there is dedicated power, cooling, and a tolerance for fan noise. Have you ever run an HPC server (or two) next to your desk?

Why don't you pack a bunch of GPU processors into the case? Using GPUs is a great solution if it fits your problem, but GPUs require more power, and more heat must be removed from the case. Thus, these devices (as currently used in HPC) don't fit into the Limulus heat/power/performance/noise envelope. The AMD Fusion devices, which integrate a SIMD unit (GP-GPU) into the processor, are an interesting approach that we will be investigating.

Do you use dual socket motherboards? No. Limulus is designed to use single socket Micro-ATX motherboards. These offer a balance of expansion (RAM and PCIe slots), power, and size. We are looking at Mini-ITX boards as well.

Can I connect multiple cases? Yes. A second Limulus case can be connected to the first one with a single cable.

How are the nodes connected? Gigabit Ethernet (GigE). There is room for two 8 port switches in the case, thus we can have two GigE networks to all nodes.
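With two switches, one natural arrangement is to split compute (MPI) traffic from NFS/admin traffic. A sketch in the Scientific Linux 6 ifcfg style; the interface names and addresses below are illustrative assumptions, not the shipped configuration:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- compute/MPI network (addresses assumed)
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.0.0.1
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth1 -- NFS/admin network (addresses assumed)
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.0.1.1
NETMASK=255.255.255.0
ONBOOT=yes
```

Keeping storage traffic off the MPI network helps latency-sensitive parallel jobs.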

Can the node motherboards have a hard drive? UPDATE: The latest release of the software detects if there are any attached drives on the nodes when shutting down. If drives are detected, they will be placed in standby mode using hdparm. An orderly shutdown/reboot of the head node will cause an orderly shutdown of the worker nodes as well, placing any attached drives in standby mode.

OLD RESPONSE: Yes. There are four optional removable drive bays. Each can be connected directly to a node or configured as a RAID array for the main node. There are also an SSD bay and a thin DVD bay. If you choose to connect the hard drives to each of the nodes, it is advisable to put the drive into standby mode by issuing a

hdparm -Y /dev/hda

just before the node is turned off. Since drives stay powered up (and spinning) when a node is powered down, this step will ensure the drive is placed in low-power mode until the node is rebooted.
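The standby step above can be wrapped in a small shutdown hook. A minimal sketch; the /dev/sdX device names and the DRY_RUN guard are illustrative additions here, not part of the actual Limulus software:

```shell
#!/bin/sh
# Put the given drives into low-power mode before a node powers off.
# DRY_RUN=1 (the default here) prints the commands instead of running
# them, so the sketch is safe to try on any machine.
DRY_RUN=${DRY_RUN:-1}

standby_drives() {
    for dev in "$@"; do
        if [ "$DRY_RUN" = 1 ]; then
            echo "would run: hdparm -Y $dev"   # dry run: print, don't execute
        else
            hdparm -Y "$dev"                   # spin the drive down for real
        fi
    done
}

# Called from the node's shutdown script, e.g.:
standby_drives /dev/sda
```

Set DRY_RUN=0 (as root) to actually issue the hdparm commands.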

Can I add expansion cards to the node motherboards? Yes, there is a bracket for adding one low-profile PCIe card.

Can I add expansion cards to the main motherboard? Yes. All slots are available; however, very long cards (i.e. huge video cards) will not fit.

Can I add video cards to the node motherboards? No, that would create too much heat for the current fans to remove.

How big is the power supply? 850 Watts (this may increase depending on what is in the case).

Does it have ECC memory? The current hardware does not support ECC (Error Correcting) memory. The need for ECC depends on your workload. In our experience and testing, we have found excellent results with quality memory (not the bargain-priced variety). We have never had a problem with non-ECC memory in our personal cluster systems. We have run (self-checking) codes for days without any issues. Of course, if you are planning to run a 3-week parallel job with no checkpointing, you may want to consider ECC memory.

Can it use IB or 10 GigE? Potentially; however, for many applications we do not see the need for the additional cost.

Power and Noise

How much power does a Limulus use? Running HPL (16 cores) we measure between 500-600 Watts. Of course it also depends on what you put in it (i.e. disks, video card, etc.)
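Combining this with the 200 GFLOPS HPL result above gives a rough energy-efficiency figure. The 550 W midpoint of the measured range is an assumption for the arithmetic:

```shell
#!/bin/sh
# Energy efficiency: 200 GFLOPS (HPL) / 550 W (assumed midpoint of
# the measured 500-600 W draw).
GFW=$(awk 'BEGIN { printf "%.2f", 200/550 }')
echo "$GFW GFLOPS/Watt"
```

That works out to roughly 0.36 GFLOPS/Watt under full HPL load; powering down unused nodes improves the idle picture considerably.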

How many standard wall plugs does it use? One.

Can I manually turn nodes off and on? Yes.

Can I automatically turn nodes off and on? Yes, and you can even integrate this into the batch scheduler.
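One common way to script this kind of node power control is Wake-on-LAN to power on and ssh poweroff to power off. A sketch under stated assumptions: the MAC address, hostname, interface name, and the etherwake tool are all illustrative, and the DRY_RUN guard is added here so the sketch is safe to run; the Limulus stack may do this differently:

```shell
#!/bin/sh
# Node power control sketch. DRY_RUN=1 (the default here) prints the
# commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

node_on()  { run etherwake -i eth0 "$1"; }   # $1 = node MAC address (hypothetical)
node_off() { run ssh "$1" poweroff; }        # $1 = node hostname (hypothetical)

# A batch scheduler prolog/epilog could call these as jobs arrive/drain:
node_on  00:11:22:33:44:55
node_off node1
```

Hooking node_on/node_off into the scheduler's prolog and epilog scripts is what lets unused nodes stay powered off until work arrives.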

Does it create a lot of heat? Like all electronic devices, it generates heat. Unlike high-end servers, it would make a poor space heater. We use 65 Watt processors for the nodes and a 95 Watt processor for the main node.

How loud is it? It is very quiet. The use of large fans helps reduce the noise considerably. It can sit next to a desk in an office without any huge impact on the ambient noise environment (i.e. conversations, listening to music, phone calls are fine).

Software

What software does it run? Linux of course. We will be providing a basic cluster software stack. All the base packages are open source and built on top of Scientific Linux 6.X. We will be making this software available very soon (both RPMs and SRPMs). The goal is a turn-key, ready-to-use system.

Will updated software be available? Yes.

Will software support be available? Yes.

Is it the same software that runs on big clusters? Yes.

Will there be open source application software available? Yes.

Can I install my own software? Yes, this is an open source platform; you control your destiny!

Can I run commercial software? If it can run on a large Linux cluster, it can probably run on a Limulus system. Currently, we use Scientific Linux 6.X, which is a community rebuild of Red Hat 6.X. Of course it all depends on the software and vendor policies.

Can I run Windows on it? Probably, but we have not tried.