Changes between Version 5 and Version 6 of LimulusFAQ
Timestamp: 12/23/11 10:08:54
[[PageOutline(1-2,Contents,pullout)]]
- = Limulus FAQ (Update August 1, 2011) =
+ = Limulus FAQ (Update December 22, 2011) =

== General/Audience/Market ==
…
'''Can I do real HPC work on such a system?'''
Yes. Considering that surveys have shown that around 40% of HPC users use less than 16 cores (over 50% use less than 32 cores), it should be a very usable system.

'''How fast is it?'''
- In 2008 we achieved 53.44 GFLOPS with 8 cores (Intel Core2). At the time this put the price/performance at $39.15/GFLOP (double precision). We are currently testing Sandy Bridge processors and expect continued impressive results. Note, benchmarking results in HPC are very application specific. While a GPU might match the performance of a small cluster for some applications, it is not a general purpose computing device and therefore is not as flexible as a cluster. Take a look at [wiki:NorbertLimulus The Norbert Limulus Cluster] for more information on our prototype system.
+ The latest Limulus version achieved 200 GFLOPS (58% of peak). Note, benchmarking results in HPC are very application specific. While a GPU might match the performance of a small cluster for some applications, it is not a general purpose computing device and therefore is not as flexible as a cluster. Take a look at [wiki:NorbertLimulus The Norbert Limulus Cluster] for more information on our prototype system.
…
'''Can I buy one?'''
- Very soon (Fall 2011)
+ Yes

'''Can I build one myself?'''
…
'''Why don't you pack a bunch of 12-core processors into the case?'''
- Because Limulus is designed with a heat/power/performance envelope. An HPC server can pack in cores because in a data center, there is dedicated power, cooling, and a tolerance for fan noise. Have you ever run an HPC server (or two) next to your desk?
+ Because Limulus is designed with a heat/power/performance/noise envelope. An HPC server can pack in cores because in a data center, there is dedicated power, cooling, and a tolerance for fan noise. Have you ever run an HPC server (or two) next to your desk?

'''Do you use dual socket motherboards?'''
- No. Limulus is designed to use single socket Micro ATX motherboards. These offer a balance of expansion (RAM and PCIe slots), power, and size.
+ No. Limulus is designed to use single socket Micro ATX motherboards. These offer a balance of expansion (RAM and PCIe slots), power, and size. We are looking at Mini-ITX boards as well.

'''Can I connect multiple cases?'''
…
'''Can the node motherboards have a hard drive?'''
- Yes. There are four optional removable drive bays. Each can be connected directly to a node or configured as a RAID array for the main node. There are also two SSD bays and a thin DVD bay.
+ Yes. There are four optional removable drive bays. Each can be connected directly to a node or configured as a RAID array for the main node. There is also an SSD bay and a thin DVD bay.

'''Can I add expansion cards to the node motherboards?'''
…
'''Does it create a lot of heat?'''
Like all electronic devices, it generates heat. Unlike a high-end server, it would make a poor space heater. We use 65 Watt processors for the nodes and a 95 Watt processor for the main node.

'''How loud is it?'''
…
'''What software does it run?'''
- Linux of course. We will be providing a basic [wiki:LimulusSoftware cluster software] stack. All the base packages are open source and built on top of Scientific Linux 6.0. We will be making this software available very soon (both RPMS and SRPMS). The goal is a turn-key, ready-to-use system.
+ Linux of course. We will be providing a basic [wiki:LimulusSoftware cluster software] stack. All the base packages are open source and built on top of Scientific Linux 6.X. We will be making this software available very soon (both RPMS and SRPMS). The goal is a turn-key, ready-to-use system.

'''Will updated software be available?'''
…
'''Can I run commercial software?'''
- If it can run on a large Linux cluster, it can probably run on a Limulus system. Currently, we use Scientific Linux 6.0, which is a community rebuild of Red Hat 6.0. Of course, it all depends on the software and vendor policies.
+ If it can run on a large Linux cluster, it can probably run on a Limulus system. Currently, we use Scientific Linux 6.X, which is a community rebuild of Red Hat 6.X. Of course, it all depends on the software and vendor policies.

'''Can I run Windows on it?'''
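The benchmark figures quoted in the FAQ can be sanity-checked with simple arithmetic. The sketch below (illustrative only, not part of the FAQ) derives the theoretical peak implied by "200 GFLOPS (58% of peak)" and the total system price implied by the 2008 figures of 53.44 GFLOPS at $39.15/GFLOP:

```python
# Sanity-check the FAQ's benchmark figures (numbers taken from the text above).

# v6 figure: 200 GFLOPS measured, stated as 58% of theoretical peak.
measured_gflops = 200.0
efficiency = 0.58
implied_peak = measured_gflops / efficiency  # peak implied by the quoted efficiency
print(f"Implied peak: {implied_peak:.1f} GFLOPS")   # ~344.8 GFLOPS

# v5 figure (2008): 53.44 GFLOPS at $39.15/GFLOP (double precision).
gflops_2008 = 53.44
price_per_gflop = 39.15
implied_cost = gflops_2008 * price_per_gflop  # implied total system price
print(f"Implied 2008 system cost: ${implied_cost:.2f}")  # ~$2092
```

Note that HPL-style efficiency (measured vs. theoretical peak) is very application specific, as the FAQ itself points out; these derived numbers only check that the quoted figures are internally consistent.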