Supermicro's Twin: Two nodes and 16 cores in 1U
by Johan De Gelas on May 28, 2007 12:01 AM EST - Posted in IT Computing
Analysis
The Supermicro Twin 1U combined with Intel's latest quad core technology offers an amazing amount of processing power in a 1U box. Just over two years ago, four cores in a 1U server was not a common setup; now we get four times as many cores in the same space. Even better, the second node increases power requirements by only 55%, while a second server would probably double the power needed. Considering the very competitive price, we can conclude that the Supermicro 1U Twin is probably the most attractive offer on the market today for those looking for an HPC or rendering farm node.
The situation is different for the other markets that Supermicro targets: "data center and high-availability applications". What makes the 1U Twin so irresistible for the HPC and rendering people results in a few shortcomings for HA applications, such as heavy duty web applications. Although there is little doubt in our mind that Supermicro has used a high-quality, high-efficiency power supply, the fact remains that it is a single point of failure that can take down both nodes. Of course, with decent UPS protection you can take away the number one killer of power supplies: power surges. It must also be said that several studies have shown that failing hardware causes only 10% of total downtime. About 50% of that, or 5% in total, is the result of a failed PSU. With a high quality PSU and a UPS with power surge protection, that percentage will be much lower, so it will depend on your own situation whether this is a risk you are willing to take. More than a third of downtime is caused by software problems, and another third is planned downtime for upgrades and similar tasks. Those are downtimes that the Supermicro Twin with its two nodes is capable of avoiding with software techniques such as NLB (Network Load Balancing) and other forms of clustering.
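The downtime arithmetic above can be sketched in a few lines. This is a minimal illustration using the rough survey figures cited in the text, not new measurements:

```python
# Downtime shares cited above (rough survey figures, fractions of total downtime)
hardware_share = 0.10         # hardware failures: ~10% of all downtime
psu_share_of_hardware = 0.50  # ~half of hardware-caused downtime is the PSU

psu_share_total = hardware_share * psu_share_of_hardware
print(f"PSU failures: {psu_share_total:.0%} of total downtime")
# -> PSU failures: 5% of total downtime
```

In other words, even before adding a redundant PSU, a failed power supply accounts for only a small slice of overall downtime; the two nodes address the much larger software and maintenance slices.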
A lot of the SMBs we are working with are looking to colocate their HA application servers and would love to run two web servers (NLB) and two database servers (clustered) in only 2U. Right now, those servers typically take 4U to 6U, and the Supermicro Twin could reduce those colocation costs considerably.
The biggest shortcoming is one that can probably be easily resolved by Supermicro: the lack of a SAS controller. It is not the higher performance of SAS that makes us say that, but VMware's lack of support for SATA drives and controllers. A few Supermicro Twin 1U servers together with a shared storage solution (FC SAN, iSCSI SAN) could be an ideal platform to virtualize: you could run a lot of virtual nodes on the different physical nodes, which results in consolidation and thus cost reduction, and the physical nodes offer high availability. However, VMware ESX Server does not support SATA controllers very well, so booting from a SAN is then the only option, which increases the complexity of the setup. A SAS controller would allow users to boot from two mirrored disks.
A SAS controller and a redundant power supply would make the Supermicro Twin 1U close to perfect. But let's be fair: the Supermicro Twin 1U is an amazing product for its primary market. It's not every day that we meet a 16 core server which saves you 100W of power and cuts colocation rack space in half... all for a very competitive price. Also, two nodes each with eight cores will remain a very interesting solution for applications such as rendering farms even when the quad core Xeon MP ("Tigerton") and AMD's quad core Opteron ("Barcelona") arrive. The reason is that performance will be competitive and that the price of four socket quad core systems will almost certainly be quite high. The Supermicro Twin 1U is an interesting idea which has materialized in an excellent product.
Advantages:
- Cuts necessary rack space in half, superb computing density
- Very high performance/power ratio, simply superior power supply
- Highest performance/price ratio today
- Excellent track record of Supermicro
Disadvantages:
- No SAS controller (for now?)
- Hard to virtualize with ESX server
- Cold swap PSU
28 Comments
SurJector - Tuesday, June 5, 2007 - link
I've just reread your article. I'm a little bit surprised by the power figures (idle / load):

1 node:   160 / 213
2 nodes:  271 / 330
increase: 111 / 117

There is something wrong: going from idle to load, the second node adds only 6W (5.5W counting efficiency) of power consumption?
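A quick sanity check of the arithmetic behind that 6W observation, using only the numbers from the table above:

```python
# Wall power in watts, copied from the review's measurements
idle = {"1 node": 160, "2 nodes": 271}
load = {"1 node": 213, "2 nodes": 330}

idle_increase = idle["2 nodes"] - idle["1 node"]  # second node's cost at idle
load_increase = load["2 nodes"] - load["1 node"]  # second node's cost at load

# Going from idle to full load, the second node's power swing grows by only:
second_node_swing = load_increase - idle_increase
print(idle_increase, load_increase, second_node_swing)  # -> 111 117 6
```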
Could it be that some power-saving options are not set on the second node (SpeedStep or similar)?
Nice article though; I bet I'd love to have a rack of them for my computing farm. Either that or wait (forever?) for the Barcelona equivalents (they should have better memory throughput).
Super Nade - Saturday, June 2, 2007 - link
Hi, the PSU is built by Lite-On. I owned the PWS-0056 and it was built like a tank. Truly server grade build quality.
Regards,
Super Nade, OCForums.
VooDooAddict - Tuesday, May 29, 2007 - link
Here are the VMware ESX issues I see; they basically compound the problem:
- No local SAS controller (already mentioned).
- No local SAS requires booting from a SAN. This means you will use your only PCIe slot for a SAN hardware HBA, as ESX can't boot from software iSCSI.
- Only dual NICs on board, and with the only expansion slot taken up by the SAN HBA (Fibre Channel or iSCSI), you already have a less than ideal ESX solution. ESX works best with a dedicated VMotion port, a dedicated console port, and at least one dedicated VM port. With this setup you'll be limited to a dedicated VMotion port and a shared console/VM port.
The other issue is of course the non-redundant power supply. Yes, ESX has a High Availability mode where it restarts VMs from downed hardware, but it restarts VMs on other hardware; it doesn't preserve them. You could very easily lose data.
Then probably the biggest issue... support. Most companies dropping the coin on ESX Server are only going to run it on a supported platform. With supported platforms from Dell, HP, and IBM being comparatively priced, and given the above issues, I don't see them winning ANY of the ESX Server crowd with this unit.
I could however see this as a nice setup for the VMware (free) Virtual Server crowd, using it for virtualized Dev and/or QA environments where low cost is a larger factor than production level uptime.
JohanAnandtech - Wednesday, May 30, 2007 - link
Superb feedback. I feel however that you are a bit too strict on the dedicated ports. A dedicated console port seems a bit exaggerated, and as you indicate, a shared console/VMotion port seems acceptable to me.
DeepThought86 - Monday, May 28, 2007 - link
I thought it interesting to note how poor the scaling was on the web server benchmark when going from 1S to 2S 5345 (107 URLs/s to 164). However, the response times "scaled" quite well: going from 307 ms to 815 ms (a 2.65x factor) with only a clock speed difference of 2.33 vs 1.86 (1.25x) is completely unexpected. Since the architecture is the same, how can a 1.25x factor in clock lead to a 2.65x factor in performance? Then I remembered you're varying TWO factors at once, making it impossible to compare the numbers... how dumb is that in benchmark testing??
Honestly, it seems you guys know how to hook up boxes but lack the intelligence to actually select test cases that make sense, not to mention analyse your results in a meaningful way
It's also a pity you guys didn't test with the AMD servers to see how they scaled. But I guess the article is meant to pimp Supermicro and not point out how deficient the Intel system design is when going from 4-cores to 8
JohanAnandtech - Tuesday, May 29, 2007 - link
I would ask you to read my comments again. Web server performance cannot be measured by one single metric unless you can keep response time exactly the same; in that case you could measure throughput. However, in the real world response time is never the same, and our test simulates real users. The reason for this "superscaling" of response times is that the slower configurations have to work through a backlog. Like it or not, that is what you see on a web server.
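One way to see how a backlog produces "superscaling" response times is a toy queueing model. The sketch below uses a simple M/M/1 queue (an illustrative assumption, not the review's actual workload or numbers), where mean response time is 1/(mu - lam); as the arrival rate approaches service capacity, response time explodes, so a server only 25% slower can post a far more than 25% worse response time:

```python
# Toy M/M/1 queue: mean response time T = 1 / (mu - lam), where mu is the
# service rate and lam the arrival rate (requests/s). Values are purely
# illustrative, not taken from the review's benchmark data.
def mm1_response_time(mu: float, lam: float) -> float:
    assert lam < mu, "queue must be stable (arrival rate below service rate)"
    return 1.0 / (mu - lam)

fast = mm1_response_time(mu=2.33, lam=1.8)  # faster server, lightly backlogged
slow = mm1_response_time(mu=1.86, lam=1.8)  # ~25% slower, heavily backlogged
print(round(slow / fast, 1))  # -> 8.8: far beyond the 1.25x service-rate gap
```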
We have done that already here for a number of workloads:
http://www.anandtech.com/cpuchipsets/intel/showdoc...
This article was about introducing our new benches, and investigating the possibilities of this new supermicro server. Not every article can be an AMD vs Intel article.
And I am sure that 99.9% of the people who actually buy a Supermicro Twin after reading this review will be very pleased with it, as it is an excellent server for its INTENDED market. So there is nothing wrong with giving it positive comments as long as I show the limitations.
TA152H - Tuesday, May 29, 2007 - link
Johan, I think it's even better that you didn't get into the AMD/Intel nonsense, because it tends to take focus away from important companies like Supermicro. A lot of people aren't even aware of this company, and it's an extremely important company that makes extraordinary products. Their quality is unmatched, and although they are more expensive, it is excellent to have the option of buying a top quality piece. It's almost laughable, and a bit sad, when people call Asus top quality, or a premium brand. So, if nothing else, you brought an often ignored company into people's minds. Sadly, on a site like this where performance is what is generally measured, if you guys reviewed the motherboards, they would appear to be mediocre products at best. So, your type of review helps put things in their proper perspective: they are a very high quality, reliable, innovative company that is often overlooked, but has a very important role in the industry.
Now, having said that (you didn't think I could be exclusively complimentary, did you?), when are you guys going to evaluate Eizo monitors??? I mean, how often can we read articles on junk from Dell and Samsung, et al., wondering what the truly best monitors are like? Most people will buy mid-range to low-end (heck, I still buy Samsung monitors and Epox motherboards sometimes because of price), but I also think most people are curious about how the best performs anyway. But, to give you credit where it's due, it was nice seeing Supermicro finally get some attention.
DeepThought86 - Monday, May 28, 2007 - link
Also, looking at your second benchmark, I'm baffled that you didn't include a comparison of 1x 5340 vs 2x 5340 or 1x 5320 vs 2x 5320 so we could see scaling. You just have a comparison of Dual vs 2N, where (duh!) the results are similar. Sure, there's 1x 5160 vs 2x 5160, but since the number of cores is half, we can't see if memory performance is a bottleneck. Frankly, if Intel had given you instructions on how to explicitly avoid showing FSB limitations in server applications, they couldn't have done a better job.
Oh wait, looks like 2 Intel staffers helped on the project! BIG SURPRISE!
yacoub - Monday, May 28, 2007 - link
http://images.anandtech.com/reviews/it/2007/superm...

Looks like the top DIMM is not fully seated? :D
MrSpadge - Monday, May 28, 2007 - link
Nice one... not everyone would catch such a fault :)

MrS