Expensive Quad Sockets vs. Ubiquitous Dual Sockets
by Johan De Gelas on October 6, 2009 1:00 AM EST - Posted in IT Computing
vApus Mark I: Performance-Critical applications virtualized
You might remember from our previous article that vApus Mark I, our in-house developed virtualization benchmark, is designed to measure the performance of "heavy", performance-critical applications. Virtualization vendors actively promote virtualizing these OLTP databases and heavy websites too, so that the virtualization software can manage them dynamically. In other words, if you want high availability, load balancing, and low power (by shutting down unused servers), everything should be virtualized.
That is where vApus Mark I comes in: one OLAP database, one OLTP database, and two heavy websites are combined in one tile. These are the kind of demanding applications that a year ago still received their own dedicated, natively running machine. vApus Mark I shows what happens when you virtualize them. If you want to fully understand our benchmark methodology, vApus Mark I has been described in great detail here. We enabled large pages, as this is generally considered a best practice with AMD's RVI and Intel's EPT.
vApus Mark I uses four VMs with four server applications:
- An OLAP VM: an SQL Server 2008 x64 database running on Windows 2008 64-bit, stress tested by our in-house developed vApus test.
- Two web VMs: heavy duty MCS eFMS portals running PHP on IIS on Windows 2003 R2, stress tested by our in-house developed vApus test.
- An OLTP VM: a database based on Dominic Giles' Oracle 10g "Calling Circle" benchmark.
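As a rough illustration, one tile in the 32 vCPU configuration can be described as a simple data structure. This is our own sketch; the dict layout and VM labels are illustrative, not taken from the benchmark's actual configuration files.

```python
# Illustrative sketch of one vApus Mark I tile in the 32 vCPU test
# (4 VMs x 4 vCPUs); the layout is our own, not the benchmark's.
TILE = [
    {"vm": "OLAP",  "app": "SQL Server 2008 x64",       "vcpus": 4},
    {"vm": "Web 1", "app": "MCS eFMS (PHP on IIS)",     "vcpus": 4},
    {"vm": "Web 2", "app": "MCS eFMS (PHP on IIS)",     "vcpus": 4},
    {"vm": "OLTP",  "app": "Oracle 10g Calling Circle", "vcpus": 4},
]

def vcpus_per_tile(tile):
    # Sum the vCPUs assigned to each VM in the tile.
    return sum(vm["vcpus"] for vm in tile)

print(vcpus_per_tile(TILE))  # 16 per tile; two tiles give the 32 vCPU test
```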
The beauty is that vApus (stress testing software developed by the Sizing Servers Lab) stress tests the VMs with actions performed by real people (as recorded in logs), not with some synthetic benchmarking algorithm.
To make things more interesting, we tested the quad Opteron 8435 platform with HT Assist both enabled and disabled. HT Assist (described here in detail) reserves 1MB of the L3 cache, reducing the usable L3 cache to 5MB. That 1MB is used as a very fast directory which eliminates a lot of snoop traffic. Eliminating snoop traffic reduces the "bandwidth pressure" on the CPU interconnects (hence the name HyperTransport Assist), but more importantly it reduces the latency of a cache request.
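A toy model shows why a small directory pays off. Without HT Assist, every L3 miss broadcasts probes to all other sockets; with the directory, only misses to lines actually cached remotely trigger probes. The 10% "remotely cached" fraction below is purely an assumption for illustration; the real fraction is workload dependent.

```python
# Toy model of snoop traffic on a four-socket system. Without HT Assist,
# each L3 miss broadcasts probes to every other socket; with the 1MB
# directory, only misses to lines actually cached remotely send probes.
# The 10% remote fraction is an illustrative assumption, not a measurement.
SOCKETS = 4

def probes_without_htassist(misses):
    # Broadcast snooping: probe all other sockets on every miss.
    return misses * (SOCKETS - 1)

def probes_with_htassist(misses, remote_fraction=0.10):
    # Directory lookup answers most misses locally; only the assumed
    # remotely-cached fraction generates probe traffic.
    return int(misses * remote_fraction * (SOCKETS - 1))

m = 1_000_000
print(probes_without_htassist(m), probes_with_htassist(m))  # 3000000 300000
```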
Thanks to HT Assist, the 24 Opteron cores communicate more efficiently and perform about 9% faster. That is not huge, but it widens the gap with the dual Xeon somewhat. Meanwhile, the dual Xeon X5570 keeps up with the much more expensive quad socket Intel server: eight cores are just as fast as 24.
Two tiles, four VMs per tile, and four vCPUs per VM: a total of 32 vCPUs were active in the previous test. Thirty-two vCPUs are hard to schedule on hex-core CPUs, especially with only 24 physical cores in total. So let us see what happens if we reduce the total to 24 vCPUs.
8 VMs, 2 tiles of vApus Mark I, 24 vCPUs
We reduced the number of vCPUs on the web portal VMs from 4 to 2. That means that we have:
- Two times 4 vCPUs for the OLAP test
- Two times 4 vCPUs for the OLTP test
- Two times 2 vCPUs for the web test
That makes a total of 24 vCPUs. The 32 vCPU test is somewhat biased towards the quad-core CPUs such as the Xeon X5570 while the test below favors the hex-cores.
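The arithmetic behind both configurations is a trivial check, sketched here: each tile holds one OLAP VM, one OLTP VM, and two web VMs, and we always run two tiles.

```python
# Sanity check of the vCPU totals for both test configurations: each tile
# has one OLAP, one OLTP and two web VMs, and two tiles run concurrently.
def total_vcpus(tiles, olap_vcpus, oltp_vcpus, web_vcpus):
    return tiles * (olap_vcpus + oltp_vcpus + 2 * web_vcpus)

print(total_vcpus(2, 4, 4, 4))  # 32 vCPU test
print(total_vcpus(2, 4, 4, 2))  # 24 vCPU test (web VMs cut to 2 vCPUs)
```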
The "Dunnington" platform beats the 16-thread, eight-core Nehalem server, but it is nothing to write home about: the 24-core machine outperforms Intel's latest dual socket offering by only 6%. The advantage of the Opteron 8435 over the Xeon X7460 shrinks from 28% to 21%, but that is still a tangible performance advantage. Our understanding of virtualization performance is growing. Take a look at the table below.
Virtualization Testing Results

| Server System Comparison | vApus Mark I - 24 vCPUs | vApus Mark I - 32 vCPUs | VMmark |
|---|---|---|---|
| Quad Xeon X7460 vs. Dual Xeon X5570 2.93 | 6% | -2% | -15% |
| Quad Opteron 8435 vs. Dual Xeon X5570 2.93 | 29% | 26% | 21% |
| Quad Opteron 8435 vs. Quad Xeon X7460 | 21% | 28% | 42% |
| Dual Xeon X5570 2.93 vs. Dual Opteron 2435 | 11% | 30% | 54% |
Notice how the VMmark benchmark absolutely prefers the new "Nehalem" platform: the dual Xeon X5570 is 54% faster than the dual Opteron 2435 in VMmark, while its lead is only 11-30% in vApus Mark I. The quad Opteron 8435 is up to 30% faster than Intel's speed demon in vApus Mark I, while VMmark indicates only a 21% lead. But notice that vApus Mark I is also friendlier towards the Intel hex-core: VMmark tells us that eight Nehalem cores are 15% faster than 24 Dunnington cores, while vApus Mark I tells us that the quad X7460 is about as fast as the dual Xeon X5570. So why is VMmark so much happier with the Xeon X5570 server? The answer might be found in the table below.
One VMmark tile generates about 21,000 interrupts per second, 22MB/s of storage I/O, and 55Mbit/s of network traffic. We have profiled vApus Mark I in depth before. The table below compares both benchmarks from a hypervisor point of view.
Virtualization Benchmarks Profiling

| | vApus Mark I (Dual Xeon X5570) | VMmark (Dual Xeon X5570) |
|---|---|---|
| Total interrupts per second | 2 x 19K = 38K/s | 17 x 21K = 357K/s |
| Storage I/O | 2 x 4.1MB/s = 8.2MB/s | 17 x 22MB/s = 374MB/s |
| Network | 2 x 50Mbit/s = 100Mbit/s | 17 x 55Mbit/s = 935Mbit/s |
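The aggregates in the table are simply the per-tile figures multiplied by the tile count (two vApus Mark I tiles versus 17 VMmark tiles); a quick sketch reproduces them and the resulting ratios:

```python
# Scaling the per-tile figures from the table by the number of tiles
# (2 vApus Mark I tiles vs. 17 VMmark tiles on this server).
tiles_vapus, tiles_vmmark = 2, 17

vapus = {"interrupts_k": tiles_vapus * 19,    # 38 K/s
         "storage_mb":   tiles_vapus * 4.1,   # 8.2 MB/s
         "net_mbit":     tiles_vapus * 50}    # 100 Mbit/s
vmmark = {"interrupts_k": tiles_vmmark * 21,  # 357 K/s
          "storage_mb":   tiles_vmmark * 22,  # 374 MB/s
          "net_mbit":     tiles_vmmark * 55}  # 935 Mbit/s

print(round(vmmark["interrupts_k"] / vapus["interrupts_k"], 1))  # ~9.4x interrupts
print(round(vmmark["storage_mb"] / vapus["storage_mb"], 1))      # ~45.6x storage I/O
```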
VMmark places a lot more stress on the hypervisor and the way it handles I/O: it generates roughly 10 times more interrupts and almost 50 times more storage I/O. We know from our profiling that vApus Mark I does a lot of page management, which is a logical result of the application choice (databases that open and close connections) and the amount of memory per VM.
The result is that VMmark, with its huge number of VMs per server (up to 102 VMs!), places a lot of stress on the I/O systems. The Intel Xeon X5570's crushing VMmark results cannot be explained by the processor architecture alone. One possible explanation is that the VMDq implementation (multiple queues and offloading of the virtual switch to the hardware) of the Intel NICs is better than that of the Broadcom NICs typically found in AMD-based servers.
32 Comments
rbbot - Tuesday, October 6, 2009 - link
Surely the high price of 8GB DIMMs isn't going to last very long, especially with Samsung about to launch 16GB parts soon.

Calin - Wednesday, October 7, 2009 - link
8GB DIMMs have two markets: one is upgrading from 4GB or 2GB parts in older servers, the other is fitting more memory in cheaper servers. As the demand can be high, it all depends on the supply - and if the supply is low, prices are high. So don't count on the price of 8GB DIMMs decreasing soon.
Candide08 - Tuesday, October 6, 2009 - link
One performance factor that has not improved much over the years is the diminishing percentage of performance gained from additional cores. A second core adds about 60% to system performance.
The third, fourth, fifth, and sixth cores each add a smaller (decreasing) percentage of real performance, due to multi-core overhead.
A dual socket dual core system (4 processors) seems like the sweet spot to our organization.
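The diminishing returns described in this comment line up with Amdahl's law if we assume a parallel fraction of 0.75 (an assumption chosen purely so that the second core adds exactly 60%; real workloads vary widely):

```python
# Amdahl's law: speedup with n cores given a parallel fraction p.
# p = 0.75 is an assumption that reproduces the "+60% for a second core"
# figure from the comment above, not a measured value.
def speedup(cores, p=0.75):
    return 1 / ((1 - p) + p / cores)

for n in (1, 2, 4, 6):
    print(n, round(speedup(n), 2))  # each extra core contributes less
```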
Calin - Wednesday, October 7, 2009 - link
If your load fits into four processors, then this is great. However, for some, this level of performance is not enough and more is needed - even if that means paying four times as much for twice the performance.

hifiaudio2 - Tuesday, October 6, 2009 - link
FYI, the R710 can have up to 192GB of RAM: 12 x 16GB.
not cheap :) but possible
JohanAnandtech - Tuesday, October 6, 2009 - link
At $300 per GB, or the price of 2 x 4GB DIMMs, I don't think 16GB DIMMs are going to be a big success right now. :-)

wifiwolf - Wednesday, October 7, 2009 - link
for at least 5 years you mean

mamisano - Tuesday, October 6, 2009 - link
Great article, just have a question about the power supplies. Why do the quad-socket servers need a 1200W PSU if the highest measured load was 512W? I know you would like to have some headroom, but it looks to me like a more efficient 750-900W PSU might have provided better power consumption results... or am I totally wrong? :)

JarredWalton - Tuesday, October 6, 2009 - link
Maximum efficiency for most PSUs is obtained at a load of around 40-60% (give or take), so if you have a server running mostly under load you would want a PSU rated at roughly twice the load power. (Plus a bit of headroom, of course.)

JohanAnandtech - Wednesday, October 7, 2009 - link
Actually, the best server PSUs are now at maximum efficiency (+/- 3%) between 30 and 95% load. For example:
http://www.supermicro.com/products/powersupply/80P...
And the reason why our quads are using 1000W PSUs (not 1200W) is indeed that you need some headroom. We do not test the server with all DIMM slots filled, and you also need to take into account that you need a lot more power when starting up.
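The sizing rule of thumb from this thread is easy to check as a sketch: assuming peak efficiency near 50% load, the 512W peak measured earlier points to a roughly 1000W unit.

```python
# Rough PSU sizing per the rule of thumb discussed above: pick a rating
# that puts the typical measured load near the efficiency sweet spot.
# The 512W figure is the peak load mentioned earlier in this thread;
# the 50% target fraction is an assumption.
def psu_rating_for(load_watts, target_load_fraction=0.5):
    return load_watts / target_load_fraction

print(psu_rating_for(512))  # 1024.0 -> a ~1000W unit is a sensible choice
```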