
October 2012









Toll-free: 866.802.8222

International: 913.643.0300


Join us at SC12!


SC12 in Salt Lake City is just around the corner, and if you're planning to go, we'd love for you to stop by our booth to say hi! We will be located in Booth #2401, November 12-15.


If you need more information about SC12, please email us or visit sc12.supercomputing.org.


Looking forward to seeing you there!

New features of NVIDIA's Tesla K20 GPU promise fast, efficient performance


NVIDIA is gearing up for the release of its new Tesla K20 GPU, which should start shipping at the end of this year. Unlike its currently available K10, which is designed for single-precision computing, the K20 is optimized for parallel computing with two new features: Hyper-Q and Dynamic Parallelism.


The Hyper-Q feature allows the GPU to tackle up to 32 message passing interface (MPI) processes simultaneously, whereas earlier GPU models could handle only one MPI process at a time. In recent tests, NVIDIA engineers saw a 2.5x speedup with Hyper-Q while running molecular simulation code.


Without this new feature, MPI processing wasn't much faster than with CPUs alone, and the GPU was vastly underutilized. With Hyper-Q turned on, the GPU works at a much higher capacity.
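The kind of workload Hyper-Q helps is many small, independent kernels arriving from separate MPI ranks or CUDA streams: on earlier GPUs they funnel through a single hardware work queue and falsely serialize, while Hyper-Q's 32 queues let them actually overlap. Below is a minimal sketch of that pattern using streams from one process; the kernel and sizes are hypothetical, chosen so that no single launch can fill the GPU on its own.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// A deliberately small kernel: one launch alone cannot fill the GPU,
// so concurrency *between* launches determines overall throughput.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main(void)
{
    const int nStreams = 32;   // Hyper-Q provides 32 hardware work queues
    const int n = 1 << 16;
    cudaStream_t streams[nStreams];
    float *buf[nStreams];

    for (int s = 0; s < nStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buf[s], n * sizeof(float));
    }

    // Issue independent work on each stream. With Hyper-Q these kernels
    // can execute concurrently; on earlier GPUs they would serialize
    // through a single work queue even though they are independent.
    for (int s = 0; s < nStreams; ++s)
        scale<<<(n + 255) / 256, 256, 0, streams[s]>>>(buf[s], n, 2.0f);

    cudaDeviceSynchronize();

    for (int s = 0; s < nStreams; ++s) {
        cudaStreamDestroy(streams[s]);
        cudaFree(buf[s]);
    }
    printf("issued work on %d streams\n", nStreams);
    return 0;
}
```

The same effect applies when the 32 sources of work are separate MPI processes sharing one GPU rather than streams within one process; no code change is required to benefit.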


The second new feature available in the K20, Dynamic Parallelism, allows the GPU to distribute work among its cores rather than sending requests back to the CPU each time subsequent calculations need to be made. The GPU launches new threads as needed on its own, thus freeing the CPUs to perform other work.


Because data no longer needs to be shuttled continuously between the GPU and CPU, NVIDIA's testing shows performance roughly doubling. CUDA code can also shrink to about half its size, since the constant GPU-CPU communication is reduced and no longer has to be managed as tightly.
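To make the idea concrete, here is a minimal sketch of a device-side launch, the mechanism behind Dynamic Parallelism: a parent kernel launches a child kernel itself, sized at run time, with no round trip to the CPU. The kernel names and sizes are hypothetical; Dynamic Parallelism requires CUDA 5.0 and a compute capability 3.5 device such as the K20.

```cuda
#include <cuda_runtime.h>

__global__ void childKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

// With Dynamic Parallelism, a kernel can launch further kernels itself,
// so the decision of how much work to spawn never leaves the GPU.
__global__ void parentKernel(float *data, int n)
{
    if (blockIdx.x == 0 && threadIdx.x == 0) {
        // Launch a child grid sized at run time, entirely on the device.
        childKernel<<<(n + 255) / 256, 256>>>(data, n);
    }
}

int main(void)
{
    const int n = 1024;
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    parentKernel<<<1, 32>>>(d, n);
    cudaDeviceSynchronize();   // host waits for parent and child grids

    cudaFree(d);
    return 0;
}
```

Compiling device-side launches requires relocatable device code and the device runtime library, e.g. `nvcc -arch=sm_35 -rdc=true -lcudadevrt`.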


For more information on Hyper-Q and Dynamic Parallelism, along with performance graphs and more detailed examples, please see NVIDIA's blog posts on these subjects.
If you have any questions about this upcoming product, please contact us.

Conference roundup:

Oklahoma Supercomputing Symposium


On October 2-3, we attended the 11th Oklahoma Supercomputing Symposium in Norman, OK, as a silver-level sponsor of the event. The Symposium, hosted by the University of Oklahoma and Dr. Henry Neeman, brings researchers, scholars, and students from across the region up to date on what is happening not only at OU's supercomputing center but in supercomputing in general.


From talks on preparing MRI grant requests to sessions on campus champions and cluster management, we have always found this event to be an integral part of developing relationships with the very people who use our equipment.


The show kicked off with a poster session and an informal setting for everyone to get a chance to catch up with each other. Then bright and early the next morning, the Symposium began in earnest. Some of this year's topics included petascale computing; parallel thinking; an update from Intel's Senior Director of HPC, Dr. Stephen Wheat; XSEDE; and "Cowboy," Oklahoma State's new 48 TFLOPS cluster, which they purchased from Advanced Clustering earlier this year.


This was our ninth year attending the Symposium, and we are already looking forward to next year's! For more information on this event, please visit symposium.oscer.ou.edu.