ClusterVision Completes Top500 Class HPC Cluster at the University of Paderborn

The 200 Tflops cluster system, which is located at the Paderborn Center for Parallel Computing (PC²), contains 614 compute nodes with 10,000 cores, and incorporates the latest GPU and many-core acceleration technologies available.
By: ClusterVision BV
 
 
The Paderborn HPC Cluster
March 17, 2013 - PRLog -- Paderborn, Germany, 18 March 2013 — ClusterVision, Europe's dedicated specialist in high performance computing solutions, has announced the successful completion of a new high performance computing (HPC) cluster system at the University of Paderborn. The 10,000-core, 200 Tflops system is part of a 4 million Euro investment in HPC systems and infrastructure by the University of Paderborn and its partners: Bielefeld University, Hamm-Lippstadt University of Applied Sciences, East Westphalia-Lippe University of Applied Sciences, and Bielefeld University of Applied Sciences. It is the most recent HPC cluster at Paderborn and is anticipated to claim a high-ranking position in the next publication of the Top500 world supercomputer listings.

The new HPC cluster is located at the Paderborn Center for Parallel Computing (known as PC²). PC² is an integral interdisciplinary institute of the University of Paderborn, specialising in distributed and parallel high performance computing. Staff and students at PC² work alongside collaboration partners to investigate a range of high performance computing topics in research, development and emerging practical applications.

“This system is a powerful compute resource for all researchers in the region of East Westphalia and Lippe, and for our partners in Germany and Europe,” said Prof. Dr. Holger Karl, head of the PC² board.

Since its foundation in 1991, the PC² facility has housed several generations of the University's HPC technology, including several Top500-ranked systems. PC² currently operates five HPC systems, many of which are used to develop methods and principles for the future construction and efficient use of distributed and parallel computer systems. The new HPC cluster system offers approximately 10,000 Intel Xeon processor cores arranged in 614 compute nodes, giving an anticipated performance in the range of 200 Tflops. A combination of 32 NVIDIA K20 GPUs and 8 Intel Xeon Phi coprocessors provides a further 40 Tflops, giving the system a high top-half ranking by the standards of the current Top500 listing.

In addition to running HPC applications, PC² works with the European Commission and other partners to conduct fundamental research and development in a number of HPC-related areas. These include performance acceleration, fault-tolerant middleware and system software, and network virtualisation.

Performance acceleration using custom computing machines and many-core architectures is one of the forefront research areas at PC². Custom computing machines use massively parallel, programmable hardware, for example field-programmable gate arrays (FPGAs), to build processing units that are highly tailored to specific applications. FPGA-based systems have been shown to accelerate computationally intensive applications by orders of magnitude over traditional architectures. Alongside FPGAs, researchers at PC² are also exploring a number of other acceleration technologies, including many-core architectures, GP-GPU coprocessors and floating point arrays. In designing the new Paderborn system, ClusterVision therefore worked closely with staff at PC² to incorporate some of the very latest acceleration technologies available.

As primary contractor for the design and build of the Paderborn cluster, ClusterVision brought together the technology and expertise of a large number of HPC partners, including ASUS, Intel, Dell, Supermicro, NVIDIA, Knürr Emerson, Bright Computing, and the Fraunhofer Institute.

The Paderborn system configuration comprises 614 compute nodes. Of these, 572 are ASUS E7 rack-mount compute servers, each with two 8-core Intel Xeon E5-2670 processors (16 cores per node) running at 2.6 GHz. The 552 smaller compute nodes have 64 GByte of main memory per node, whilst 20 larger nodes are enhanced with 256 GByte of main memory.
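The quoted figures can be roughly sanity-checked with a back-of-the-envelope calculation. A short sketch, assuming 8 double-precision flops per cycle per core for the Sandy Bridge-generation Xeon E5-2670 (an assumption, not a figure from the release):

```python
# Back-of-the-envelope theoretical peak for the 572-node CPU partition.
# Assumption: 8 DP flops/cycle/core (AVX: 4-wide add + 4-wide multiply).
NODES = 572
CORES_PER_NODE = 16          # 2 x 8-core Xeon E5-2670 per node
CLOCK_HZ = 2.6e9             # 2.6 GHz base clock
FLOPS_PER_CYCLE = 8          # assumed DP flops per core per cycle

peak_tflops = NODES * CORES_PER_NODE * CLOCK_HZ * FLOPS_PER_CYCLE / 1e12
print(f"Theoretical CPU peak: {peak_tflops:.1f} Tflops")  # → 190.4 Tflops
```

That ~190 Tflops CPU figure, before the accelerator contribution, is consistent with the release's "in the range of 200 Tflops" for the system as a whole.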

Compute power is significantly enhanced by 40 additional accelerator nodes, arranged in two different configurations. Type-1 nodes pair the Intel Xeon E5-2670 processors with a total of 32 NVIDIA Tesla K20 GPUs. Type-2 nodes take the same arrangement but use 8 of Intel's own Xeon Phi coprocessors to achieve the required acceleration. Running alongside other PC² systems, this novel arrangement allows researchers to run a range of applications optimally, and specifically to investigate the relative performance of the leading HPC acceleration technologies available today.

The system also has 2 SMP nodes, using Dell PowerEdge R820 servers, each with four 8-core Xeon E5-4650 processors, and 6 front-end, management and administration nodes arranged in Supermicro Superserver 825TQ-R740WB and 1027GR-TRF chassis. Dell also provides the storage components for the 45 TByte capacity in its PowerVault MD3200 series units, along with much of the cabling and interconnect switchgear. The interconnect itself is a fast QDR InfiniBand (40 Gbit/s) system from Mellanox Technologies. All of the system's components are housed in 14 42U Emerson/Knürr server racks, 12 of which incorporate Knürr's rear-door chilled-water cooling technology.

The software components of the cluster include Bright Cluster Manager from Bright Computing, which is used to provide the provisioning and administrative management of the system, and FraunhoferFS (FhGFS) the parallel file-system from the Fraunhofer Institute for Technological and Industrial Mathematics (ITWM).

The size and complexity of the new Paderborn system required a high level of expertise in engineering, assembly and on-site integration. ClusterVision began off-site engineering and assembly of components at its headquarters in Amsterdam in November 2012, followed by an intensive four-person on-site build at the end of the year. The build and software installation process culminated in a programme of system provisioning, burn-in testing and benchmark tuning, with acceptance and handover successfully completed in early February 2013. Post-delivery services include user support and maintenance for the Bright Cluster Manager and Fraunhofer FhGFS software installations, and multi-year warranty and repair provisions for the critical hardware components.

“It is always exciting for our company to work on Top500 class systems like the new HPC cluster at Paderborn. Large scale, complex systems like this understandably represent a showcase of possibility to the HPC community, both in academia and commercial enterprise, and enable our team to draw upon and demonstrate all of their experience in system design and their expertise in build and configuration,” said Christopher Huggins, Commercial Director at ClusterVision.

“The engineering sciences must take full advantage of exponentially increasing computing power. The new HPC cluster in Paderborn offers excellent opportunities for research and development,” said Prof. Dr.-Ing. Jadran Vrabec, member of the PC² board.