The purpose of an HPC cluster (HPCC) is to provide high-performance computing resources that an individual computer cannot. It allows users to run computational jobs across the connected computers at the same time (known as parallel computing) to achieve higher processing performance.
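The idea of splitting one job across many processors can be sketched in a few lines. The following is a minimal illustration, not code from the cluster itself: it divides a large sum into chunks and computes them in parallel worker processes, mirroring on one machine what an HPC cluster does across many nodes.

```python
# Minimal parallel-computing sketch: split one CPU-bound task into
# chunks and run them simultaneously in separate worker processes.
from multiprocessing import Pool

def partial_sum(bounds):
    """CPU-bound task: sum the integers in [lo, hi)."""
    lo, hi = bounds
    return sum(range(lo, hi))

if __name__ == "__main__":
    # Four chunks of one large sum, computed in parallel.
    chunks = [(0, 250_000), (250_000, 500_000),
              (500_000, 750_000), (750_000, 1_000_000)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    # The parallel result matches the serial computation.
    assert total == sum(range(1_000_000))
    print(total)
```

On a real cluster the same decomposition is typically expressed with MPI or a batch scheduler rather than a single machine's process pool, but the principle is identical.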
Our third-generation HPCC (Coral) provides more than 2,000 computing cores, 85 servers and 34 graphics processing units (GPUs), together with a large-memory computing server with 3 TB of RAM. Its overall computing performance is 12.5 times that of the previous generation, placing it in a leading position among HPCCs at higher-education institutions in Macau in 2019. Coral offers improved computing capability to the various research teams and supports research areas including biomedicine, analysis of Chinese medicine, precision medicine, medical imaging, materials science, engineering design and simulation analysis.
The Super Intelligent Computing Center (SICC) contains multiple computing platforms that provide PaaS for use by units, as well as GPU computing platforms for deep learning. The platforms provide the following services:
VDC & EC2 service: mainly provides PaaS (Platform as a Service) and is used to create virtual data centers (VDCs) and virtual machines (VMs) for long-term use. It can not only allocate resources to multiple users, but also provide resources for users to build customized platforms and services. With more than 2,000 computing cores, 48 servers and 16 graphics processing units (GPUs), it is equipped with a large-memory computing server with 10 TB of RAM and 500 TB of storage capacity. In contrast to the HPCC, this platform serves units and teams.
GPU supercomputer service: the NVIDIA DGX-2, built around 16 fully interconnected Tesla V100 GPUs, was the first intelligent computing system to deliver two petaFLOPS (two quadrillion floating-point operations per second), giving it deep-learning performance at the level of traditional large-scale supercomputing centers.
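The two-petaFLOPS figure can be sanity-checked with simple arithmetic, assuming NVIDIA's published peak of roughly 125 teraFLOPS of mixed-precision tensor performance per Tesla V100 (actual throughput varies by precision and workload):

```python
# Rough sanity check of the DGX-2's 2-petaFLOPS figure.
# Assumption: ~125 TFLOPS peak mixed-precision tensor performance per V100.
v100_tensor_tflops = 125      # peak tensor TFLOPS per V100 (vendor figure)
num_gpus = 16                 # GPUs in one DGX-2
total_tflops = v100_tensor_tflops * num_gpus
total_petaflops = total_tflops / 1000
print(total_petaflops)        # 2.0
```

Sixteen GPUs at 125 TFLOPS each gives 2,000 TFLOPS, i.e. two petaFLOPS.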
GPU computing platform:
– LiCO GPU cluster with 12 GPU servers and 48 GPUs, equipped with 3 storage servers and 2 network servers, allowing multiple users to share GPU resources simultaneously;
– Scheduling GPU cluster with 3 login nodes and 12 GPU servers hosting 70 GPUs, providing computing capability to various research teams.