Service Overview
The Super Intelligent Computing Center (SICC) operates multiple computing platforms that provide GPU computing for deep learning and Virtual Data Centers (VDCs) for departmental use. The platforms providing these services are:
- Huawei Cloud Stack (HCS): provides PaaS (Platform as a Service), mainly used to create VDCs and VMs for long-term use. It can allocate resources to multiple users and also provide resources for users to build customized platforms and services. HCS offers more than 5,000 virtual computing cores across 48 physical servers and 16 graphics processing units (GPUs), and is equipped with a large-memory computing server with 10 TB of RAM and 600 TB of storage capacity.
- DGX Cluster: this platform consists of 3 x DGX-H800, 4 x DGX-A100 and 1 x DGX-2, with 210 TB of NetApp SSD storage and 2 x 200 GbE switches. NVIDIA DGX-2 is built around 16 fully interconnected Tesla V100 GPUs and was the first AI supercomputing system to deliver 2 petaFLOPS of performance. NVIDIA DGX A100 is the world's first AI system built on the NVIDIA A100 Tensor Core GPU, integrating 8 x A100 GPUs with 320 GB of GPU memory and delivering 5 petaFLOPS of AI performance. NVIDIA DGX H800 integrates 8 x H800 GPUs with 640 GB of GPU memory and delivers 32 petaFLOPS of AI performance. The AI computing platform adopts a scalable architecture, so that model complexity and scale are not restricted by the limitations of traditional architectures, allowing it to tackle many complex AI challenges. For deep learning workloads, each DGX can reach or even exceed the performance of a traditional large-scale supercomputing center. (A minimal sketch for checking GPU visibility on a node is given after this list.)
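As a quick illustration (not an official SICC tool), the following is a minimal sketch of how a user might check which GPUs are visible from Python on a compute node. It assumes PyTorch is installed in the user's environment; adapt it to whatever software stack you actually use.

```python
# Minimal sketch: enumerate the GPUs visible on the current node.
# Assumes PyTorch is available in your environment (e.g. installed
# via conda or pip); this is an illustration, not an SICC utility.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.0f} GB")
else:
    print("No CUDA-capable GPU is visible on this node.")
```

On a DGX-2 node, for example, such a check should report 16 devices; on a DGX A100 or DGX H800 node, 8.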
Getting started
Access
SSH Client (For off-campus access, please connect to SSL VPN first.)
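Interactive access only requires a standard SSH client. If you need scripted access (e.g. to automate status checks), the following is a minimal sketch using the Python paramiko library; the hostname and username are placeholders, not actual SICC endpoints. Use the address and account details provided after your application is approved.

```python
# Hypothetical sketch of scripted SSH access with paramiko.
# "login.sicc.example" and "your_username" are placeholders; replace
# them with the real head-node address and your own account.
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()  # trust hosts already in ~/.ssh/known_hosts
client.connect("login.sicc.example", username="your_username")

# Run a simple command on the head node and print its output.
_, stdout, _ = client.exec_command("hostname")
print(stdout.read().decode().strip())
client.close()
```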
Apply
- Please fill in the Application Form
User Manual and Q&A
Equipment
HCS (Huawei Cloud Stack)
Node | Quantity | Computing Cores | RAM (GB) | HDD (TB) | GPU |
---|---|---|---|---|---|
Management Node | 3 | 96 | 1344 | — | — |
Network Node | 2 | 64 | 768 | — | — |
Compute and Storage Node | 24 | 960 | 15360 | 500 | — |
GPU Computing Node | 2 | 96 | 768 | — | 16 x NVIDIA® Tesla® V100 |
Login Information: https://console.sicc.um.edu.mo/
DGX-Cluster (GPU Super Computing Service)
Node | Quantity | Computing Cores | RAM (GB) | HDD (TB) | SSD (TB) | GPU |
---|---|---|---|---|---|---|
Head Node | 1 | 32 | 128 | 6 | — | — |
Compute node DGX-2 | 1 | 96 | 1500 | — | 30 | 16 x NVIDIA® Tesla® V100 |
Compute node DGX-A100 | 4 | 1024 | 4000 | — | 60 | 32 x NVIDIA® A100-40GB |
Storage node NetApp | 1 | — | — | — | 210 | — |
All DGX systems and the NetApp storage are connected by Nvidia MSN switches with a maximum speed of 200 GbE.
Login and Job Submission: connect with an SSH client and submit jobs via Slurm.
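As an illustration, below is a minimal sketch of a Slurm batch script written in Python (Slurm reads the `#SBATCH` comment lines that appear before the first executable statement, regardless of the script's interpreter). The partition name and resource values are placeholders; check the cluster's actual partition names and limits before submitting.

```python
#!/usr/bin/env python3
# Minimal Slurm batch script sketch. Submit with:  sbatch job.py
# All #SBATCH values are placeholders; adjust them to the partitions
# and limits actually configured on the DGX cluster.
#SBATCH --job-name=demo
#SBATCH --partition=gpu        # hypothetical partition name
#SBATCH --gres=gpu:1           # request one GPU
#SBATCH --cpus-per-task=4
#SBATCH --time=00:10:00

import socket
import subprocess

# Report where the job landed and which GPUs Slurm assigned to it.
print(f"Running on {socket.gethostname()}")
subprocess.run(["nvidia-smi", "-L"], check=False)
```

A typical workflow is to submit the script with `sbatch job.py` and monitor it with `squeue -u $USER`.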
GPU Cluster: Lico & Cambricon Cluster
Node | Quantity | Computing Cores | RAM (GB) | HDD (TB) | SSD (TB) | GPU |
---|---|---|---|---|---|---|
Head Node | 1 | 40 | 128 | — | — | — |
Login Node | 3 | 120 | 768 | — | — | — |
GPU Computing Node | 12 | 480 | 3072 | — | — | 48 x Cambricon MLU100 Deep Learning Card |
Storage Node | 1 | — | — | 28.8 | 11.2 | — |
GPU Cluster for Job Scheduling (Hosting Service for Professors' Servers)
Node | Quantity | Computing Cores | RAM (GB) | HDD (TB) | SSD (TB) | GPU |
---|---|---|---|---|---|---|
Head Node | 1 | 32 | 128 | 1.2 | — | — |
Login Node | 2 | 64 | 256 | 2.4 | — | — |
GPU Computing Node | 12 | 608 | 3152 | 30 | 19.4 | 50 x GeForce RTX 2080 Ti |
Storage Node | — | — | — | 100 | 11.2 | — |
Contact us
- For the HCS service, please contact your department's technician
- For typical cluster operations, please refer to http://services.sicc.um.edu.mo:8443/explore/repos (campus network only)
If further assistance is required, IOTSC will provide technical support as far as possible.
Acknowledgement in Research Publications
Please acknowledge the support of the Super Intelligent Computing Center (SICC) in your research reports, journal articles, and other publications. This information is very important for us when acquiring funding for new resources. Authors may word the acknowledgement themselves; the recommended acknowledgement for publications is given below:
Acknowledgement |
---|
THIS WORK WAS PERFORMED IN PART AT SICC WHICH IS SUPPORTED BY SKL-IOTSC, UNIVERSITY OF MACAU. |