Storage on the HPC cluster is managed by IBM Spectrum Scale 5.1.0, a high-performance parallel file system designed to handle large volumes of data with high throughput and low latency. The storage is split into two filesystems: projects and robbyfs.
The robbyfs filesystem contains two partitions, scratch (297 TB) and homes (8 TB); the projects filesystem provides 517 TB. The scratch partition is used for temporary data storage during computation, while homes holds user home directories and related files.
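As a quick way to inspect these capacities from a login node, the standard library can report usage per mount point. The paths below are assumptions about where the partitions are mounted, not confirmed by this documentation; on Spectrum Scale itself, commands such as mmdf and mmlsquota give more detailed, filesystem-aware numbers.

```python
import shutil

# Mount points are assumptions -- adjust to the paths used on your cluster.
for mount in ("/scratch", "/homes", "/projects"):
    try:
        usage = shutil.disk_usage(mount)
        print(f"{mount}: {usage.free / 1e12:.1f} TB free "
              f"of {usage.total / 1e12:.1f} TB")
    except FileNotFoundError:
        print(f"{mount}: not mounted on this machine")
```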
Data is stored on SATA disks, which are slower but offer high capacity at a low cost per terabyte. Metadata is stored on SSDs, which provide fast, low-latency access; keeping metadata on fast media speeds up operations such as directory listings and file lookups.
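In Spectrum Scale, this split is typically expressed when the Network Shared Disks (NSDs) are created: SSDs are marked metadataOnly in the system pool (which holds metadata), and SATA disks are marked dataOnly in a data pool. A hypothetical mmcrnsd stanza file sketching this layout (device names, NSD names, and the server name are placeholders, not this cluster's actual configuration) could look like:

```
# SSD for metadata -- the system pool holds metadata in Spectrum Scale
%nsd: device=/dev/nvme0n1 nsd=ssd_md_01 servers=nsdserver1 usage=metadataOnly failureGroup=1 pool=system

# SATA disk for bulk data
%nsd: device=/dev/sdb nsd=sata_data_01 servers=nsdserver1 usage=dataOnly failureGroup=2 pool=data
```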
Storage and compute nodes are interconnected by an EDR InfiniBand network, a high-speed interconnect designed for HPC applications. EDR provides 100 Gb/s per link, offering low-latency, high-bandwidth communication between the nodes and enabling fast data transfers and parallel processing.
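To give a rough sense of what "high bandwidth" means here: an EDR InfiniBand link runs four lanes at 25.78125 Gb/s each with 64b/66b line encoding, which works out to 100 Gb/s of data per link. A back-of-envelope check (figures from the InfiniBand spec, illustrative only, not a measurement of this cluster):

```python
# EDR InfiniBand 4x link: theoretical data rate
lanes = 4                    # EDR links are 4x
signal_rate_gbps = 25.78125  # per-lane signaling rate
encoding = 64 / 66           # 64b/66b line-encoding efficiency
data_rate_gbps = lanes * signal_rate_gbps * encoding
print(f"EDR 4x data rate: {data_rate_gbps:.1f} Gb/s "
      f"(~{data_rate_gbps / 8:.1f} GB/s)")
# prints: EDR 4x data rate: 100.0 Gb/s (~12.5 GB/s)
```

Real-world throughput is somewhat lower once protocol overheads are accounted for, but the order of magnitude is what matters for parallel I/O to the filesystems described above.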
Overall, the storage system is built for large data volumes, high throughput, and low latency, making it well suited to high-performance computing workloads.