Scientific Computing Core Facility


Overview

The Scientific Computing Core Facility (SCC) is a specialized service that supports the computational needs of researchers. It provides access to advanced computing resources and specialized software packages, along with expert technical support and training on how to use these resources effectively. The goal of the SCC is to help researchers solve complex computational problems and accelerate their research by providing access to cutting-edge technology and expertise. Services include high-performance computing (HPC), data management, and virtualization, among others.

Our service is designed to be highly scalable and flexible, allowing users to allocate resources as needed for their specific tasks. We provide comprehensive support, including troubleshooting and consultation services. Our mission is to help users get the most out of our computational cluster and to ensure a seamless, productive experience.


HPC

High-Performance Computing (HPC) refers to the use of advanced computing techniques and technologies to solve complex problems that require significant computational power. HPC is a broad term covering both powerful individual computers (supercomputers) and computing clusters. These systems use parallel processing and high-speed networks to solve computational problems that are beyond the capabilities of conventional computers.
HPC is also central to analyzing massive datasets: it enables scientists and researchers to process vast amounts of data and extract valuable insights, patterns, and trends.

HPC plays a crucial role in scientific research:

Genome Sequencing and Analysis: HPC is used to process and analyze massive genomic datasets. Whole-genome sequencing generates vast amounts of data that require powerful computational tools for analysis, variant calling, and comparative genomics.

Phylogenetic Analysis: HPC helps reconstruct evolutionary relationships among species by analyzing large sets of genetic data, enabling researchers to understand evolutionary patterns and biodiversity.

Protein Structure Prediction: HPC is used to simulate and predict protein structures. Understanding protein structures is vital for drug discovery and for understanding disease.

Molecular Dynamics Simulations: HPC facilitates simulations of protein-ligand interactions and protein folding, providing insights into molecular behavior. These simulations are crucial for drug design and for understanding biological processes at the atomic level.

Biological Pathway Modeling: HPC is employed to model complex biological pathways and networks. These models help explain how genes, proteins, and metabolites interact, providing insights into cellular processes and diseases.

Large-Scale Biological Simulations: HPC enables simulations of biological systems at large scale, allowing researchers to study interactions within cells, tissues, and even entire organisms.

Drug-Genome Interactions: HPC is used to analyze how individual genetic variations influence responses to drugs. This knowledge is essential for personalized medicine, where treatments are tailored to an individual's genetic makeup.

Drug Discovery: HPC accelerates virtual screening and molecular docking experiments, allowing scientists to explore large chemical libraries and identify potential drug candidates more quickly and accurately.

Genomic Profiling: HPC helps analyze cancer genomes to identify genetic mutations associated with cancer. Understanding the genomic basis of cancer aids the development of targeted therapies.

Drug Resistance Modeling: HPC is used to model how cancers develop drug resistance over time, guiding the search for more effective treatment strategies.

Big Data Analysis: HPC processes vast biological datasets, allowing researchers to mine data for patterns and correlations. This data mining helps in understanding complex biological phenomena and in identifying biomarkers for diseases.

Image Segmentation: HPC enables parallel execution of image segmentation algorithms, so that multiple image slices or frames can be analyzed simultaneously. This parallelism significantly speeds up tasks such as cell or object segmentation (see the sketch after this list).

Machine Learning: HPC supports machine learning algorithms for predictive modeling and pattern recognition. Machine learning techniques are applied to biological data for tasks such as predicting protein functions, disease outcomes, and drug responses, and for image recognition.
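
To make the image-segmentation parallelism concrete, here is a minimal sketch of how independent image slices could be processed in parallel with a SLURM job array (SLURM is the scheduler used on our cluster, described in the Marvin section below). The slice count, file layout, and the segment_slice.py script are hypothetical placeholders, not part of our documented workflow.

#!/bin/bash
#SBATCH --job-name=segment-slices
#SBATCH --array=1-100              # one task per image slice (hypothetical count)
#SBATCH --cpus-per-task=1
#SBATCH --mem=4G
#SBATCH --time=01:00:00
#SBATCH --output=logs/segment_%A_%a.out

# Each array task segments one slice independently, so SLURM can run
# many of them at the same time across the cluster's nodes.
# 'segment_slice.py' and the data paths are placeholders.
python segment_slice.py \
    --input  data/slice_${SLURM_ARRAY_TASK_ID}.tif \
    --output results/mask_${SLURM_ARRAY_TASK_ID}.tif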


Marvin

The SCC operates a comprehensive computational cluster consisting of 56 compute nodes plus 2 GPU nodes, providing a total of 1,136 cores and 10 TB of memory. Storage is provided by a 700 TB IBM Spectrum Scale filesystem, which offers ample space for large datasets and files. We use the SLURM job scheduler and the EasyBuild software management system, which allow users to easily manage and run their jobs and applications on the cluster.
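
For orientation, a minimal batch script for the cluster might look like the sketch below. It asks SLURM for cores, memory, and a time limit, and loads software through the environment modules that EasyBuild provides; the module name/version and the analysis script are illustrative assumptions, not part of our documented stack.

#!/bin/bash
#SBATCH --job-name=my-analysis
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8          # request 8 cores on one node
#SBATCH --mem=16G                  # request 16 GB of memory
#SBATCH --time=04:00:00            # wall-clock limit of 4 hours

# EasyBuild-managed software is exposed as environment modules;
# this module name and version are a hypothetical example.
module load Python/3.11.3-GCCcore-12.3.0

# placeholder for the actual analysis
python my_analysis.py

Such a script is submitted with sbatch and can be monitored with squeue, both standard SLURM commands.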


Virtualization

In addition, we offer a Proxmox Virtualization Cluster that can host web services, containers, and other applications. This cluster is highly customizable and lets users create and deploy their own virtual machines and applications.
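
As a rough sketch, a virtual machine can be created from the command line of a Proxmox node with the qm tool; the VM ID, name, storage names, and ISO image below are hypothetical, and in practice most users work through the Proxmox web interface.

# create a VM with 2 cores, 4 GB of RAM, a 32 GB disk on the
# 'local-lvm' storage, and a NIC bridged to vmbr0 (names assumed)
qm create 100 --name demo-vm --cores 2 --memory 4096 \
    --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 \
    --ide2 local:iso/debian-12.iso,media=cdrom

# boot the new VM
qm start 100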


Training Courses

We also provide computational training courses to help users learn how to use the cluster and its various components effectively. These courses cover a range of topics, including programming, data analysis, and machine learning.


Scientific Computing Core Facility

PRBB building (Mar campus)
Doctor Aiguader, 88
08003 Barcelona

Tel. +34 93 316 08 06

[email protected]