Matlab Tutorial

This simple tutorial shows the steps to run a MATLAB file (.m) on the MARVIN cluster.

Suppose we declare an array a with five values:

a = [1 2 3 4 5]

 

Then, we would like to add one to each value of the array we just declared and save the new values in an array called c:

c = a + 1
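MATLAB applies the addition element-wise, so c becomes [2 3 4 5 6]. The same arithmetic can be sketched in bash, purely as an illustrative analogue of what the MATLAB script computes (not part of the cluster workflow):

```shell
# Illustrative bash analogue of the MATLAB example: add 1 to each element
a=(1 2 3 4 5)
c=()
for x in "${a[@]}"; do
  c+=( $((x + 1)) )
done
echo "${c[@]}"   # prints: 2 3 4 5 6
```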

 

Once you have saved these lines of code in a file (e.g. mat_example.m), you have to create an sbatch file (.sh) so you can run it on MARVIN.

#!/bin/bash

 

#SBATCH --job-name=mat_example

#SBATCH -o slurm.%j.out

#SBATCH -e slurm.%j.err

#SBATCH --cpus-per-task=1

#SBATCH --mem-per-cpu=4G

 

echo "Running mat_example.m, on $SLURM_JOB_NUM_NODES node"

 

module load MATLAB/2018a

matlab -nodisplay < [name_of_your_matlab_file].m

The MATLAB modules available on the cluster are:

  • MATLAB/2016a
  • MATLAB/2017b
  • MATLAB/2018a

You can use whichever module best fits your needs. In addition, MATLAB licenses are a managed resource on the cluster, and the Slurm queue system lets you request them:

  • To show the current status of licenses:

scontrol show lic

  • If you want to request 2 licenses as a resource:

sbatch -L matlab:2 <job-script>.sh

or inside the bash script:

#SBATCH -L matlab:2

Next, you will need to upload these files to your cluster account. You can do this with an application such as FileZilla (for macOS and Windows), or from the terminal if you are working with Linux/Ubuntu:

$ scp ~/(path of the file in your computer) [email protected]:/homes/users/user_name/(folder where you want your file to be)

 

Now, log in to the cluster and go to the folder where you saved the MATLAB and sbatch files. You can then either:

  • Run the code in an interactive session. Type in the terminal:

$ interactive

Then, load the modules that you need:

$ module load MATLAB/2018a

And run your code:

$ matlab -nodisplay -nojvm < [name_of_your_matlab_file].m

Note that this does not need the sbatch script, but we still recommend using it, as shown in the next alternative.

  • Submit the sbatch job. Once you are in the folder where you saved the files, submit the sbatch script; MARVIN will queue your job and run the code as soon as resources are available:

$ sbatch -L matlab:1 [name_of_your_sbatch_file].sh

You do not need to launch MATLAB yourself; the script does it for you. Now, type:

$ ll

You will see that two new files have been generated: slurm.(some number).out and slurm.(the same number).err. To see your code's output, type:

$ more slurm.(number).out

And if the result is not what you expected, check the .err file (more slurm.(number).err) to see what error occurred.
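Since the job ID changes on every run, a small helper saves retyping it. Here is a minimal sketch (assuming bash, and file names following the slurm.%j.out pattern set in the script) that prints the most recent output file in the current folder:

```shell
# Find the newest slurm.<jobid>.out file and print it
latest_out=$(ls -t slurm.*.out 2>/dev/null | head -n 1)
if [ -n "$latest_out" ]; then
  cat "$latest_out"
else
  echo "no slurm output files found"
fi
```

The same pattern works for the .err files by changing the suffix.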

 

The MARVIN cluster allows users to run parallel MATLAB code. You can learn more about MATLAB's parallel functions in the MathWorks documentation.

We are going to use parfor to execute parallel for-loops and to get a better understanding of how to run parallel code on the cluster.

First of all, prior to performing parallel computations, we have to set up the parallel environment. We start by getting a handle to the cluster profile we will run on, using the parcluster function. Then we open a pool of workers with parpool, where str2num(getenv('SLURM_CPUS_ON_NODE')) reads the number of CPUs Slurm has allocated on the node; you control this in the sbatch file with options such as --cpus-per-task and --ntasks, and if you do not set them, Slurm's default allocation applies. Note that the 'local' profile runs all workers on a single node.

pc = parcluster('local');

parpool(pc, str2num(getenv('SLURM_CPUS_ON_NODE')));

parfor i = 1:100

    disp(i)

end
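To see what str2num(getenv('SLURM_CPUS_ON_NODE')) reads, you can inspect the variable from a shell inside a job. A minimal sketch, using 4 as a hypothetical value that Slurm might set:

```shell
# Hypothetical value: inside a real job, Slurm exports this automatically
export SLURM_CPUS_ON_NODE=4
# MATLAB's getenv('SLURM_CPUS_ON_NODE') would return the string "4",
# which str2num converts to the number of pool workers requested
echo "$SLURM_CPUS_ON_NODE"
```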

 

Save this code in a .m file. Now we have to prepare the sbatch file:

#!/bin/bash

#SBATCH --job-name=mat_parfor

#SBATCH --cpus-per-task=2

#SBATCH --mem-per-cpu=4G

#SBATCH --nodes=2

#SBATCH --ntasks=16

#SBATCH -o slurm.%j.out

#SBATCH -e slurm.%j.err

 

echo "Running mat_parfor.m, on $SLURM_JOB_NUM_NODES nodes"

 

module load MATLAB/2018a

matlab -nodisplay < [name_of_your_matlab_file].m

Note that we do not pass -nojvm here: parpool needs the Java Virtual Machine, so disabling it would make the parallel code fail.

 

Upload your MATLAB code as described in the simple example above. Log in to the cluster and go to the folder where you saved the files. Now submit the sbatch file:

$ sbatch -L matlab:1 [name_of_your_sbatch_file].sh

Now, type:

$ ll

You will see that two new files have been generated: slurm.(some number).out and slurm.(the same number).err. To see your code's output, type:

$ more slurm.(number).out

And if the result is not what you expected, check the .err file (more slurm.(number).err) to see what error occurred.

 

Unlike the simple case, for the parallel version we cannot use the interactive session, since by default it only provides a single node.