Launch a Simulation via Linux Terminal
- Source set_nFX_environment.sh (*.csh for csh/tcsh shells) from the nanoFluidX installation directory.
  Note: This sets paths to the CUDA and MPI executables packaged with nanoFluidX.
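  For example, assuming nanoFluidX is installed under /opt/Altair/nanoFluidX (an illustrative path; use your actual installation directory):
  source /opt/Altair/nanoFluidX/set_nFX_environment.sh
  # csh/tcsh users source the *.csh variant instead:
  # source /opt/Altair/nanoFluidX/set_nFX_environment.csh
  echo $nFX_SP   # if the script also defines $nFX_SP (used in the launch command below), this prints the solver path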
- Navigate to the directory containing the nanoFluidX case (*.cfg and *.prtl files).
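  For example (the path is illustrative):
  cd ~/cases/EGBX_1mm
  ls   # should list EGBX_1mm.cfg and EGBX_1mm.prtl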
- Execute nvidia-smi.
  If the NVIDIA drivers are properly installed, this command lists the available GPU devices. Determine the number of GPUs according to the number of particles: ensure there are at least 2M particles per GPU to scale efficiently. To quickly count the number of lines (particles) inside the .prtl file from the terminal, use the wc command:
  wc -l EGBX_1mm.prtl
  5673046 EGBX_1mm.prtl
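  As a quick sketch, the same count can be turned into the largest GPU count that still keeps at least 2M particles per GPU (the variable names are illustrative):
  PARTICLES=$(wc -l < EGBX_1mm.prtl)
  MAX_GPUS=$(( PARTICLES / 2000000 ))   # floor division: keeps >= 2M particles per GPU
  [ "$MAX_GPUS" -lt 1 ] && MAX_GPUS=1   # very small cases still need one GPU
  echo "$MAX_GPUS"                      # 5673046 particles -> 2 GPUs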
- Once you know which GPUs to use, enter the launch command string:
CUDA_VISIBLE_DEVICES=0,1,2,3 nohup mpirun -np 4 $nFX_SP -i EGBX_1mm.cfg &> output.txt &
  CUDA_VISIBLE_DEVICES=0,1,2,3   Set the GPUs you want to use, based on the GPU ID number. NB: If you are going to use all the GPUs in a machine, this is not required.
  nohup                          Keep the job running if the ssh connection is interrupted.
  mpirun                         Launch Open MPI.
  -np 4                          Number of GPUs/ranks to be used for the simulation. Must match the number of GPUs listed in CUDA_VISIBLE_DEVICES.
  $nFX_SP                        nanoFluidX binary. NB: On some systems this may require the full path to the executable.
  -i EGBX_1mm.cfg                Specify the input file (*.cfg) for the solver.
  &> output.txt                  Redirect all output (including error messages) to a log file.
  &                              Send the job to the background.
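  Once the job is in the background, standard tools can be used to check on it (nothing nanoFluidX-specific is assumed here):
  tail -f output.txt   # follow the solver log as it is written
  nvidia-smi           # confirm the ranks are running on the intended GPUs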