---
title: Build
subtitle: Building MPAS on NSF NCAR Derecho
---

It's good practice to compile the model in an interactive job,
to avoid stressing the login nodes,
_and_ to ensure that the nodes detected at build time are the same as those used at run time[^derecho-nodes].

[^derecho-nodes]:
    On Derecho, the login nodes have the
    [same processors](https://ncar-hpc-docs.readthedocs.io/en/latest/compute-systems/derecho/#derecho-hardware)
    as the compute nodes, but this is not always the case on other systems.

```bash
qsub -I -l walltime=3600 -l select=1:ncpus=4:mem=80gb -A UTAM0025 -q develop
```

After our session starts, we need to load some modules.

::::{tab-set}

:::{tab-item} Intel 2023

```{code} bash
:filename: modules-intel-2023.sh
:caption: These are the modules we've used for the referenced papers.

# NCAR Derecho modules for MPAS-A
# `source` or `.` this file to use

module --force purge
module load ncarenv/23.06
module load intel/2023.0.0
module load cray-mpich/8.1.25
module load parallel-netcdf/1.12.3

# Hack to get pnetcdf working again
export LD_LIBRARY_PATH="/glade/u/apps/derecho/23.06/spack/opt/spack/parallel-netcdf/1.12.3/cray-mpich/8.1.25/oneapi/2023.0.0/blyr/lib:$LD_LIBRARY_PATH"
```

:::

:::{tab-item} Intel 2025

```{code} bash
:filename: modules-intel-2025.sh
:caption: Updated modules, using `ifx` instead of `ifort`.

# NCAR Derecho modules for MPAS-A
# `source` or `.` this file to use

module --force purge
module load ncarenv/24.12
module load intel/2025.1.0
module load cray-mpich/8.1.29
module load parallel-netcdf/1.14.0

# Tell mpifort to use ifx
export MPICH_FC=ifx
```

:::

::::

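After sourcing one of the module files above, it can be worth confirming that the environment looks right before building. A minimal sketch (the guard is only there so the snippet degrades gracefully on systems without an environment-modules command):

```shell
# Sanity check: list the loaded modules, if a `module` command/function exists
if command -v module >/dev/null 2>&1; then
  module list
else
  echo "no module command found; are you on an HPC system?"
fi
```

On Derecho you should see the `ncarenv`, compiler, `cray-mpich`, and `parallel-netcdf` modules from the file you sourced.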
```{tip}
NCAR also [provides](https://ncar-hpc-docs.readthedocs.io/en/latest/pbs/)
`qcmd` and `qinteractive` shortcut commands that you can use,
but the above `qsub` should work on other PBS-based systems as well.

The [`develop` queue](https://ncar-hpc-docs.readthedocs.io/en/latest/pbs/charging/#derecho-queues)
has a 6-hour walltime limit and is intended for testing and development.
Above we have requested 3600 seconds (1 hour).
```

First we build the model initialization program,
which we use to generate initial conditions and other model input files.

```bash
make -j4 intel CORE=init_atmosphere
```

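If you have previously built a different core in the same source tree, the MPAS build system generally needs a clean before switching cores; `make clean` takes the same `CORE=` argument as the build. A sketch (the `Makefile` guard is just so the snippet is safe to run outside the MPAS source directory):

```shell
# When switching cores (e.g. from init_atmosphere to atmosphere),
# clean the previous build first, naming the core you are about to build
if [ -f Makefile ]; then
  make clean CORE=atmosphere
else
  echo "run this from the top-level MPAS source directory"
fi
```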
Now, assuming the `init_atmosphere_model` executable was built successfully,
we can build the main program.

```bash
make -j4 intel CORE=atmosphere
```

If this completes successfully, we should have an `atmosphere_model` executable.
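As a quick sanity check, you can confirm that both builds produced their executables (run this from the top-level MPAS directory, where the two `make` commands above were issued):

```shell
# Verify that both executables from the two builds above exist
for exe in init_atmosphere_model atmosphere_model; do
  if [ -x "./$exe" ]; then
    echo "found $exe"
  else
    echo "MISSING $exe"
  fi
done
```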