Building WRF on ACT systems
WRF has many build options, and the right combination tends to be unique to each installation. This article is meant to get you up and running with WRF as quickly as possible without having to rediscover those settings. Below are the steps to build WRF 3.6 and all of its dependencies, current as of August 2014.
Background
Systems installed by ACT make use of Environment Modules. Modules let you load predefined environments, and in this case they’re used to load different MPI and compiler environments. Choosing a module to set a particular environment is important to having a working WRF build, as the dependencies all need to be built by the same compiler and referencing the same MPI library.
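As a quick illustration (the module name shown follows the /intel convention described below), a typical session looks like:
module avail                    # list the environments installed on the system
module load openmpi-1.8/intel   # load an MPI + compiler environment
module list                     # confirm what is currently loaded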
The default compiler in CentOS 6 installations is gcc 4.4.7. This version of GCC does not support newer CPU instruction sets such as AVX and AVX2, so the default gcc should not be used when building WRF. All module configurations ending in /gcc use this default compiler.
Other modules installed by ACT by default use gcc-4.7.2. While this version of GCC does support the AVX/AVX2 extensions used by newer CPUs, the installed version was not configured with the --with-ppl and --with-cloog options at build time, which are required for WRF. Thus, the /gcc-4.7.2 compiler should not be used either.
If you have the Intel compilers available and the compiler environment was loaded when the MPI modules were built, then all MPI libraries and their respective modules will also have been built with the Intel compilers. These modules end in /intel and provide an environment that successfully builds WRF. If you are using version 14 of the Intel compilers, make sure you have at least 14.0.3 installed, as earlier versions have bugs that will cause the builds to fail.
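A quick way to confirm which compiler version a loaded /intel module provides (these are standard Intel compiler commands):
module load intel
icc --version    # should report at least 14.0.3 if you are on the 14.x series
ifort --version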
WRF-ARW: Intel compilers 14.0, OpenMPI 1.8
Start off by loading the environment as described above.
module load intel openmpi-1.8/intel
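Before building anything, it is worth verifying that the MPI wrappers in your PATH come from the Intel-built OpenMPI; for example:
which mpicc mpif90   # should resolve inside the openmpi-1.8/intel installation
mpicc --showme       # the underlying compiler should be icc
mpif90 --showme      # the underlying compiler should be ifort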
szlib
./configure
make
make check
make install
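Note that the HDF5 step below points --with-szlib at /act/src/szip-2.1/szip/lib, so if you want szip installed there rather than the default location, pass a matching prefix to configure (the path is an assumption taken from that HDF5 line):
./configure --prefix=/act/src/szip-2.1/szip   # hypothetical prefix matching --with-szlib below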
hdf5
FC=ifort CC=icc ./configure --prefix=/opt/openmpi1.8-intel --with-szlib=/act/src/szip-2.1/szip/lib --enable-fortran
make
make check
make install
make check-install
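A quick sanity check that HDF5 landed in the expected prefix (both are standard HDF5 tools/files):
/opt/openmpi1.8-intel/bin/h5dump --version   # should print the version just built
ls /opt/openmpi1.8-intel/lib/libhdf5*        # libraries NetCDF will link against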
netcdf
git clone https://github.com/Unidata/netcdf-c.git; cd netcdf-c # To update, just git pull
autoreconf -i -f
CC=icc F77=ifort FC=ifort CPPFLAGS="-I/opt/openmpi1.8-intel/include" LDFLAGS="-L/opt/openmpi1.8-intel/lib" ./configure --prefix=/opt/openmpi1.8-intel --enable-shared
make clean
make check
sudo make install
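The netcdf-c install provides nc-config, which makes it easy to verify that NetCDF-4/HDF5 support was actually enabled:
/opt/openmpi1.8-intel/bin/nc-config --version   # NetCDF C library version
/opt/openmpi1.8-intel/bin/nc-config --has-nc4   # should print "yes"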
git clone https://github.com/Unidata/netcdf-fortran.git; cd netcdf-fortran # To update, just git pull
autoreconf -i -f
CC=icc NETCDF=/opt/openmpi1.8-intel LD_LIBRARY_PATH=/opt/openmpi1.8-intel/lib:${LD_LIBRARY_PATH} F77=ifort FC=ifort CPPFLAGS="-I/opt/openmpi1.8-intel/include" LDFLAGS="-L/opt/openmpi1.8-intel/lib" LIBS="-lnetcdf -lhdf5_hl -lhdf5 -lz -lcurl" ./configure --prefix=/opt/openmpi1.8-intel --enable-shared
make clean
make check
sudo make install
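Likewise, nf-config from netcdf-fortran can confirm the Fortran interface is in place and show the link flags WRF's configure will pick up:
/opt/openmpi1.8-intel/bin/nf-config --version
/opt/openmpi1.8-intel/bin/nf-config --flibs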
WRFV3
./clean -a
export NETCDF=/opt/openmpi1.8-intel
export PHDF5=/opt/openmpi1.8-intel
export JASPERLIB=/usr/lib64
export JASPERINC=/usr/include/jasper
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
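These variables must be set in the same shell in which you run ./configure and ./compile; a quick check before configuring:
env | grep -E 'NETCDF|PHDF5|JASPER|WRFIO'   # all five variables should appear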
./configure
Choose 16. Linux x86_64 i486 i586 i686, ifort compiler with icc (dm+sm)
Choose 1 (basic nesting)
./compile wrf
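If the build succeeds, the executable is placed under main/; check for it before moving on to WPS:
ls -l main/*.exe   # wrf.exe should be present after a successful compile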
WPS
# (uses the WRF environment variables exported above)
./configure
# Choose option Linux x86_64, Intel compiler with dmpar or dm+sm
# WPS does not need to run in parallel, and the parallel builds fail anyway.
Edit configure.wps and change the SFC line to the following so the Intel compiler build does not fail when WRF was built with dm+sm:
SFC = ifort -openmp
./compile |& tee compile.log
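After the compile finishes, the three WPS executables should exist as links in the top-level WPS directory, and the log can be checked for failures:
ls -l geogrid.exe ungrib.exe metgrid.exe
grep -i error compile.log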
ncview
yum install libXaw-devel
./configure --with-nc-config=/opt/openmpi1.8-intel/bin/nc-config
make
make install
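ncview takes a NetCDF file as its argument. Since NetCDF was built as a shared library under /opt/openmpi1.8-intel, you may need that lib directory on your library path; the wrfout filename below is only an illustration of WRF's output naming:
export LD_LIBRARY_PATH=/opt/openmpi1.8-intel/lib:$LD_LIBRARY_PATH
ncview wrfout_d01_2014-08-01_00:00:00   # hypothetical WRF output file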