==============================================================================
SHARC based on the GFDL land model LM4 set-up
------------------------------------------------------------------------------

REQUIREMENTS AND DEPENDENCIES:

* Code compilation:
  - FORTRAN compiler. Any compiler that implements the F2003 standard (or a
    sufficiently large subset of it) should work.
  - C compiler
  - NetCDF library, with FORTRAN interfaces
  - Perl
  - git

* Data preparation and pre-processing:
  - Python
  - netCDF4 (http://unidata.github.io/netcdf4-python/)
  - numpy

* Output visualisation:
  - any visualisation tool capable of reading netCDF files (Python + netCDF4,
    MATLAB, R, ...)

* The directory path should not contain spaces: while reasonable efforts have
  been made to make the compile and run scripts as general as possible, there
  is no guarantee that spaces in the working directory paths would not break
  anything. Therefore, the full path to the directory where the model is
  unpacked should not contain spaces, tabs, or any other characters that can
  be interpreted as word separators by Unix scripts and utilities.

========================================================================================
CHECKOUT:

The checkout script is provided for reference only, since parts of it rely on
access to the internal GFDL git repository, which users outside GFDL are
unlikely to have. However, the land model itself (subdirectory "lm4p" in the
source code tree) is publicly accessible from
https://github.com/NOAA-GFDL/lm4.git. In the future, we should be able to
provide checkout of the same (or equivalent) code from publicly released code
on GitHub.

========================================================================================
COMPILATION:

This configuration was tested with the Intel compiler v15.0.3 and GNU FORTRAN
v6.3.0 on Mac OS X. It should also compile with Intel or GNU FORTRAN
compilers on other platforms. Most of the model and infrastructure is written
in FORTRAN 2003, which by this time should be well supported by all available
compilers.

Before compiling, you will need to install the NetCDF library
(http://www.unidata.ucar.edu/software/netcdf/), with its dependencies and
(important, but not default!) its FORTRAN interfaces.

There are three compilation scripts in this directory:
   compile-ifort     -- for the Intel FORTRAN compiler
   compile-gfort     -- for the GNU FORTRAN compiler
   mac-compile-gfort -- for the GNU FORTRAN compiler on Mac OS

LM4-SHARC has been compiled using ./mac-compile-gfort (in a Mac OS
environment). For example, to compile LM4-SHARC using GNU FORTRAN on a Mac,
go to the directory where you unpacked the archive and execute the following
command:

./mac-compile-gfort

It will take a few minutes. You will see the message "NOTE: make successful"
if everything went OK. If not, look for error messages.

The compile scripts can take several command-line options; here is short
usage information:

compile-gfort [-hfp] [-t target]
   -h = print this help message and exit
   -f = force recompile from scratch
   -p = regenerate pathnames
   -t (repro|debug|fast|O0) = set compilation target options; default is
        "repro"

Note that in the compile script the LDFLAGS variable explicitly specifies the
path where the NetCDF libraries are installed. On your computer the library
may be in a different place, so most likely you will need to change the path
in the LDFLAGS variable; see the example below.
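For instance, if NetCDF were installed under the prefix /opt/local (a
hypothetical location; substitute whatever prefix is used on your system),
the LDFLAGS setting in the compile script might look like:

   LDFLAGS="-L/opt/local/lib -lnetcdff -lnetcdf"

On many systems the command "nf-config --flibs" (installed with the NetCDF
FORTRAN interfaces) prints the correct set of linker flags for your
installation.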
The compile script uses a couple of helper tools from the "tools"
subdirectory: "mkmf" to generate the dependencies among source code files
(this is why Perl is needed), and "git-version-string" to record the versions
of the code, for provenance checks.

There is a similar script, called "compile-ifort", that uses the Intel
FORTRAN compiler; this is the compiler we primarily use in our development,
as it provides some performance benefits. Admittedly, the performance of code
compiled with the open-source GNU FORTRAN keeps improving as that compiler
develops, so the difference is not huge.

========================================================================================
RUNNING PREINDUSTRIAL (CONSTANT CO2) EXPERIMENTS:

Use the provided script "run-lm4p1-pi" to run your experiments. This script
assembles the necessary data in a temporary directory, runs the model
executable, stores the output in the specified output directory, and cleans
up the temporary directory. You can control the parameters of your run with
command-line arguments, and by modifying certain settings files.

The script is set up to use the first 30 years of the Sheffield et al. (2006)
forcing (1948-1977), recycling it as necessary. The atmospheric CO2
concentration is set to its pre-industrial value (around 284 ppmv). There is
no human land use in this run, so there is no agriculture of any sort.

Here is brief usage information for the run script's command-line arguments:

run-lm4p1-pi [-vdhsw] [-x executable] [-i init-conditions] [-b initial-date]
     [-I input-dir] [-W work-dir] -t run-time -P point -O output-dir
   -v = increase verbosity
   -d = use debug executable
   -x = use specified executable
   -h = print this help message and exit
   -s = do not delete temporary working directory
   -w = turn on watch point output
   -t = time intervals to run, for example 3y,2m,1d
   -i = use specified initial condition
   -b = specify initial date of simulation
   -P = use specified point
   -I = use specified input dir
   -O = use specified output directory
   -W = use specified working directory; implies -s

For example, this command would run the SHARC model for 10 years, for the
Providence location:

./run-lm4p1-pi-hb-providence -P providence -v -O providence_test1 -t 10y

In this example, the output data are saved in the directory
"providence_test1". The -P ("point") argument of the run script specifies the
location you want to run for; -t specifies the length of the run.

After the successful completion of this script, the output directory
("providence_test1" in this example) will contain three subdirectories:

"ascii" contains a few textual outputs from the model. These files are useful
when you want to make sure what exact settings were used by the experiment,
since the model prints them all to the log file.

"restart" contains the final and intermediate states of the model. These
states can be used to continue the experiment, or to branch off another one.

"history" is where the real output data are. After a run it will contain a
number of netCDF files with all the variables you chose to save.

You can control which variables you want to save, with what frequency, and in
what files, by modifying the diagnostic table files located in the
"input/diag/" directory. You can also turn various diag tables on and off in
the run script, by modifying the definition of the variable "diag_tables" at
the beginning of the script. A brief description of the diag table file
format can be found at the end of this file; a small example follows.
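For instance, combining the file and field entry formats documented at the
end of this file, saving monthly mean LAI from the "vegn" module would take a
pair of entries like these (a sketch assembled from the format examples
below):

   "land_month", 1, "months", 1, "days", "time"
   "vegn", "lai", "lai", "land_month", "all", .true., "none", 2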
The list of the diag fields available in the model can be found in the file
"doc/diag-fields.html".

A variety of tools exist for reading and visualizing model output data saved
in netCDF format. We use ferret (https://ferret.pmel.noaa.gov/Ferret/)
extensively; some people like ncview
(http://meteora.ucsd.edu/~pierce/ncview_home_page.html). Of course, MATLAB
can read netCDF files, as can Python
(https://code.google.com/p/netcdf4-python/).

========================================================================================
TUNING GROUNDWATER PARAMETERS:

Groundwater parameters such as saturated hydraulic conductivity and effective
porosity can be adjusted in the file src/lm4p/soil/soil_tile.F90 (around
lines 365-390). To run SHARC for a catchment, catchment properties such as
total_catchment_area, reach_length, reach_width, hyd_cond_horz_gw,
eff_porosity_gw and chrt_hlsp_slope_angle (i.e., bedrock slope) have to be
defined as model arguments. For example:

real    :: hyd_cond_horz_gw      = 0.001  ! saturated horizontal hydraulic
                                          ! conductivity (mm/s)
real    :: eff_porosity_gw       = 0.05   ! effective porosity (m3/m3)
real    :: chrt_hlsp_slope_angle = 0.00   ! bedrock slope angle in radians
                                          ! (1 rad * 180/pi = degrees;
                                          ! e.g., 10 deg = 0.174 rad)
real    :: total_catchment_area  = 990400 ! total catchment area (m2)
real    :: reach_length          = 1300   ! stream reach length (m)
real    :: reach_width           = 1.0    ! stream reach width (m)
integer :: total_number_of_init_tiles = 100 ! number of parent tiles

========================================================================================
TROUBLESHOOTING:

If the model crashes, the typical error message is, unfortunately, not very
informative: usually it is "temperature is out of bounds in calculations of
saturated water vapor pressure." Even if the diagnostics that follow point to
a line number in the code, it is the line that triggered the check, not the
line where the problem actually originated.

The first thing to try in such cases is to compile the executable with debug
flags (the "-t debug" command-line option of the compilation scripts) and
re-run the segment where the crash occurred with the debug executable. There
is a chance that the extended checks the compiler inserts into the debug
executable will catch the root cause of the problem, like a floating point
overflow, or an index out of bounds.

If that does not help, the model includes its own debug capability, the
so-called "watch point": it can print a lot of internal state information as
the calculation progresses, which can be analyzed for possible issues that
trigger the crash. Unfortunately, doing so requires knowledge of the model
code and of the meaning of the model variables reported in the output. The
run scripts have a -w command-line option that triggers this capability.

Note that this option can produce a huge amount of output, literally
gigabytes of text, so you may want to minimize the amount of time the model
runs with it. One way to do this is to modify the run script by adding
start_watching and stop_watching to the land_debug_nml. For example, if the
model crashes on, say, 1991-07-01 13:30:00, you can set

&land_debug_nml
   watch_point = $watchPoint
   start_watching = 1991, 07, 01
   stop_watching  = 1991, 07, 03
/

to start saving the data at the beginning of that day.

To save the watch point output to a file for analysis, simply redirect the
output of the run script to a file:

./run-lm4p1-post1978 -P Audubon -v -O hist -t 60y -w > crashlog.txt
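Since the resulting log can run to gigabytes, standard Unix text tools are
the most practical way to narrow it down. For example (the search string is
only an illustration; look for whatever variable name or time stamp you are
interested in):

   tail -n 200 crashlog.txt
   grep -n "temperature" crashlog.txt | tail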
========================================================================================
FILES:

src/                -- directory containing the model code
input/              -- input data directory
   data/            -- prescribed data used by the model
   diag/            -- diagnostic tables, some used by the runscript, some
                       provided as examples
   forcing/         -- meteorological forcing driving the model
   grids/           -- grid specifications, for each of the locations
tools/              -- tools used for compilation, for creation of grid spec
                       files, etc.
compile-gfort       -- compilation script for GNU FORTRAN
compile-ifort       -- compilation script for Intel FORTRAN
README.txt          -- this file
run-lm4p1-pi        -- run script
run-lm4p1-pre1978   -- run script
run-lm4p1-post1978  -- run script

========================================================================================
DIAG TABLE CONFIGURATION FILE ENTRIES (not all input values are used):

"file_name", output_freq, "output_units", format, "time_units", "long_name"

where
   output_freq:  > 0  output frequency in "output_units"
                 = 0  output frequency every time step
                 =-1  output frequency at end of run
   output_units = units used for output frequency
                  (years, months, days, minutes, hours, seconds)
   time_units   = units used to label the time axis
                  (days, minutes, hours, seconds)

Examples:
   "land_month",   1, "months", 1, "days", "time" -- for monthly output
   "land_daily",   1, "days",   1, "days", "time" -- for daily output
   "land_each",    0, "days",   1, "days", "time" -- for output at each time
                                                     step (produces huge
                                                     files)
   "land_static", -1, "days",   1, "days", "time" -- output at the end of the
                                                     run, typically for
                                                     static data

FORMAT FOR FIELD ENTRIES (not all input values are used):

"module_name", "field_name", "output_name", "file_name", "time_sampling",
time_avg, "other_opts", packing

where
   time_avg = .true. or .false.
   packing  = 1  double precision
            = 2  float
            = 4  packed 16-bit integers
            = 8  packed 1-byte (not tested?)

For example:
   "vegn", "lai", "lai", "land_month", "all", .true., "none", 2
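Finally, as a quick sanity check on the history output, here is a minimal
Python sketch using the netCDF4 package listed under the requirements above.
The file and variable names follow the "land_month"/"lai" examples and are
only illustrative; the actual history file names may carry a date prefix, so
list the "history" directory of your run to see what was produced:

import netCDF4

# Open a monthly history file from the Providence example run; the exact
# file name depends on the diag tables used and on the run dates.
nc = netCDF4.Dataset("providence_test1/history/land_month.nc")

time = nc.variables["time"]
lai  = nc.variables["lai"][:]  # masked array; fill values are masked out

print("time range:", netCDF4.num2date(time[0], time.units),
      "to", netCDF4.num2date(time[-1], time.units))
print("mean LAI over the run:", lai.mean())

nc.close()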