In this exercise you will get a copy of the Community Earth System Model, create a case, and then modify, build, and run the code. CESM is very user-friendly in the sense that once you have it installed on your local computer, you need to execute only a few commands in order to run the model. These commands can be summarized as follows: create a new case with create_newcase, configure it with configure, build it with the case build script, and run it with the case submit script.
Here you will learn how each command works. Then you will use CESM to compute the surface mass balance of the Greenland ice sheet.
CESM has already been installed on a BNU supercomputer. So you just need to log in and copy the code to your home directory.
First, open a terminal window and log onto the machine.
> ssh -X -l login_name 210.31.66.230
Then enter your password.
You should now be in your home directory, /home/login_name. To see the directory contents:
> ls
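If you want to confirm that you really are in your home directory, you can print the working directory:
> pwd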
Now copy a CESM tarball to your home directory:
> cp /home/jidy/cesm1_0_2.tar.gz .
Unzip and untar the file:
> tar -xzvf cesm1_0_2.tar.gz
A long list of files will scroll down the screen. Look again at the contents of your home directory:
> ls
You should now have a directory called cesm1_0_2, containing a recent release of CESM version 1.0. Go to this directory:
> cd cesm1_0_2
Take a quick look at the code:
> ls
You will see directories called models and scripts. As we will see, the scripts directory contains useful scripts for creating new cases and tests. Let's first look in the models directory:
> cd models
> ls
The models directory contains a subdirectory for each model component, including atm, lnd, ocn, ice, and glc.
NOTE: In CESM, ice refers to the sea ice model. The ice-sheet model is not in the ice directory, but in the glc directory. Let's take a quick tour of glc:
> cd glc
> ls
The active ice sheet model is in the cism (Community Ice Sheet Model) directory. The directories sglc and xglc contain the stub and dead components.
Go to the CISM directory:
> cd cism
> ls
The source code resides in several subdirectories.
The other CESM models (atm, lnd, ocn, and ice) are structured in a similar way.
There are many ways to configure and run CESM. For example, you can run various combinations of active, data, and stub components at different grid resolutions. A particular combination of components and grids is called a case .
You will now create a case that consists of an active land component (CLM, the Community Land Model), an active ice-sheet component (CISM, the Community Ice Sheet Model), a data atmosphere component, and stub ocean and sea ice components. The data atmosphere component is from an NCEP reanalysis at T62 (roughly 1.9 deg) resolution. The land model will (among many other things) compute a surface mass balance that is used to force the ice sheet model.
To create a case, go to the scripts directory.
> cd ~/your-home-dir/cesm1_0_2/scripts
> ls
We will run a script called create_newcase. A simple way of creating a new case (if running on an NCAR machine called bluefire) might look as follows:
> ./create_newcase -case casename -mach bluefire -res 1.9x2.5_gx1v6 -compset IG
where -case gives the name of the new case, -mach the machine on which it will run, -res the model resolution, and -compset the component set.
Many other compsets are allowed. For examples, please see the CESM User's Guide: http://www.cesm.ucar.edu/models/cesm1.0/cesm/
Here is the command you would use on the BNU machine to create an IG case called (naturally enough) IGcase:
> ./create_newcase -case IGcase
     -mach generic_linux_intel
     -compset IG
     -res f19_g16
     -scratchroot ~/your-home-dir/swgfs
     -din_loc_root_csmdata /mnt/swgfs/pub/CCSM4/inputdata
     -max_tasks_per_node 8
where -scratchroot specifies the directory in which the model will be built and run, -din_loc_root_csmdata points to the root of the CESM input-data directory, and -max_tasks_per_node gives the number of MPI tasks available on each node.
NOTE: The different parts of the command are shown on separate lines above to make them easier to read. But you should type them all on one line, like this:
> ./create_newcase -case IGcase -mach generic_linux_intel -compset IG -res f19_g16 -scratchroot ~/your-home-dir/swgfs -din_loc_root_csmdata /mnt/swgfs/pub/CCSM4/inputdata -max_tasks_per_node 8
Now create a case using the command above. If successful, you will see on your screen a message like this:
> Successfully created the case for generic_linux_intel
Next, you will configure and build CESM for your IG case. Go to your new case directory:
> cd IGcase
> ls
There are several files with the suffix .xml. These files are used to set various model options and parameters.
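To list just these files, you can use a shell wildcard:
> ls *.xml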
Look at env_conf.xml:
> less env_conf.xml
For ice-sheet modeling, a setting of interest is $GLC_GRID. The default value is gland20, which refers to the 20-km Greenland grid. If you wanted to run with a finer grid, you would change this to gland10 or gland5. But for this exercise, the 20-km grid works fine.
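If you did want to try a finer grid later, you could change this setting with the xmlchange command introduced below (the same -file, -id, and -val syntax used for env_mach_pes.xml); for example, to select the 10-km grid:
> xmlchange -file env_conf.xml -id GLC_GRID -val gland10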
Next, look at env_mach_pes.xml:
> less env_mach_pes.xml
This file determines how many processors are assigned to each model component. For our case the processors will be assigned to each component sequentially during each time step. In other words, all the processors available will be used first to run the atmosphere model, then to run the land model, and so on until the end of the time step.
The default number of processors is 64. Since we have a limited number of processors available, we will change env_mach_pes.xml to assign 16 processors sequentially to each model.
Although xml files can be edited manually, it is safer to use the xmlchange command. Type the following to make changes in env_mach_pes.xml:
> xmlchange -file env_mach_pes.xml -id NTASKS_ATM -val 16
> xmlchange -file env_mach_pes.xml -id NTASKS_LND -val 16
> xmlchange -file env_mach_pes.xml -id NTASKS_ICE -val 16
> xmlchange -file env_mach_pes.xml -id NTASKS_OCN -val 16
> xmlchange -file env_mach_pes.xml -id NTASKS_CPL -val 16
These changes must be made before you configure the code. Currently, the glc component runs on only one processor, so you should not change the value of NTASKS_GLC.
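If you want to double-check that your changes took effect, you can search the file for the NTASKS entries:
> grep NTASKS env_mach_pes.xml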
Now configure the code by typing this:
> ./configure -case
Look again in the case directory:
> ls
If the configuration is successful, you will have several new scripts, including IGcase.generic_linux_intel.build and IGcase.generic_linux_intel.submit, which you will use shortly to build and submit the model.
Now the code is ready to build. But before we build it, let's make things more interesting by making a small change. Your IG case is configured to run with atmospheric data for the year 2003 from an NCEP reanalysis. What if the surface air temperature were higher or lower than the values in the reanalysis?
To find out, you can change a single line of code in CLM. The safest way to change a file is to make a copy of the file in one of your SourceMods directories and change it there, leaving the original version unchanged. In this way you can easily keep track of your code changes.
The file you will modify is called clm_driverInitMod.F90, and it is part of the CLM source code. Type the following to copy it to the CLM SourceMods directory:
> cp ~/your-home-dir/cesm1_0_2/models/lnd/clm/src/biogeophys/clm_driverInitMod.F90 SourceMods/src.clm/
Now go to the CLM SourceMods directory:
> cd SourceMods/src.clm
> ls
There will be a single file, clm_driverInitMod.F90. When the code is built, any files that are in this directory will automatically replace files of the same name in the CLM source code directory (in this case, ~/your-home-dir/cesm1_0_2/models/lnd/clm/src/biogeophys/clm_driverInitMod.F90).
You will edit the version of clm_driverInitMod.F90 in the CLM SourceMods directory. Find this line of code:
> tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) ! sfc temp for column
This part of the code sets the surface air temperature, tbot_c, for glacier columns. Recall that in each grid cell containing glacial ice, the glaciated area is divided into several columns, each with a different elevation. The surface air temperature at the mean gridcell elevation is tbot_g. For each column, tbot_g is adjusted by the lapse rate lapse_glcmec times the difference between the local column elevation hsurf_c and the mean gridcell elevation hsurf_g, so that higher columns are colder and lower columns are warmer.
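If you have trouble locating the line, you can search for its comment text from within the SourceMods/src.clm directory (grep -n also prints the line number):
> grep -n "sfc temp for column" clm_driverInitMod.F90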
You will change this line to something like this:
> tbot_c = tbot_g-lapse_glcmec*(hsurf_c-hsurf_g) + 1.0 ! sfc temp for column, plus one degree
When you are ready to make this change, please tell one of the instructors. We will make sure that each group chooses a different value for the temperature change.
In this way you will have a crude version of a global warming (or cooling) simulation. Later we will see how this temperature change affects the surface mass balance of the Greenland ice sheet.
Once you've made your code changes in SourceMods/src.clm, go back to the case directory (that is, go back to the ~/your-home-dir/cesm1_0_2/scripts/IGcase/ directory) and build the model:
> cd ../..
> ls
> nohup ./IGcase.generic_linux_intel.build &
Recall that the nohup command lets the build keep running even if your connection drops, and the & symbol runs the job in the background and gives you your shell prompt back (thus, if your putty session fails, your build will still take place and you can log back in later and check on it).
If the build fails, you will get an error message pointing you to a log file in another directory. If you look at the bottom of that log file, you will usually see what went wrong. Because the job is running in the background, you will not see any updates on the build printed to your screen. In order to check on the status of your build you can type
> less nohup.out
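If you would rather watch the build output as it is written, tail works too (press Ctrl-C to stop watching; this does not affect the build itself):
> tail -f nohup.out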
If you are not sure what to do, please ask for assistance from an instructor.
The build will take about 20 minutes, so we'll take a break now. If you rebuild later after making minor changes, the build should not take as long.
If the code builds successfully, you will see on your screen a message like this:
> CCSM BUILDEXE SCRIPT HAS FINISHED SUCCESSFULLY
Now you are ready to run CESM. Go to your case directory if you are not there already:
> cd ~/your-home-dir/cesm1_0_2/scripts/IGcase
First look at the file env_run.xml :
> less env_run.xml
This file contains a number of options that you may want to change after you build the code, but before you submit a run.
For now, the most important options are STOP_N and STOP_OPTION, which determine the length of the model run. The default values are STOP_OPTION = ndays and STOP_N = 5. This means the code will run for 5 days--just long enough to make sure nothing is seriously broken. It is a good idea to try a 5-day run before attempting a longer run.
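If you just want to confirm these values without paging through the whole file, you can search for the STOP_ settings:
> grep STOP_ env_run.xml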
Go ahead and submit the batch job:
> ./IGcase.generic_linux_intel.submit
If all is well, you will get a message like this:
> check_case OK
> sbatch: Submitted batch job 18242
To check the job status, type this:
> squeue
You should see on your screen a message like this:
> JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
> 18242 XPART IGcase william R 0:30 2 cn[126-127]
where JOBID is the job number, ST is the job state (R means running), TIME is the elapsed run time, and NODELIST shows which compute nodes the job is using.
If no job is listed under your user name, then your job is done.
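If the queue is busy, you can also limit the listing to your own jobs (replace login_name with your user name):
> squeue -u login_name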
If all goes well, the job will start and finish within one or two minutes. To see if the job completed, look at the output file in your case directory:
> less batch-jobid.out
If the job finished successfully, you will see some lines like this near the end:
> CCSM PRESTAGE SCRIPT HAS FINISHED SUCCESSFULLY
> Tue Mar 22 15:33:56 CST 2011 -- CSM EXECUTION BEGINS HERE
> Tue Mar 22 15:34:25 CST 2011 -- CSM EXECUTION HAS FINISHED
> (seq_mct_drv): =============== SUCCESSFUL TERMINATION OF CPL7-CCSM ===============
Some log files will have been written to your logs directory. Let's look at them:
> cd logs
> ls
These files have a suffix .gz, which means they have been compressed. Uncompress the glc log file and take a look:
> gunzip glc.log.110322-153330.gz
> less glc.log.110322-153330
NOTE: The 12-digit number is a date stamp of the form yymmdd-hhmmss. This date stamp is for a run that began on 22 March 2011 at 15:33:30. Your file name will have a different date stamp.
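Since your date stamp will differ, an easy way to find your files is to list the logs with the newest first:
> ls -lt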
Scroll through the log file. You will see information about the Glimmer-CISM input parameters, followed by some model diagnostics that are written out during the run. If the run finished successfully, you will see this near the end:
> Successful completion of glc model
There are similar log files for the coupler, land, and atmosphere components (cpl, lnd, and atm). The log file with the ccsm prefix combines diagnostics from each component.
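If you are curious, these can be uncompressed and viewed in the same way; for example, for the combined ccsm log (the wildcard will match your run's date stamp):
> gunzip ccsm.log.*.gz
> less ccsm.log.*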
Now return to your case directory:
> cd ..
Once you have finished a 5-day run, you are ready to try a longer run. We will run the model for 6 years. In the file env_run.xml , change the values of STOP_OPTION and STOP_N:
> xmlchange -file env_run.xml -id STOP_OPTION -val nyears
> xmlchange -file env_run.xml -id STOP_N -val 6
Then submit the batch job as before:
> ./IGcase.generic_linux_intel.submit
Again, you should see a message like this:
> check_case OK
> sbatch: Submitted batch job 18243
After a minute or so, check the job status:
> squeue
When everyone's job is running, we will take a break.
A 6-year run will take about 2 hours to complete. We will look at some results after lunch.