[Ncep.list.nems.announce] regtest script

Dusan Jovic dusan.jovic at noaa.gov
Wed Jun 18 19:21:28 UTC 2014


I have experimented with a new structure for the regression test script. 
The current version of the main regression test script is a huge (>3500 
line) file which I find very difficult to follow: it is hard to 
understand what's going on and hard to maintain. So I wanted to see if 
there is a better way of writing a script that runs various model 
configurations on different machines with different compiler/esmf 
combinations.

I created a nems branch ( 
https://svnemc.ncep.noaa.gov/projects/nems/branches/dusan/new_regtest ) 
in which I added a new directory (tests) that contains everything 
related to the regression tests. You can find the script rt.sh in this 
directory. It still uses almost exactly the same rt_*.sh scripts to run 
the different models, and it uses a similar approach: first set default 
values for all environment variables, then overwrite only those 
variables that differ for a particular test. However, instead of using 
a lot of IF/THEN tests to control which tests are executed on a 
particular machine, I created a new text file (rt.conf) which is read 
by the script; based on the options specified for each test, that test 
is either executed or skipped. This is a more declarative approach, as 
opposed to the imperative one we use in the current script, and I think 
it is much easier to maintain.

The code (and logic) for compiling and running tests is contained in 
one relatively small while-do loop (it fits on one page) near the end 
of rt.sh.
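To make the idea concrete, here is a minimal sketch of such a loop. I am assuming a hypothetical three-column rt.conf layout ("RUN | script | machines") for illustration only; the actual rt.conf in the branch may use different fields and options.

```shell
#!/bin/sh
# Sketch of a declarative, config-driven test loop. The rt.conf layout
# shown here is hypothetical, not the actual format from the branch.

MACHINE_ID=${MACHINE_ID:-wcoss}   # machine we are currently running on

# should_run LIST: succeed if MACHINE_ID appears in the space-separated LIST
should_run() {
  case " $1 " in
    *" $MACHINE_ID "*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example configuration; in rt.sh this would be read from rt.conf.
cat > rt.conf.example <<'EOF'
RUN | rt_nmm.sh | wcoss zeus
RUN | rt_gfs.sh | wcoss
RUN | rt_fim.sh | zeus
EOF

# Core loop: execute a test only when the current machine is listed for it.
while IFS='|' read -r action script machines; do
  script=$(echo $script)        # unquoted echo trims surrounding spaces
  if should_run "$machines"; then
    echo "would run: $script"   # the real loop would launch the job here
  else
    echo "skipping: $script"
  fi
done < rt.conf.example
rm -f rt.conf.example
```

Adding a test or enabling it on a new machine then becomes a one-line change to the config file rather than another IF/THEN branch in the script.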

I still need to check the option for creating new baseline results, and 
I haven't tried to run the nuopc tests yet. Other than that, it runs 
the standard and full regression tests just fine. I might have missed 
some of the gfs/gen tests; that needs more testing.

Please have a look at the new script and let me know what you think. I 
think there are many more things we can simplify, but this is a good 
starting point. For example, all rt_*.sh scripts use almost identical 
code to check the status of a job, which is unnecessary duplication. 
There may also be a way to unify the block of code that compares output 
files with the baseline results. Any suggestion is welcome.
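As one possible way to remove that duplication, the comparison block could be pulled into a shared helper that every rt_*.sh sources and calls. The function and variable names below are mine, not taken from the branch; this is just a sketch of the idea.

```shell
#!/bin/sh
# Sketch of a shared baseline-comparison helper. Each rt_*.sh could
# source this file instead of carrying its own copy of the logic.
# Names (check_baseline, RUN_DIR, BASELINE_DIR) are illustrative.

# check_baseline RUN_DIR BASELINE_DIR FILE...
# Compares each listed output file against its baseline copy,
# prints PASS/FAIL per file, and returns nonzero if any file differs
# or is missing.
check_baseline() {
  run_dir=$1
  base_dir=$2
  shift 2
  rc=0
  for f in "$@"; do
    if cmp -s "$run_dir/$f" "$base_dir/$f"; then
      echo "PASS: $f"
    else
      echo "FAIL: $f"
      rc=1
    fi
  done
  return $rc
}
```

A test script would then reduce its comparison step to a single call, e.g. `check_baseline "$RUNDIR" "$BASELINE" nmm_b_hst_01 nmm_b_rst_01`, and any improvement to the comparison logic would apply to every test at once.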

If people find this a viable alternative to the current version of the 
regression test script, maybe we can discuss it at our next NEMS meeting.

