[Ncep.list.fv3-announce] fv3gfs release beta test

Rusty Benson - NOAA Federal rusty.benson at noaa.gov
Fri May 12 20:36:34 UTC 2017


Sam,

Here is the excerpt from the c-shell script we use for bypassing the mpirun
wrapper on Jet.

## when using Intel MPI
##  setenv I_MPI_PIN_PROCESSOR_LIST allcores:grain=1,shift=$THREADS
##  mpiexec.hydra -n XX --envall ./cubed.x


## when using MVAPICH2

  # determine which partition you are on
  set partitionID = `cat ${PBS_NODEFILE} | cut -c1 | sort -u`

  switch ( ${partitionID} )
    case 'n':
      set cores_per_node = 8
      breaksw
    case 't':
      set cores_per_node = 12
      breaksw
    case 'u':
      set cores_per_node = 12
      breaksw
    case 's':
      set cores_per_node = 16
      breaksw
    case 'v':
      set cores_per_node = 16
      breaksw
    case 'x':
      set cores_per_node = 24
      breaksw
  endsw

  echo "cores_per_node=${cores_per_node}"
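For anyone unfamiliar with the trick above: the cases in the switch suggest that each Jet node name begins with a letter identifying its partition, so taking the first character of the entries in ${PBS_NODEFILE} reveals which node type the job landed on. A minimal sketch in portable shell, using a made-up node file (the names `s1234` etc. are hypothetical, not real Jet hosts):

```shell
# Hypothetical node file: PBS lists one node name per allocated slot.
printf 's1234\ns1234\ns5678\n' > nodefile.example

# First character of each name, deduplicated -- the partition letter.
cut -c1 nodefile.example | sort -u
# prints: s
```

With `s` in hand, the switch above would then map it to 16 cores per node.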


# when running with threads, need to use the following commands
    @ npes = ${layout_x} * ${layout_y} * 6
    setenv HYDRA_ENV all
    setenv HYDRA_HOST_FILE ${PBS_NODEFILE}
    @ maxprocnum = ${cores_per_node} - 1
    setenv MV2_CPU_MAPPING `seq -s: 0 ${nthreads} ${maxprocnum}`
    set run_cmd = "mpiexec.hydra -np ${npes} ./$executable:t"

    setenv MALLOC_MMAP_MAX_ 0
    setenv MALLOC_TRIM_THRESHOLD_ 536870912
    setenv NC_BLKSZ 1M

# necessary for OpenMP when using Intel
    setenv KMP_STACKSIZE 256m
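To make the MV2_CPU_MAPPING line above concrete: `seq -s: 0 ${nthreads} ${maxprocnum}` emits every nthreads-th core ID from 0 up to the last core, colon-separated, so consecutive MPI ranks are pinned nthreads cores apart and each rank's OpenMP threads can fill the gap. A quick sketch for a hypothetical 16-core node with 4 threads per rank:

```shell
# 16 cores per node, 4 OpenMP threads per MPI rank (example values).
nthreads=4
maxprocnum=15           # cores_per_node - 1

# Colon-separated starting core for each rank on the node.
seq -s: 0 ${nthreads} ${maxprocnum}
# prints: 0:4:8:12
```

That is, four ranks per node, pinned to cores 0, 4, 8, and 12, leaving three cores after each for its threads.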

Rusty
--
Rusty Benson, PhD
Modeling Systems Group
NOAA Geophysical Fluid Dynamics Lab
Princeton, NJ

On Fri, May 12, 2017 at 1:39 PM, Samuel Trahan - NOAA Affiliate <
samuel.trahan at noaa.gov> wrote:

> Rusty,
>
> Yes, I have also noticed that MVAPICH2 on Jet is problematic when placing
> ranks.  I have not worked on optimizing FV3 on Jet yet; I only made it
> function.  Once we get closer to the point of running FV3 workflows on Jet,
> we can focus more on optimization.
>
> Sincerely,
> Sam Trahan
>
> On Fri, May 12, 2017 at 1:31 PM, Rusty Benson - NOAA Federal <
> rusty.benson at noaa.gov> wrote:
>
>> I don't know how much experience people have with Jet, but please be
>> aware there is an mpirun wrapper that supersedes the package version (impi,
>> mvapich, etc.).  I point this out because the behavior is very different
>> from the experience on Theia, where one uses the package version of mpirun
>> directly.
>>
>> We at GFDL have found Jet's wrapper to be particularly bothersome when
>> trying to run with OpenMP as we place/space MPI-ranks explicitly as part of
>> the mpirun request and internally manage threading via namelist options.
>>
>> Rusty
>> --
>> Rusty Benson, PhD
>> Modeling Systems Group
>> NOAA Geophysical Fluid Dynamics Lab
>> Princeton, NJ
>>
>> On Fri, May 12, 2017 at 12:40 PM, Jun Wang - NOAA Affiliate <
>> jun.wang at noaa.gov> wrote:
>>
>>> Jim,
>>>
>>> Sam is working on that; we hope to make it into this release. Thanks.
>>>
>>> Jun
>>>
>>> On Fri, May 12, 2017 at 12:37 PM, James Rosinski - NOAA Affiliate <
>>> james.rosinski at noaa.gov> wrote:
>>>
>>>> Jun and others;
>>>>
>>>> I noticed there is a build capability for fv3gfs on Jet, but no
>>>> run instructions or runjob_jet.sh for Jet. Is Jet run capability intended,
>>>> or is that for later?
>>>>
>>>> Regards,
>>>> Jim Rosinski
>>>>
>>>

