Desempenho da comunicação MPI Shared Memory no Modelo Meteorológico BRAMS
Abstract
The regional meteorological model BRAMS is executed operationally at CPTEC/INPE on a supercomputer composed of nodes with multicore processors. Its parallelization employs the MPI message-passing library, with the model domain decomposed among computational nodes and also among the cores of each node. The BRAMS model uses two-sided communication with the standard non-blocking asynchronous functions. However, version 3.0 of the MPI standard introduced shared-memory one-sided communication to optimize communication between processes executed on the same computational node. This work evaluates the communication performance of this new functionality in the parallel execution of the BRAMS model.
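To illustrate what this functionality involves at the API level, the sketch below shows a minimal intra-node exchange using the MPI-3 shared-memory interface (MPI_Comm_split_type with MPI_COMM_TYPE_SHARED, MPI_Win_allocate_shared, MPI_Win_shared_query). It is an illustrative assumption only: the slice size, neighbour pattern, and synchronization scheme do not reproduce the BRAMS halo exchange evaluated in the paper.

#include <mpi.h>
#include <stdio.h>

/* Minimal sketch (not from the paper): intra-node data exchange using the
 * MPI-3 shared-memory one-sided interface. Sizes and the neighbour logic
 * are illustrative assumptions, not the BRAMS domain decomposition. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Split COMM_WORLD so each communicator contains only the processes
     * running on the same computational node. */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* Each process allocates its slice of a shared window; every slice is
     * directly addressable by all processes on the node. */
    const MPI_Aint nelems = 1024;          /* illustrative slice size */
    double *my_slice;
    MPI_Win win;
    MPI_Win_allocate_shared(nelems * sizeof(double), sizeof(double),
                            MPI_INFO_NULL, node_comm, &my_slice, &win);

    /* Obtain a direct pointer to the slice of the "left" neighbour on the
     * same node, instead of receiving its data with MPI_Irecv. */
    double *left = NULL;
    if (node_rank > 0) {
        MPI_Aint size;
        int disp_unit;
        MPI_Win_shared_query(win, node_rank - 1, &size, &disp_unit, &left);
    }

    /* Passive-target epoch: write the local slice, then synchronize so the
     * stores become visible to the other processes on the node. */
    MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
    for (MPI_Aint i = 0; i < nelems; i++)
        my_slice[i] = (double)node_rank;
    MPI_Win_sync(win);
    MPI_Barrier(node_comm);
    MPI_Win_sync(win);

    if (left != NULL)   /* read the neighbour's data by a plain load */
        printf("rank %d (node rank %d) sees left value %.1f\n",
               world_rank, node_rank, left[0]);

    MPI_Win_unlock_all(win);
    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with several ranks per node (e.g., mpirun -np 4 ./a.out), every non-zero rank on a node reads the value written by its left neighbour through a plain load, with no explicit send or receive.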
Published: 2018-04-13
How to Cite
SOUZA, Carlos R. de; PANETTA, Jairo; STEPHANY, Stephan. Desempenho da comunicação MPI Shared Memory no Modelo Meteorológico BRAMS. In: REGIONAL SCHOOL OF HIGH PERFORMANCE COMPUTING FROM SÃO PAULO (ERAD-SP), 9., 2018, São José dos Campos. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2018. p. 45-48. DOI: https://doi.org/10.5753/eradsp.2018.13599.
