Avaliando e Comparando Diferentes Estruturas de Programas Paralelos Através de Modelos Analíticos de Desempenho

  • Jean Marcos Laine (USP)
  • Edson Toshimi Midorikawa (USP)

Abstract


A parallel and distributed program can have its source code structured in different ways. How data partitioning and distribution are organized is a critical factor for the application's final performance. It is therefore important to have a methodology that supports studies comparing different solution approaches and that predicts which model is the most suitable for organizing the application's solution. In this paper, we show how the PEMPIs-Het methodology can be used for this purpose. The results obtained confirm the methodology's ability to correctly evaluate and predict the performance of different parallel program structures.
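To make concrete what comparing program structures through analytical performance models involves, the sketch below contrasts two common organizations (master-slave and SPMD) using a generic alpha-beta communication cost and an even work split. The function names, formulas, and parameter values are illustrative assumptions only; they are not the PEMPIs-Het models themselves, which are developed in the full paper.

```python
# Illustrative sketch only: generic analytical models in the spirit of the
# approach described in the abstract. The linear alpha-beta message cost and
# the even work split are common textbook assumptions, NOT the PEMPIs-Het
# models from the paper.

def t_master_slave(n_tasks, p, t_task, alpha, beta, task_bytes):
    """Predicted time for a master-slave structure: one master hands out
    n_tasks tasks of t_task seconds each to p-1 workers, paying a per-task
    message cost alpha + beta * task_bytes (no communication/computation
    overlap is assumed)."""
    workers = p - 1
    comm = n_tasks * (alpha + beta * task_bytes)   # serialized at the master
    comp = (n_tasks / workers) * t_task            # workers compute in parallel
    return comm + comp

def t_spmd(n_tasks, p, t_task, alpha, beta, task_bytes):
    """Predicted time for an SPMD structure: data is scattered once, every
    process computes its share, and results are gathered at the end."""
    comp = (n_tasks / p) * t_task
    comm = 2 * (alpha + beta * task_bytes * n_tasks / p)  # rough scatter + gather
    return comp + comm

if __name__ == "__main__":
    # Hypothetical machine and application parameters (illustration only).
    n_tasks, p = 10_000, 8
    t_task, alpha, beta, task_bytes = 1e-3, 50e-6, 1e-8, 4096
    ms = t_master_slave(n_tasks, p, t_task, alpha, beta, task_bytes)
    sp = t_spmd(n_tasks, p, t_task, alpha, beta, task_bytes)
    print(f"master-slave: {ms:.3f} s, SPMD: {sp:.3f} s")
    print("predicted better structure:", "SPMD" if sp < ms else "master-slave")
```

Under these hypothetical parameters the comparison simply evaluates both closed-form predictions and reports which structure is expected to be faster, mirroring the kind of decision the methodology described in the paper is meant to support.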

Published
29/10/2008
How to Cite

LAINE, Jean Marcos; MIDORIKAWA, Edson Toshimi. Avaliando e Comparando Diferentes Estruturas de Programas Paralelos Através de Modelos Analíticos de Desempenho. In: SIMPÓSIO EM SISTEMAS COMPUTACIONAIS DE ALTO DESEMPENHO (SSCAD), 9., 2008, Campo Grande. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2008. p. 219-226. DOI: https://doi.org/10.5753/wscad.2008.17687.