Opinions of Brazilians on Productivity in Parallel Application Development
Abstract
With the popularization of parallel architectures, several programming interfaces have emerged to ease the exploitation of these architectures and to increase developer productivity. However, developing parallel applications remains a complex task for developers with little experience. In this work, we conducted a survey to learn what developers of parallel applications think about the factors that hinder productivity. Our results show that developer experience is one of the main reasons for low productivity. Furthermore, the results point to ways of mitigating this problem, such as improving and encouraging the teaching of parallel programming in undergraduate courses.