Influence of overhead on processor allocation for multiple loops
Abstract
We consider two consecutive and independent forall loops and strategies for allocating processors to their execution. One strategy is to execute the two loops one after the other, each time using all the available processors. Another strategy is to execute both loops simultaneously, each with a fraction of the available processors. We show that the presence of overhead can influence the choice between these strategies, since the second strategy uses a smaller number of processors for each individual loop, thus reducing the effect of the overhead. We establish conditions under which the second strategy is better. Finally, we consider the special case of a single forall loop and show conditions under which it is more advantageous to split it into two smaller loops and execute them simultaneously, each with a fraction of the available processors.
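To illustrate the comparison, the following is a minimal sketch, not the paper's model: it assumes a simple linear overhead model T(w, p) = w/p + c*p for a forall loop with work w on p processors, where c is a per-processor overhead constant. The function names, the overhead form, and the numerical values are illustrative assumptions only.

```python
def loop_time(work: float, procs: int, c: float) -> float:
    """Time of one forall loop on `procs` processors under the assumed model."""
    return work / procs + c * procs

def sequential_strategy(w1: float, w2: float, P: int, c: float) -> float:
    """Strategy 1: run the loops one after the other, each on all P processors."""
    return loop_time(w1, P, c) + loop_time(w2, P, c)

def simultaneous_strategy(w1: float, w2: float, P: int, c: float) -> float:
    """Strategy 2: run both loops at the same time, splitting the P processors.
    The finish time is that of the slower loop; try every split and keep the best."""
    best = float("inf")
    for p1 in range(1, P):
        p2 = P - p1
        best = min(best, max(loop_time(w1, p1, c), loop_time(w2, p2, c)))
    return best

if __name__ == "__main__":
    w1, w2, P, c = 1000.0, 1000.0, 64, 0.5
    print("sequential:  ", sequential_strategy(w1, w2, P, c))   # 95.25
    print("simultaneous:", simultaneous_strategy(w1, w2, P, c)) # 47.25
```

With these illustrative parameters the simultaneous strategy wins because each loop runs on fewer processors and therefore pays less overhead; with c = 0 the two strategies tie, which is consistent with the claim that it is the overhead that can make splitting the processors advantageous.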