Abstract:
During the last two decades, tremendous success has been achieved in PC-based
computing. Most PCs are now as powerful as the mid-range/mini computers of the
early sixties and seventies. Improvements in the cost-performance curve of computing
over the last few decades have strengthened the trend of integrating more power into
smaller computing stations. Yet even with these revolutionary changes in PC-based
computing, there remain engineering and/or scientific problems that PCs, or sequential
machines in general, fail to deal with. Tremendously high computing speed is needed to
solve these grand challenge problems, and parallel machines are employed for such large
and complex applications.
In the first section of this thesis, a detailed study of different parallel machines has been
carried out from both the architectural and the application point of view. The generic
models are discussed first, followed by specific well-known machines designed by
different manufacturers.
In the second section of the thesis, a simulator has been written to give its user an
abstract idea of the actual processing techniques and sequences in parallel machines. To
develop the simulator, a specific, well-known problem in engineering and scientific
applications, matrix multiplication, has been chosen. The program has been modeled to
multiply two matrices (square or rectangular) of arbitrarily large dimensions, which
sequential machines normally fail to deal with satisfactorily.
The simulator gives a step-by-step description of the procedures and methodologies
adopted during the processing, clarifying the computation and the data routing at each of
the major steps. The program has been written in C++ and runs under the DOS and
Windows operating systems. It supports arbitrarily large matrices, up to the limit imposed
by the host machine's memory. For the sake of simplicity the program handles integer
matrices only; it can very easily be modified to handle floating-point arithmetic as well.
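For illustration, the following is a minimal sketch of the kind of row-partitioned integer
matrix multiplication the simulator models. The function names, the partitioning scheme,
and the fixed number of simulated processing elements are assumptions made here for
exposition only; they are not taken from the thesis's actual code.

// Illustrative sketch only: a row-partitioned integer matrix multiply of the
// kind the simulator models. The partitioning scheme and names are assumed
// for exposition, not drawn from the thesis code.
#include <algorithm>
#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<int>>;

// Compute C = A * B, assigning each simulated "processing element" a
// contiguous band of rows of A. A real simulator would report the computation
// and data routing of each band step by step.
Matrix multiply(const Matrix& A, const Matrix& B, int numPEs) {
    const int n = static_cast<int>(A.size());      // rows of A
    const int k = static_cast<int>(B.size());      // rows of B == columns of A
    const int m = static_cast<int>(B[0].size());   // columns of B
    Matrix C(n, std::vector<int>(m, 0));

    const int band = (n + numPEs - 1) / numPEs;    // rows per processing element
    for (int pe = 0; pe < numPEs; ++pe) {
        const int first = pe * band;
        const int last  = std::min(n, first + band);
        for (int i = first; i < last; ++i)
            for (int j = 0; j < m; ++j)
                for (int p = 0; p < k; ++p)
                    C[i][j] += A[i][p] * B[p][j];
    }
    return C;
}

int main() {
    Matrix A = {{1, 2, 3}, {4, 5, 6}};        // 2 x 3
    Matrix B = {{7, 8}, {9, 10}, {11, 12}};   // 3 x 2
    Matrix C = multiply(A, B, 2);             // simulate 2 processing elements

    for (const auto& row : C) {
        for (int v : row) std::cout << v << ' ';
        std::cout << '\n';
    }
    return 0;
}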
Finally, in the third section of the thesis, a rigorous analysis has been carried out on
various performance parameters of interest in parallel matrix multiplication algorithms.
The measured parameters have been recorded and compared with the theoretical behavior
in order to identify generic characteristic properties of parallel computing.
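By way of example, two parameters commonly examined in such an analysis are speedup
and efficiency; the standard definitions below are given only for orientation, as the exact
set of parameters measured is detailed in the third section itself:

S(p) = \frac{T_1}{T_p}, \qquad E(p) = \frac{S(p)}{p} = \frac{T_1}{p\,T_p}

where T_1 is the running time of the sequential program and T_p is the running time using
p processors.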