diff --git a/reduce/hpc_project2.zip b/reduce/hpc_project2.zip
new file mode 100755
index 0000000..897f0cc
Binary files /dev/null and b/reduce/hpc_project2.zip differ
diff --git a/reduce/report/report.pdf b/reduce/report/report.pdf
index 0a3f982..74c8bf4 100755
Binary files a/reduce/report/report.pdf and b/reduce/report/report.pdf differ
diff --git a/reduce/report/report.tex b/reduce/report/report.tex
index 5f42811..fc3a8e8 100755
--- a/reduce/report/report.tex
+++ b/reduce/report/report.tex
@@ -369,6 +369,14 @@
 The number of nodes was fixed at 36 for the complete test.
 The size of the local arrays was increased by a factor of 10 in each iteration, starting at 1 and going up to 1000000.
 The number of repetitions is the same as for the previous test, 30.
+\begin{figure}
+  \begin{adjustbox}{center}
+    \includegraphics[width=0.8\linewidth]{sizeplot}
+  \end{adjustbox}
+  \caption{Average runtimes on 36 nodes with array sizes from 1 to 1000000.}
+  \label{fig:sizeplot}
+\end{figure}
+
 \FloatBarrier

@@ -382,6 +390,11 @@
 The result of the MPI\_Reduce function seems to be very unstable and varies a lot.
 This might be due to a too low number of repetitions, the very short execution time, or some other factor.
 It was also quite surprising that the binary tree performed better than the Fibonacci tree, since the communication pattern of the Fibonacci tree is almost round-optimal, in contrast to the binary tree.
+
+The second test clearly shows that the tree algorithms perform much worse for larger array sizes.
+In our measurements, the MPI\_Reduce function performed better than our implementations for more than 100000 array elements.
+This was expected after the first benchmark, since such large arrays would require further optimizations such as pipelining.
+
 \newpage
 \section{Appendix}
diff --git a/reduce/report/sizeplot.pdf b/reduce/report/sizeplot.pdf
new file mode 100755
index 0000000..e3c6492
Binary files /dev/null and b/reduce/report/sizeplot.pdf differ
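
For reference, the measurement loop described in the first hunk (local array sizes growing by a factor of 10 from 1 up to 1000000, 30 repetitions per size, averaged runtimes on 36 nodes) could look roughly like the following C/MPI sketch. This is not the project's actual benchmark code; the reduction operation (MPI_SUM over doubles), the output format, and constant names such as REPETITIONS and MAX_SIZE are illustrative assumptions, and the project's own tree reductions would be timed with the same loop in place of MPI_Reduce.

/* Minimal benchmark sketch (assumed, not the project's harness): time
 * MPI_Reduce for array sizes 1, 10, ..., 1000000 and print the average
 * runtime per size on rank 0. */
#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define REPETITIONS 30        /* assumed from the report: 30 repetitions */
#define MAX_SIZE    1000000   /* assumed from the report: up to 1000000 elements */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double *sendbuf = malloc(MAX_SIZE * sizeof(double));
    double *recvbuf = malloc(MAX_SIZE * sizeof(double));
    for (int i = 0; i < MAX_SIZE; i++)
        sendbuf[i] = (double)i;

    /* Increase the local array size by a factor of 10 per iteration. */
    for (int n = 1; n <= MAX_SIZE; n *= 10) {
        MPI_Barrier(MPI_COMM_WORLD);
        double start = MPI_Wtime();

        for (int r = 0; r < REPETITIONS; r++)
            MPI_Reduce(sendbuf, recvbuf, n, MPI_DOUBLE, MPI_SUM,
                       0, MPI_COMM_WORLD);

        double elapsed = MPI_Wtime() - start;
        if (rank == 0)
            printf("n=%d avg=%e s\n", n, elapsed / REPETITIONS);
    }

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Launched on 36 nodes (e.g. mpirun -np 36 ./bench), rank 0 prints one averaged time per array size, which is the kind of data plotted in sizeplot.pdf.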
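
The second hunk names pipelining as the optimization that large arrays would need. A rough sketch of the idea, assuming a binary tree rooted at rank 0 and a sum over doubles (again not the project's implementation): each rank splits its local array into fixed-size blocks and reduces and forwards them block by block, so lower tree levels can already work on the next block while upper levels are still combining the previous one.

#include <stdlib.h>
#include <mpi.h>

/* Hypothetical pipelined binary-tree sum-reduce: the n doubles in 'data' are
 * reduced into rank 0's buffer block by block, so different tree levels can
 * process different blocks at the same time. */
void pipelined_tree_reduce(double *data, int n, int block, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int left   = 2 * rank + 1;      /* children in the implicit binary tree */
    int right  = 2 * rank + 2;
    int parent = (rank - 1) / 2;

    double *tmp = malloc(block * sizeof(double));

    for (int off = 0; off < n; off += block) {
        int len = (off + block <= n) ? block : n - off;

        /* Combine the children's partial results for this block. */
        if (left < size) {
            MPI_Recv(tmp, len, MPI_DOUBLE, left, 0, comm, MPI_STATUS_IGNORE);
            for (int i = 0; i < len; i++) data[off + i] += tmp[i];
        }
        if (right < size) {
            MPI_Recv(tmp, len, MPI_DOUBLE, right, 0, comm, MPI_STATUS_IGNORE);
            for (int i = 0; i < len; i++) data[off + i] += tmp[i];
        }

        /* Forward the finished block to the parent; the subtree below can
         * already start on the next block. */
        if (rank != 0)
            MPI_Send(&data[off], len, MPI_DOUBLE, parent, 0, comm);
    }

    free(tmp);
}

With blocking point-to-point calls this only overlaps work across tree levels; using nonblocking sends (MPI_Isend) would additionally overlap communication and the local reduction within each rank.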