I think this can be handed in
parent db327170d1
commit 37f60c4672
3 changed files with 13 additions and 0 deletions
@@ -369,6 +369,14 @@ The number of nodes was fixed at 36 for the complete test.
The size of the local arrays was increased by a factor of 10 in each iteration, starting at 1 and going up to 1000000.
The number of repetitions is the same as for the previous test, 30.
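For illustration, the following is a minimal sketch of how such a size sweep could be timed; the loop bounds, the use of MPI\_Wtime and the call to MPI\_Reduce are assumptions about the harness, not the exact code used for the measurements.
\begin{verbatim}
/* Hypothetical benchmark loop: the array size grows by a factor of 10
 * from 1 to 1000000 and each size is averaged over 30 repetitions. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPETITIONS 30

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (long n = 1; n <= 1000000; n *= 10) {
        double *in  = malloc(n * sizeof(double));
        double *out = malloc(n * sizeof(double));
        for (long i = 0; i < n; i++) in[i] = 1.0;

        double start = MPI_Wtime();
        for (int r = 0; r < REPETITIONS; r++)
            MPI_Reduce(in, out, (int)n, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        double avg = (MPI_Wtime() - start) / REPETITIONS;

        if (rank == 0) printf("n=%ld avg=%.6f s\n", n, avg);
        free(in);
        free(out);
    }
    MPI_Finalize();
    return 0;
}
\end{verbatim}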
\begin{figure}
\begin{adjustbox}{center}
\includegraphics[width=0.8\linewidth]{sizeplot}
\end{adjustbox}
\caption{Average runtimes on 36 nodes with an array size of 1 to 1000000.}
\label{fig:sizeplot}
\end{figure}

\FloatBarrier
@@ -382,6 +390,11 @@ Although the result of the MPI\_Reduce function seems to be very unstable and it va
This might be due to too few repetitions, the very short execution times, or other factors.
It was also quite surprising that the binary tree performed better than the Fibonacci tree, since the communication pattern of the Fibonacci tree is almost round-optimal, in contrast to that of the binary tree.
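To make the comparison concrete, the sketch below shows a plain binary-tree reduction of a single value to rank 0; it is our own illustration of the pattern under the usual parent/child numbering, not necessarily the implementation that was benchmarked here.
\begin{verbatim}
#include <mpi.h>

/* Simplified binary-tree sum reduction of one double towards rank 0.
 * Rank i has children 2*i+1 and 2*i+2 and parent (i-1)/2. */
void tree_reduce_sum(double *value, MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    int left  = 2 * rank + 1;
    int right = 2 * rank + 2;
    double child;

    if (left < size) {   /* wait for and accumulate the left subtree */
        MPI_Recv(&child, 1, MPI_DOUBLE, left, 0, comm, MPI_STATUS_IGNORE);
        *value += child;
    }
    if (right < size) {  /* wait for and accumulate the right subtree */
        MPI_Recv(&child, 1, MPI_DOUBLE, right, 0, comm, MPI_STATUS_IGNORE);
        *value += child;
    }
    if (rank != 0)       /* forward the partial sum to the parent */
        MPI_Send(value, 1, MPI_DOUBLE, (rank - 1) / 2, 0, comm);
}
\end{verbatim}
With this layout a leaf at depth $d$ needs $d$ sequential send/receive rounds before its contribution reaches the root, which is one reason the nearly round-optimal Fibonacci schedule was expected to be at least as fast for small messages.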
The second test clearly shows that the tree algorithms perform much worse for larger array sizes.
In our measurements, the MPI\_Reduce function performed better than our implementations when using more than 100000 array elements.
This was largely expected after the first benchmark, since such large arrays would require further optimizations such as pipelining.
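As an illustration of what such a pipelining optimization could look like, the sketch below cuts the array into chunks and reduces them along a linear chain of ranks, so that several chunks are in transit on different links at the same time; the chunk size, the chain topology and the function name are assumptions chosen for simplicity, not part of our implementations.
\begin{verbatim}
#include <mpi.h>
#include <stdlib.h>

/* Hypothetical pipelined chain reduction: the array is cut into chunks
 * and partial sums flow from rank size-1 towards rank 0, so several
 * chunks can be on the wire at the same time. */
void pipelined_chain_reduce(double *data, long n, long chunk, MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    double *tmp = (rank < size - 1) ? malloc(chunk * sizeof(double)) : NULL;

    for (long off = 0; off < n; off += chunk) {
        int len = (int)((off + chunk <= n) ? chunk : n - off);
        if (rank < size - 1) {  /* receive partial sums from the right neighbour */
            MPI_Recv(tmp, len, MPI_DOUBLE, rank + 1, 0, comm, MPI_STATUS_IGNORE);
            for (int i = 0; i < len; i++) data[off + i] += tmp[i];
        }
        if (rank > 0)           /* forward the accumulated chunk to the left */
            MPI_Send(&data[off], len, MPI_DOUBLE, rank - 1, 0, comm);
    }
    free(tmp);
}
\end{verbatim}
The same idea would carry over to the tree algorithms above: an inner node can already receive chunk $k+1$ from its children while it forwards chunk $k$ to its parent, so the tree depth is paid once as a startup delay instead of once for the full array.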
\newpage
\section{Appendix}
BIN reduce/report/sizeplot.pdf (new executable file, binary file not shown)