@@ -319,10 +319,10 @@
 \textbf{Event} & \multicolumn{2}{c||}{\textbf{Minimum time}} & \multicolumn{2}{c||}{\textbf{Maximum time}} & \multicolumn{2}{c||}{\textbf{Mean time}} & \textbf{Standard} \\
 \textbf{count} & Total & Per event & Total & Per event & Total & Per event & \textbf{deviation}\\ \hline\hline
 \multicolumn{8}{|c|}{\textbf{Laptop}} \\ \hline
-1 & 0.62ms & 0.62ms & 24ms & 24ms & 2.5ms & 2.5ms & 3.7ms \\\hline
-10 & 6.7ms & 0.67ms & 4,000ms & 400ms & 200ms & 20ms & 630ms \\\hline
-100 & 44ms & 0.44ms & 10,000ms & 100ms & 1,200ms & 12ms & 1,700ms \\\hline
-1000 & 550ms & 0.55ms & 22,000ms & 22ms & 6,800ms & 6.8ms & 4,700ms \\\hline\hline
+1 & 0.62ms & 0.62ms & 33ms & 33ms & 2.5ms & 2.5ms & 4.6ms \\\hline
+10 & 5.5ms & 0.55ms & 2036ms & 203ms & 218ms & 21ms & 495ms \\\hline
+100 & 51ms & 0.52ms & 4267ms & 42ms & 1372ms & 13ms & 1273ms \\\hline
+1000 & 462ms & 0.46ms & 20500ms & 20ms & 8165ms & 8.2ms & 5034ms \\\hline\hline
 \multicolumn{8}{|c|}{\textbf{Desktop}} \\ \hline
 1 & & & & & & & \\\hline
 10 & & & & & & & \\\hline
@@ -333,26 +333,26 @@
 \caption{The results of the Single Listener performance tests.}
 \end{table}
 
-Given the large amount of variability in the results, new performance tests were run, repeating each test more than 50 times (n on the table).
+Given the large amount of variability in the results, new performance tests were run, repeating each test 1000 times instead.
 
 \begin{table}[H]
 \centering
 
 \centerline{
-\begin{tabular}{|p{1.1cm}||P{1cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.7cm}|}
+\begin{tabular}{|p{1.1cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.7cm}|}
 \hline
-\textbf{Event} & \textbf{n} & \multicolumn{2}{c||}{\textbf{Minimum time}} & \multicolumn{2}{c||}{\textbf{Maximum time}} & \multicolumn{2}{c||}{\textbf{Mean time}} & \textbf{Standard} \\
-\textbf{count} & & Total & Per event & Total & Per event & Total & Per event & \textbf{deviation}\\ \hline\hline
-\multicolumn{9}{|c|}{\textbf{Laptop}} \\ \hline
-1 & 1000 & 0.63ms & 0.63ms & 16.0ms & 16.0ms & 2.4ms & 2.4ms & 0.89ms \\\hline
-10 & 500 & & & & & & & \\\hline
-100 & 250 & & & & & & & \\\hline
-1000 & 100 & & & & & & & \\\hline\hline
-\multicolumn{9}{|c|}{\textbf{Desktop}} \\ \hline
-1 & 1000 & & & & & & & \\\hline
-10 & 500 & & & & & & & \\\hline
-100 & 250 & & & & & & & \\\hline
-1000 & 100 & & & & & & & \\\hline
+\textbf{Event} & \multicolumn{2}{c||}{\textbf{Minimum time}} & \multicolumn{2}{c||}{\textbf{Maximum time}} & \multicolumn{2}{c||}{\textbf{Mean time}} & \textbf{Standard} \\
+\textbf{count} & Total & Per event & Total & Per event & Total & Per event & \textbf{deviation}\\ \hline\hline
+\multicolumn{8}{|c|}{\textbf{Laptop}} \\ \hline
+1 & 0.61ms & 0.61ms & 16ms & 16ms & 2.2ms & 2.2ms & 0.8ms \\\hline
+10 & 4.8ms & 0.48ms & 3053ms & 305ms & 135ms & 14ms & 330ms \\\hline
+100 & 46ms & 0.46ms & 7233ms & 72ms & 1230ms & 12ms & 1225ms \\\hline
+1000 & 422ms & 0.42ms & 37,598ms & 37ms & 8,853ms & 8.9ms & 6,543ms \\\hline\hline
+\multicolumn{8}{|c|}{\textbf{Desktop}} \\ \hline
+1 & & & & & & & \\\hline
+10 & & & & & & & \\\hline
+100 & & & & & & & \\\hline
+1000 & & & & & & & \\\hline
 \end{tabular}
 }
 \caption{The results of the second suite of Single Listener performance tests.}
@@ -369,12 +369,18 @@
 
 Upon examination of the results, a pattern emerges. The minimum recorded response times are consistently around 0.5ms per event for the laptop and 0.3ms per event for the desktop, regardless of the number of events sent. This time likely reflects an ideal scenario where events are registered seamlessly without any delays or issues within the pipeline, thereby showcasing the efficiency potential of the network event triggers in the MEOW system.
 
-Conversely, the maximum and mean response times showed more variability. This fluctuation in response times may be attributed to various factors such as network latency, the internal processing load of the system, and the inherent unpredictability of concurrent event handling. It's worth noting that the standard deviation in these sets of data was consistently high. This suggests that the variability in the maximum and mean response times are due to high variability among the entire dataset, as opposed to singular outliers.
+Conversely, the maximum and mean response times showed more variability. This fluctuation in response times may be attributed to various factors such as network latency, the internal processing load of the system, and the inherent unpredictability of concurrent event handling. It's worth noting that the standard deviation in the original sets of data was consistently high. This suggests that the variability in the maximum and mean response times was due to high variability among the entire dataset, as opposed to singular outliers.
 
+It was observed that for smaller numbers of events (1, 10, 100), the standard deviation decreased as the number of repeated tests increased. This trend suggests that the initial variability observed in the maximum and mean response times for these event counts was primarily a result of limited sample size. As more tests were conducted, the influence of extreme values diminished, leading to a lower standard deviation. This affirms the importance of comprehensive testing in performance analysis, as it enables us to converge towards a more 'true' representation of the system's performance.
+
+However, an intriguing deviation from this trend was observed for 1000 events, where the standard deviation, instead of decreasing, increased with more repeated tests. This suggests that the variability associated with handling a larger number of events is not merely a consequence of limited data but could be indicative of inherent fluctuations or instabilities in the system's performance when managing larger event sets.
+
+The increased standard deviation for 1000 events shows that as the scale of event handling increases, system performance becomes more susceptible to unpredictability, potentially due to factors like system load, network congestion, and other concurrent processes. These findings underscore the need for rigorous and extensive performance testing, particularly for larger event sets, and point to areas where the system's robustness in handling large-scale network events could be improved.
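The summary statistics quoted in these tables (minimum, maximum, mean, and standard deviation) can be reduced from raw per-test timings in a few lines of Python; the `summarise` helper and the sample timings below are illustrative only, not code from the repository:

```python
import statistics

def summarise(samples_ms):
    """Reduce a list of per-test timings (in ms) to the four summary
    statistics reported in the tables."""
    return {
        "min": min(samples_ms),
        "max": max(samples_ms),
        "mean": statistics.fmean(samples_ms),
        "stdev": statistics.stdev(samples_ms),  # sample standard deviation (n-1)
    }

# Four hypothetical timings, in milliseconds.
stats = summarise([0.6, 2.0, 3.0, 24.0])
```

Note that `statistics.stdev` uses the n-1 (sample) denominator, the appropriate estimator when each run of the test suite is treated as a sample of the system's behaviour rather than the whole population.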
 
 \subsubsection{Multiple Listeners}
 The next performance test investigates how the introduction of multiple listeners affects the overall processing time. This test aims to understand the implications of distributing events across different listeners on system performance. Specifically, we're looking at how having multiple listeners in operation might impact the speed at which events are processed.
 
-In this test, I will maintain a constant total of 1000 events, but they will be distributed evenly across varying numbers of listeners between 1 and 1000. By keeping the total number of events constant while altering the number of listeners, I aim to isolate the effect of multiple listeners on system performance. Once again, each test will be performed 50 times.
+In this test, I will maintain a constant total of 1000 events, but they will be distributed evenly across varying numbers of listeners between 1 and 1000. By keeping the total number of events constant while altering the number of listeners, I aim to isolate the effect of multiple listeners on system performance. Each test will be performed 100 times.
 
 1000 was chosen as the total number of events to be sent due to its realistic representation of a high-load situation. While this number is higher than what I would typically expect the system to handle in a real-life application, it serves to provide a stress test for the system, revealing how it copes under an intensive load. This approach enables the identification of potential bottlenecks, inefficiencies, or points of failure under heavy demand.
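The even split described above can be sketched as follows; `distribute_events` is a hypothetical helper for illustration, not part of the MEOW codebase:

```python
def distribute_events(total_events, listener_count):
    """Split total_events as evenly as possible across listener_count
    listeners, handing any remainder to the first listeners so that the
    overall total is always preserved."""
    base, extra = divmod(total_events, listener_count)
    return [base + 1 if i < extra else base for i in range(listener_count)]

# 1000 events over 250 listeners: 4 events per listener.
shares = distribute_events(1000, 250)
```

Keeping the total fixed while varying `listener_count` is what lets the test attribute any change in processing time to the listeners themselves.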
 
@@ -382,23 +388,23 @@
 
 \begin{table}[H]
 \centering
-\begin{tabular}{|p{1.5cm}||P{2.5cm}|P{2.5cm}|P{2.5cm}|}
+\begin{tabular}{|p{1.5cm}||P{2.5cm}|P{2.5cm}|P{2.5cm}||P{1.9cm}|}
 \hline
-\textbf{Listener count} & \textbf{Minimum time} & \textbf{Maximum time} & \textbf{Average time} \\ \hline\hline
-\multicolumn{4}{|c|}{\textbf{Laptop}} \\ \hline
-1 & 630ms & 17,000ms & 5,600ms \\\hline
-10 & 460ms & 25,000ms & 7,600ms \\\hline
-100 & 420ms & 20,000ms & 7,100ms \\\hline
-250 & 510ms & 7,900ms & 2,900ms \\\hline
-500 & 590ms & 1,600ms & 720ms \\\hline
-1000 & 920ms & 3,200ms & 1,500ms \\\hline\hline
-\multicolumn{4}{|c|}{\textbf{Desktop}} \\ \hline
-1 & 240ms & 16,000ms & 5,200ms \\\hline
-10 & 240ms & 19,000ms & 4,000ms \\\hline
-100 & 250ms & 10,000ms & 1,000ms \\\hline
-250 & 270ms & 12,000ms & 900ms \\\hline
-500 & 310ms & 330ms & 310ms \\\hline
-1000 & 380ms & 420ms & 400ms \\\hline
+\textbf{Listener count} & \textbf{Minimum time} & \textbf{Maximum time} & \textbf{Average time} & \textbf{Standard deviation} \\ \hline\hline
+\multicolumn{5}{|c|}{\textbf{Laptop}} \\ \hline
+1 & 443ms & 20,614ms & 8,649ms & 4,853ms \\\hline
+10 & 446ms & 20,624ms & 7,764ms & 4,234ms \\\hline
+100 & 477ms & 31,026ms & 7,310ms & 4,481ms \\\hline
+250 & 534ms & 12,485ms & 2,355ms & 2,175ms \\\hline
+500 & 663ms & 3,321ms & 928ms & 412ms \\\hline
+1000 & 893ms & 3,592ms & 1,163ms & 380ms \\\hline\hline
+\multicolumn{5}{|c|}{\textbf{Desktop}} \\ \hline
+1 & & & & \\\hline
+10 & & & & \\\hline
+100 & & & & \\\hline
+250 & & & & \\\hline
+500 & & & & \\\hline
+1000 & & & & \\\hline
 \end{tabular}
 \caption{The results of the Multiple Listeners performance tests with 2 significant digits.}
 \end{table}
@@ -430,23 +436,23 @@
 
 \begin{table}[H]
 \centering
-\begin{tabular}{|p{1.5cm}||P{2.5cm}|P{2.5cm}|P{2.5cm}|}
+\begin{tabular}{|p{1.5cm}||P{2.5cm}|P{2.5cm}|P{2.5cm}||P{1.9cm}|}
 \hline
-\textbf{Monitor count} & \textbf{Minimum time} & \textbf{Maximum time} & \textbf{Average time} \\ \hline\hline
-\multicolumn{4}{|c|}{\textbf{Laptop}} \\ \hline
-1 & 630ms & 17,000ms & 5,600ms \\\hline
-10 & 450ms & 25,000ms & 6,600ms \\\hline
-100 & 380ms & 18,000ms & 4,400ms \\\hline
-250 & 400ms & 13,000ms & 1,800ms \\\hline
-500 & 440ms & 2,900ms & 720ms \\\hline
-1000 & 520ms & 2,300ms & 700ms \\\hline\hline
-\multicolumn{4}{|c|}{\textbf{Desktop}} \\ \hline
-1 & 240ms & 16,000ms & 5,200ms \\\hline
-10 & 230ms & 20,000ms & 6,500ms \\\hline
-100 & 240ms & 18,000ms & 2,900ms \\\hline
-250 & 250ms & 7,600ms & 800ms \\\hline
-500 & 260ms & 300ms & 270ms \\\hline
-1000 & 290ms & 300ms & 290ms \\\hline
+\textbf{Monitor count} & \textbf{Minimum time} & \textbf{Maximum time} & \textbf{Average time} & \textbf{Standard deviation} \\ \hline\hline
+\multicolumn{5}{|c|}{\textbf{Laptop}} \\ \hline
+1 & 468ms & 20,683ms & 8,137ms & 3,410ms \\\hline
+10 & 521ms & 48,645ms & 8,929ms & 5,391ms \\\hline
+100 & 444ms & 12,311ms & 4,520ms & 3,091ms \\\hline
+250 & 469ms & 13,823ms & 1,944ms & 2,089ms \\\hline
+500 & 508ms & 2,282ms & 867ms & 391ms \\\hline
+1000 & 601ms & 2,893ms & 1,197ms & 661ms \\\hline\hline
+\multicolumn{5}{|c|}{\textbf{Desktop}} \\ \hline
+1 & & & & \\\hline
+10 & & & & \\\hline
+100 & & & & \\\hline
+250 & & & & \\\hline
+500 & & & & \\\hline
+1000 & & & & \\\hline
 \end{tabular}
 \caption{The results of the Multiple Monitors performance tests with 2 significant digits.}
 \end{table}
@@ -486,7 +492,7 @@
 \caption{The structure of the BIDS workflow. Data is transferred to the user, and to the cloud.}
 \end{figure}
 
-To illustrate the potential applications of network events in MEOW, I implemented a simplified workflow that involves two runners operating concurrently. These runners are initiated with almost identical, mirrored, parameters.
+To illustrate the potential applications of network events in MEOW, I implemented a simplified workflow that involves two runners operating concurrently (\texttt{example\_workflow} in the repository\autocite{Implementation}). These runners are initiated with almost identical, mirrored, parameters.
 
 On receiving a network event, each runner is configured to respond by transmitting a network event to its counterpart. This simple setup mirrors the dynamic interaction of components in more complex, real-life workflows. It shows how the introduction of network events can enable the construction of workflows that require elements to communicate and react to each other's status.
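The ping-pong pattern described here can be sketched with a plain socket pair standing in for the two runners' network connection; this is a simplified, hypothetical illustration, not the MEOW runner API:

```python
import socket
import threading

def runner(conn, name, echoes, finished):
    """React to each incoming network event by sending an event back to
    the counterpart, mirroring the two-runner example workflow."""
    for _ in range(echoes):
        conn.recv(1024)                      # wait for the counterpart's event
        conn.sendall(b"event from " + name)  # respond with an event of our own
    finished.append(name)

# A connected socket pair stands in for the runners' network link.
a, b = socket.socketpair()
finished = []
t1 = threading.Thread(target=runner, args=(a, b"A", 2, finished))
t2 = threading.Thread(target=runner, args=(b, b"B", 2, finished))
t1.start(); t2.start()
a.sendall(b"initial event")  # one external event kicks off the exchange
t1.join(); t2.join()
a.close(); b.close()
```

Each runner here echoes a fixed number of times so the sketch terminates; in the actual workflow the exchange would instead be driven by the runners' recipes.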
 
@@ -7,9 +7,9 @@ def single_listener():
 
     x = [1,10,100,1000]
 
-    y11 = [0.62, 6.7, 44, 550]
-    y12 = [ 24, 4000, 10000, 22000]
-    y13 = [ 2.5, 200, 1200, 6800]
+    y11 = [0.61, 4.8, 46, 422]
+    y12 = [ 16, 3053, 7233, 37598]
+    y13 = [ 2.2, 135, 1230, 8853]
 
     ax1.plot(x, y12, label="Maximum", linewidth=5)
     ax1.plot(x, y13, label="Average", linewidth=5)
@@ -27,9 +27,9 @@ def single_listener():
 
     ###
 
-    y21 = [0.4, 3, 25, 240]
-    y22 = [2.2, 1000, 2200, 16000]
-    y23 = [1.6, 56, 440, 5200]
+    y21 = []
+    y22 = []
+    y23 = []
 
     ax2.plot(x, y22, label="Maximum", linewidth=5)
     ax2.plot(x, y23, label="Average", linewidth=5)
@@ -46,9 +46,9 @@ def single_listener():
 
     ###
 
-    y31 = [0.62, 0.67, 0.44, 0.55]
-    y32 = [ 24, 400, 100, 22]
-    y33 = [ 2.5, 20, 12, 6.8]
+    y31 = [0.61, 0.48, 0.46, 0.42]
+    y32 = [ 16, 305, 72, 37]
+    y33 = [ 2.2, 14, 12, 8.9]
 
     ax3.plot(x, y32, label="Maximum", linewidth=5)
     ax3.plot(x, y33, label="Average", linewidth=5)
@@ -65,9 +65,9 @@ def single_listener():
 
     ###
 
-    y41 = [0.40, 0.30, 0.25, 0.24]
-    y42 = [ 2.2, 100, 22, 16]
-    y43 = [ 1.6, 5.6, 4.4, 5.2]
+    y41 = []
+    y42 = []
+    y43 = []
 
     ax4.plot(x, y42, label="Maximum", linewidth=5)
     ax4.plot(x, y43, label="Average", linewidth=5)
@@ -94,9 +94,9 @@ def multiple_listeners():
 
     x = [1,10,100,250,500,1000]
 
-    y11 = [630,460,420,510,590,920]
-    y12 = [17000,25000,20000,7900,1600,3200]
-    y13 = [5600,7600,7100,2900,720,1500]
+    y11 = [ 443, 446, 477, 534, 663, 893]
+    y12 = [20614, 20624, 31026, 12485, 3321, 3592]
+    y13 = [ 8649, 7764, 7310, 2355, 928, 1163]
 
     ax1.plot(x, y12, label="Maximum", linewidth=5)
     ax1.plot(x, y13, label="Average", linewidth=5)
@@ -114,9 +114,9 @@ def multiple_listeners():
 
     ###
 
-    y21 = [240,240,250,270,310,380]
-    y22 = [16000,19000,10000,12000,330,420]
-    y23 = [5200,4000,1000,900,310,400]
+    y21 = []
+    y22 = []
+    y23 = []
 
     ax2.plot(x, y22, label="Maximum", linewidth=5)
     ax2.plot(x, y23, label="Average", linewidth=5)
@@ -142,9 +142,9 @@ def multiple_monitors():
 
     x = [1,10,100,250,500,1000]
 
-    y11 = [630, 450, 380, 400, 440, 520]
-    y12 = [17000, 25000, 18000, 13000, 2900, 2300]
-    y13 = [5600, 6600, 4400, 1800, 720, 700]
+    y11 = [ 468, 521, 444, 469, 508, 601]
+    y12 = [20683,48645,12311,13828,2282,2893]
+    y13 = [ 8137, 8929, 4520, 1944, 867,1197]
 
     ax1.plot(x, y12, label="Maximum", linewidth=5)
     ax1.plot(x, y13, label="Average", linewidth=5)
@@ -162,9 +162,9 @@ def multiple_monitors():
 
     ###
 
-    y21 = [240,230,240,250,260,290]
-    y22 = [16000,20000,18000,7600,300,300]
-    y23 = [5200,6500,2900,800,270,290]
+    y21 = []
+    y22 = []
+    y23 = []
 
     ax2.plot(x, y22, label="Maximum", linewidth=5)
     ax2.plot(x, y23, label="Average", linewidth=5)
@@ -185,7 +185,38 @@ def multiple_monitors():
 
     fig.savefig("performance_results/multiple_monitors.png",bbox_inches='tight')
 
+def standard_deviation():
+    fig, (ax1,ax2) = plt.subplots(1,2)
+
+    x = [1,10,100,1000]
+
+    y11 = [4.6,495,1273,5034]
+    y12 = [0.8,330,1225,6543]
+
+    ax1.plot(x, y11, label="50 tests", linewidth=5)
+    ax1.plot(x, y12, label="1000 tests", linewidth=5)
+
+    ax1.legend(bbox_to_anchor=(0.2,1.4))
+    ax1.grid(linewidth=2)
+    ax1.set_title("Laptop")
+
+    ax1.set_xlabel("Event count")
+    ax1.set_ylabel("Standard deviation (ms)")
+
+    ax1.set_xscale("log")
+    ax1.set_yscale("log")
+
+    ###
+    ###
+
+    fig.set_figheight(12)
+    fig.set_figwidth(35)
+    fig.set_dpi(100)
+
+    fig.savefig("performance_results/standard_deviation.png",bbox_inches='tight')
+
 if __name__ == "__main__":
     single_listener()
     multiple_listeners()
     multiple_monitors()
+    # standard_deviation()
BIN src/performance_results/standard_deviation.png (new file, 134 KiB, binary file not shown)
@@ -26,7 +26,6 @@
 journal = {GitHub repository},
 howpublished = {\url{https://github.com/PatchOfScotland/meow_base}},
 year = 2023,
 commit = {933d568},
 }
 
-@misc{Implementation,