Toy workflow

This commit is contained in:
NikolajDanger
2023-06-07 13:44:11 +02:00
parent b48e58c713
commit 7fae54b96b
7 changed files with 166 additions and 76 deletions

Binary file not shown.


@ -240,6 +240,7 @@
\end{itemize}
\section{Method}
\textit{Code available here: \autocite{Implementation}}
To address the identified limitations of MEOW and to expand its capabilities, I will incorporate network event triggers into the existing event-based scheduler, supplementing the current file-based event triggers. My method leverages Python's socket library to process network events. The following subsections detail the specific methodologies employed in expanding the codebase, the design of the network event trigger mechanism, and the integration of this mechanism into the existing MEOW system.
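The core mechanism can be illustrated with a short, self-contained sketch: a listener accepts TCP connections, reads the raw bytes, and hands them to a callback. This is only a minimal sketch of the idea, not the meow_base implementation; the address, port, and the handle_event callback are illustrative stand-ins for the monitor's internals.

import socket
import threading

def listen(port, handle_event):
    # Accept connections one at a time and pass the raw bytes to a callback.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", port))
        server.listen()
        while True:
            conn, addr = server.accept()
            with conn:
                data = b""
                while chunk := conn.recv(4096):
                    data += chunk
            handle_event(data, addr)  # in MEOW, this is where an event would be created

# Run the listener on a background thread so it doesn't block the scheduler.
threading.Thread(target=listen, args=(8080, print), daemon=True).start()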
@ -266,26 +267,29 @@
The method will be slower, since writing to storage takes longer than keeping the data in memory, but I have decided that the positives outweigh the negatives.
\subsection{Data Type Agnosticism}
An important aspect to consider in the functioning of the network monitor is its data type agnosticism: the network monitor does not impose restrictions or perform checks on the type of incoming data. While this approach enhances the speed and simplicity of the implementation, it also places a certain level of responsibility on the recipes that work with the incoming data. The recipes, being responsible for defining the actions taken upon execution of a job, must be designed with a full understanding of this versatility. They should incorporate necessary checks and handle potential inconsistencies or anomalies that might arise from diverse types of incoming data.
An important aspect to consider in the functioning of the network monitor is its data type agnosticism: the \texttt{NetworkMonitor} does not impose restrictions or perform checks on the type of incoming data. While this approach enhances the speed and simplicity of the implementation, it also places a certain level of responsibility on the recipes that work with the incoming data. The recipes, being responsible for defining the actions taken upon execution of a job, must be designed with a full understanding of this versatility. They should incorporate necessary checks and handle potential inconsistencies or anomalies that might arise from diverse types of incoming data.
\begin{tcolorbox}[colback=lightgray!30!white]
Justify: the file event monitor performs no error checking either, and the system is resilient, so malformed data does little harm. Protocol-specific monitors could perform stricter checks.
\end{tcolorbox}
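To make the division of responsibility concrete, here is a hedged sketch of a recipe-side check, assuming the payload was written to a file by the monitor; the file name and the expectation of UTF-8 JSON are inventions for the example, since the monitor itself hands over raw bytes without validation.

import json

def load_payload(path):
    # The monitor wrote the raw network payload to a file; the recipe
    # decides whether those bytes are usable (here: UTF-8 encoded JSON).
    with open(path, "rb") as f:
        raw = f.read()
    try:
        return json.loads(raw.decode("utf-8"))
    except (UnicodeDecodeError, json.JSONDecodeError):
        return None  # malformed input: skip the job instead of crashing

payload = load_payload("incoming_event.dat")  # hypothetical trigger file
if payload is not None:
    print("processing", payload)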
\subsection{Testing}
The unit tests for the network event monitor were inspired by the existing tests for the file event monitor. Since the aim of the monitor was to emulate the behavior of the file event monitor as closely as possible, reusing the existing tests with minimal changes proved an effective way of staying close to that goal. The tests verify the following behavior (a self-contained sketch of one such test follows the list):
\begin{itemize}
\setlength{\itemsep}{0pt}
\item Network event patterns can be initialized, and raise exceptions when given invalid parameters.
\item Instances of the \texttt{NetworkEventPattern} class can be initialized, and raise exceptions when given invalid parameters.
\item Network events can be created, and they contain the expected information.
\item Network monitors can be created.
\item A network monitor is able to receive data sent to a listener, write it to a file, and create a valid event.
\item You can access, add, update, and remove the patterns and recipes associated with the monitor at runtime.
\item Instances of \texttt{NetworkMonitor} can be created.
\item A \texttt{NetworkMonitor} is able to receive data sent to a listener, write it to a file, and create a valid event.
\item You can access, add, update, and remove the patterns and recipes associated with the \texttt{NetworkMonitor} at runtime.
\item When adding, updating, or removing patterns or recipes during runtime, rules associated with those patterns or recipes are updated accordingly.
\item The monitor only initializes listeners for patterns with associated rules, and rules updated during runtime are applied.
\item The \texttt{NetworkMonitor} only initializes listeners for patterns with associated rules, and rules updated during runtime are applied.
\end{itemize}
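To give a flavor of these tests without reproducing meow_base's actual API, the following self-contained sketch performs the round trip described above (data sent to a listener becoming an event): a toy listener receives bytes over a socket and records them as an "event", which the test then asserts on. The port and helper names are inventions for the example.

import socket
import threading
import unittest

def toy_listener(port, events, ready):
    # Stand-in for a monitor listener: accept one connection, record the bytes.
    with socket.socket() as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", port))
        server.listen()
        ready.set()
        conn, _ = server.accept()
        with conn:
            events.append(conn.recv(4096))

class TestListenerRoundTrip(unittest.TestCase):
    def test_data_becomes_event(self):
        events, ready = [], threading.Event()
        listener = threading.Thread(target=toy_listener, args=(38080, events, ready))
        listener.start()
        ready.wait()  # don't send until the listener is bound
        with socket.create_connection(("127.0.0.1", 38080)) as conn:
            conn.sendall(b"hello")
        listener.join(timeout=5)
        self.assertEqual(events, [b"hello"])

if __name__ == "__main__":
    unittest.main()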
\newpage
\section{Results}
The testing suite designed for the monitor consisted of 26 distinct tests, all of which passed successfully. These tests were designed to assess the robustness, reliability, and functionality of the monitor. They evaluated the monitor's ability to successfully manage network event patterns, detect network events, and communicate with the runner to send events to the event queue.
\section{Results}
\subsection{Performance Tests}
To assess the performance of the \texttt{NetworkMonitor}, I implemented a number of performance tests. The tests were run on these machines:
@ -304,26 +308,54 @@
\subsubsection{Single Listener}
To assess how a single listener handles many events at once, I implemented a procedure where a single listener in the monitor was subjected to a varying number of events, ranging from 1 to 1,000. For each quantity $n$, I sent $n$ network events to the monitor and recorded the response time. To ensure the reliability of the results and mitigate the effect of outliers, each test was repeated 50 times.
Given the inherent variability in network communication and event handling, I noted considerable differences between the highest and lowest recorded times for each test. To provide a comprehensive view of the monitor's performance, I have included not only the average response times, but also the minimum and maximum times observed for each set of 50 tests.
Given the inherent variability in network communication and event handling, I noted considerable differences between the highest and lowest recorded times for each test. To provide a comprehensive view of the monitor's performance, I have included not only the mean response times, but also the minimum and maximum times observed for each set of 50 tests, as well as the standard deviation.
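The harness behind these numbers is simple; the sketch below shows its shape under stated assumptions: a listener (such as the one sketched earlier) is already running on the given port, and the clock stops when the send loop finishes, whereas the real tests measure until the monitor has queued the events.

import socket
import statistics
import time

def time_batch(port, n):
    # Time how long it takes to deliver n events to a single listener.
    start = time.perf_counter()
    for _ in range(n):
        with socket.create_connection(("127.0.0.1", port)) as conn:
            conn.sendall(b"event")
    return (time.perf_counter() - start) * 1000  # milliseconds

# 50 repetitions per event count, summarized as in the tables.
samples = [time_batch(8080, 100) for _ in range(50)]
print(f"min={min(samples):.2f}ms max={max(samples):.2f}ms "
      f"mean={statistics.mean(samples):.2f}ms stdev={statistics.stdev(samples):.2f}ms")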
\begin{table}[H]
\centering
\begin{tabular}{|p{1.1cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}|}
\centerline{
\begin{tabular}{|p{1.1cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.7cm}|}
\hline
\textbf{Event} & \multicolumn{2}{c||}{\textbf{Minimum time}} & \multicolumn{2}{c||}{\textbf{Maximum time}} & \multicolumn{2}{c|}{\textbf{Average time}} \\
\textbf{count} & Total & Per event & Total & Per event & Total & Per event \\ \hline\hline
\multicolumn{7}{|c|}{\textbf{Laptop}} \\ \hline
1 & 0.68ms & 0.68ms & 5.3ms & 5.3ms & 2.1ms & 2.1ms \\\hline
10 & 4.7ms & 0.47ms & 2.1s & 0.21s & 0.18s & 18ms \\\hline
100 & 45ms & 0.45ms & 7.2s & 72ms & 0.86s & 8.6ms \\\hline
1,000 & 0.63s & 0.63ms & 17s & 17ms & 5.6s & 5.6ms \\\hline\hline
\multicolumn{7}{|c|}{\textbf{Desktop}} \\ \hline
1 & 0.40ms & 0.40ms & 2.2ms & 2.2ms & 1.6ms & 1.6ms \\\hline
10 & 3.0ms & 0.30ms & 1.0s & 0.10s & 56ms & 5.6ms \\\hline
100 & 25ms & 0.25ms & 2.2s & 22ms & 0.44s & 4.4ms \\\hline
1000 & 0.24s & 0.24ms & 16s & 16ms & 5.2s & 5.2ms \\\hline
\textbf{Event} & \multicolumn{2}{c||}{\textbf{Minimum time}} & \multicolumn{2}{c||}{\textbf{Maximum time}} & \multicolumn{2}{c||}{\textbf{Mean time}} & \textbf{Standard} \\
\textbf{count} & Total & Per event & Total & Per event & Total & Per event & \textbf{deviation}\\ \hline\hline
\multicolumn{8}{|c|}{\textbf{Laptop}} \\ \hline
1 & 0.62ms & 0.62ms & 24ms & 24ms & 2.5ms & 2.5ms & 3.7ms \\\hline
10 & 6.7ms & 0.67ms & 4,000ms & 400ms & 200ms & 20ms & 630ms \\\hline
100 & 44ms & 0.44ms & 10,000ms & 100ms & 1,200ms & 12ms & 1,700ms \\\hline
1000 & 550ms & 0.55ms & 22,000ms & 22ms & 6,800ms & 6.8ms & 4,700ms \\\hline\hline
\multicolumn{8}{|c|}{\textbf{Desktop}} \\ \hline
1 & & & & & & & \\\hline
10 & & & & & & & \\\hline
100 & & & & & & & \\\hline
1000 & & & & & & & \\\hline
\end{tabular}
\caption{The results of the Single Listener performance tests with 2 significant digits.}
}
\caption{The results of the Single Listener performance tests.}
\end{table}
Given the large amount of variability in the results, a second suite of performance tests was run, repeating each test more than 50 times ($n$ in the table denotes the number of repetitions).
\begin{table}[H]
\centering
\centerline{
\begin{tabular}{|p{1.1cm}||P{1cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.5cm}|P{1.8cm}||P{1.7cm}|}
\hline
\textbf{Event} & \textbf{n} & \multicolumn{2}{c||}{\textbf{Minimum time}} & \multicolumn{2}{c||}{\textbf{Maximum time}} & \multicolumn{2}{c||}{\textbf{Mean time}} & \textbf{Standard} \\
\textbf{count} & & Total & Per event & Total & Per event & Total & Per event & \textbf{deviation}\\ \hline\hline
\multicolumn{9}{|c|}{\textbf{Laptop}} \\ \hline
1 & 1000 & 0.63ms & 0.63ms & 16.0ms & 16.0ms & 2.4ms & 2.4ms & 0.89ms \\\hline
10 & 500 & & & & & & & \\\hline
100 & 250 & & & & & & & \\\hline
1000 & 100 & & & & & & & \\\hline\hline
\multicolumn{9}{|c|}{\textbf{Desktop}} \\ \hline
1 & 1000 & & & & & & & \\\hline
10 & 500 & & & & & & & \\\hline
100 & 250 & & & & & & & \\\hline
1000 & 100 & & & & & & & \\\hline
\end{tabular}
}
\caption{The results of the second suite of Single Listener performance tests.}
\end{table}
\begin{figure}[H]
@ -335,15 +367,17 @@
\caption{The results of the Single Listener performance test plotted logarithmically.}
\end{figure}
Upon examination of the results, a pattern emerges. The minimum recorded response times consistently averaged around 0.5ms per event for the laptop and 0.3ms per event for the desktop, regardless of the number of events sent. This time likely reflects an ideal scenario where events are registered seamlessly without any delays or issues within the pipeline, thereby showcasing the efficiency potential of the network event triggers in the MEOW system.
Upon examination of the results, a pattern emerges. The minimum recorded response times are consistently around 0.5ms per event for the laptop and 0.3ms per event for the desktop, regardless of the number of events sent. This time likely reflects an ideal scenario where events are registered seamlessly without any delays or issues within the pipeline, thereby showcasing the efficiency potential of the network event triggers in the MEOW system.
In contrast, the maximum and average response times exhibited more variability. This fluctuation in response times may be attributed to various factors such as network latency, the internal processing load of the system, and the inherent unpredictability of concurrent event handling.
Conversely, the maximum and mean response times showed more variability. This fluctuation in response times may be attributed to various factors such as network latency, the internal processing load of the system, and the inherent unpredictability of concurrent event handling. It's worth noting that the standard deviation in these sets of data was consistently high. This suggests that the variability in the maximum and mean response times is due to high variability across the entire dataset, rather than to isolated outliers.
\subsubsection{Multiple Listeners}
The next performance test investigates how the introduction of multiple listeners affects the overall processing time: specifically, whether distributing a fixed number of events across several concurrent listeners speeds up or slows down their processing.
In this test, I will maintain a constant total of 1,000 events, but they will be distributed evenly across varying numbers of listeners, from 1 to 1,000. By keeping the total number of events constant while altering the number of listeners, I aim to isolate the effect of multiple listeners on system performance. Once again, each test will be performed 50 times.
A total of 1,000 events was chosen as a realistic representation of a high-load situation. While this number is higher than what I would typically expect the system to handle in a real-life application, it serves as a stress test, revealing how the system copes under an intensive load. This approach enables the identification of potential bottlenecks, inefficiencies, or points of failure under heavy demand.
A key expectation for this test is to observe if and how much the overall processing time increases as the number of listeners goes up. This would give insight into whether operating more listeners concurrently introduces additional overhead, thereby slowing down the process. The results of this test would then inform decisions about optimal listener numbers in different usage scenarios, potentially leading to performance improvements in MEOW's handling of network events.
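The distribution step itself is plain round-robin arithmetic. A minimal sketch, assuming the listeners are already running on consecutive ports starting at a base port (both assumptions made up for the example):

import socket

def send_distributed(total, listeners, base_port=9000):
    # Spread `total` events evenly over `listeners` consecutive ports.
    per_listener = total // listeners
    for i in range(listeners):
        for _ in range(per_listener):
            with socket.create_connection(("127.0.0.1", base_port + i)) as conn:
                conn.sendall(b"event")

send_distributed(1000, 100)  # e.g. the 100-listener configuration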
\begin{table}[H]
@ -352,19 +386,19 @@
\hline
\textbf{Listener count} & \textbf{Minimum time} & \textbf{Maximum time} & \textbf{Average time} \\ \hline\hline
\multicolumn{4}{|c|}{\textbf{Laptop}} \\ \hline
1 & 0.63s & 17s & 5.6s \\\hline
10 & 0.46s & 25s & 7.6s \\\hline
100 & 0.42s & 20s & 7.1s \\\hline
250 & 0.51s & 7.9s & 2.9s \\\hline
500 & 0.59s & 1.6s & 0.72s \\\hline
1000 & 0.92s & 3.24s & 1.49s \\\hline\hline
1 & 630ms & 17,000ms & 5,600ms \\\hline
10 & 460ms & 25,000ms & 7,600ms \\\hline
100 & 420ms & 20,000ms & 7,100ms \\\hline
250 & 510ms & 7,900ms & 2,900ms \\\hline
500 & 590ms & 1,600ms & 720ms \\\hline
1000 & 920ms & 3,200ms & 1,500ms \\\hline\hline
\multicolumn{4}{|c|}{\textbf{Desktop}} \\ \hline
1 & 0.24s & 16s & 5.2s \\\hline
10 & 0.24s & 19s & 4.0s \\\hline
100 & 0.25s & 10s & 1.0s \\\hline
250 & 0.27s & 12s & 0.90s \\\hline
500 & 0.31s & 0.33s & 0.31s \\\hline
1000 & 0.38s & 0.42s & 0.40s \\\hline
1 & 240ms & 16,000ms & 5,200ms \\\hline
10 & 240ms & 19,000ms & 4,000ms \\\hline
100 & 250ms & 10,000ms & 1,000ms \\\hline
250 & 270ms & 12,000ms & 900ms \\\hline
500 & 310ms & 330ms & 310ms \\\hline
1000 & 380ms & 420ms & 400ms \\\hline
\end{tabular}
\caption{The results of the Multiple Listeners performance tests with 2 significant digits.}
\end{table}
@ -384,6 +418,8 @@
By contrast, the minimum recorded time begins to increase once the listener count reaches 250, with further increases at 500 and 1,000 listeners. This could suggest that while the system generally performs well under more distributed loads, the base overhead associated with managing multiple listeners starts to become more pronounced. Each listener requires some system resources to manage, so as the number of listeners increases, the minimum time necessary for processing might increase accordingly.
Therefore, the number of listeners initialized should be considered based on the expected traffic volume. This decision should balance the need for responsiveness against the capabilities of the system and its computational resources. In my tests, the overhead grew significantly once the number of listeners passed 100, so the number of concurrent listeners should likely not exceed that.
\subsubsection{Multiple monitors}
The final test explores the performance of the system when multiple Network Event Monitors are run simultaneously. Although the current design and usage of MEOW wouldn't typically involve running multiple instances of the same monitor, it's important to anticipate potential future scenarios. Given the ever-evolving nature of computational workflows and the potential for different types of network event monitors to be developed, it's plausible to imagine a future situation where more than one network event monitor could be active at the same time.
@ -398,19 +434,19 @@
\hline
\textbf{Monitor count} & \textbf{Minimum time} & \textbf{Maximum time} & \textbf{Average time} \\ \hline\hline
\multicolumn{4}{|c|}{\textbf{Laptop}} \\ \hline
1 & 0.63s & 17s & 5.6s \\\hline
10 & 0.45s & 25s & 6.6s \\\hline
100 & 0.38s & 18s & 4.4s \\\hline
250 & 0.40s & 13s & 1.8s \\\hline
500 & 0.44s & 2.9s & 0.72s \\\hline
1000 & 0.52s & 2.3s & 0.70s \\\hline\hline
1 & 630ms & 17,000ms & 5,600ms \\\hline
10 & 450ms & 25,000ms & 6,600ms \\\hline
100 & 380ms & 18,000ms & 4,400ms \\\hline
250 & 400ms & 13,000ms & 1,800ms \\\hline
500 & 440ms & 2,900ms & 720ms \\\hline
1000 & 520ms & 2,300ms & 700ms \\\hline\hline
\multicolumn{4}{|c|}{\textbf{Desktop}} \\ \hline
1 & 0.24s & 16s & 5.2s \\\hline
10 & 0.23s & 20s & 6.5s \\\hline
100 & 0.24s & 18s & 2.9s \\\hline
250 & 0.25s & 7.6s & 0.80s \\\hline
500 & 0.26s & 0.30s & 0.27s \\\hline
1000 & 0.29s & 0.30s & 0.29s \\\hline
1 & 240ms & 16,000ms & 5,200ms \\\hline
10 & 230ms & 20,000ms & 6,500ms \\\hline
100 & 240ms & 18,000ms & 2,900ms \\\hline
250 & 250ms & 7,600ms & 800ms \\\hline
500 & 260ms & 300ms & 270ms \\\hline
1000 & 290ms & 300ms & 290ms \\\hline
\end{tabular}
\caption{The results of the Multiple Monitors performance tests with 2 significant digits.}
\end{table}
@ -445,11 +481,17 @@
\begin{figure}[H]
\begin{center}
\includegraphics[width=0.6\textwidth]{src/BIDS.png}
\includegraphics[width=0.5\textwidth]{src/BIDS.png}
\end{center}
\caption{The structure of the BIDS workflow. Data is transferred to the user and to the cloud.}
\end{figure}
To illustrate the potential applications of network events in MEOW, I implemented a simplified workflow that involves two runners operating concurrently. These runners are initiated with almost identical, mirrored parameters.
On receiving a network event, each runner is configured to respond by transmitting a network event to its counterpart. This simple setup mirrors the dynamic interaction of components in more complex, real-life workflows. It shows how the introduction of network events can enable the construction of workflows that require elements to communicate and react to each other's status.
Although this setup is quite rudimentary, it provides a tangible demonstration of the capabilities unlocked by the inclusion of network events. Using this as a foundation, it's easy to see how more complex arrangements could be built to accommodate more sophisticated workflows. In the context of the BIDS workflow discussed earlier, for example, the intercommunication between runners could represent the transfer and validation of data between different stages of the workflow.
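The ping-pong structure of this toy workflow can be mimicked with plain sockets. In the hedged sketch below, two threads stand in for the two mirrored runners: each listens on its own port and, on receiving an event, sends one to its counterpart. The ports, the hop counter, and the termination rule are inventions for the example, not part of MEOW.

import socket
import threading

started = threading.Barrier(3)  # two runners plus the seeding thread

def runner(listen_port, peer_port, hops):
    with socket.socket() as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("127.0.0.1", listen_port))
        server.listen()
        started.wait()  # both sides must be listening before any event is sent
        while True:
            conn, _ = server.accept()
            with conn:
                n = int(conn.recv(64).decode() or "0")
            if n < hops:  # react to the event by notifying the counterpart
                with socket.create_connection(("127.0.0.1", peer_port)) as out:
                    out.sendall(str(n + 1).encode())
            if n >= hops - 1:  # both sides stop once the final hop arrives
                break

a = threading.Thread(target=runner, args=(9101, 9102, 10))
b = threading.Thread(target=runner, args=(9102, 9101, 10))
a.start(); b.start()
started.wait()
with socket.create_connection(("127.0.0.1", 9101)) as kick:
    kick.sendall(b"0")  # seed event; the runners then ping-pong ten times
a.join(); b.join()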
\subsubsection{Additional Monitors}\label{Additional Monitors}
The successful development and implementation of the network event monitor for MEOW serves as a precedent for the creation of additional monitors in the future. This framework could be utilized as a blueprint for developing new monitors tailored to meet specific demands, protocols, or security requirements.
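As a hint of what such a blueprint might look like, the sketch below separates a generic receive loop from a validate() hook that a protocol-specific monitor can override, echoing the earlier note that protocol-specific monitors could check incoming data more strictly. The class names and the JSON check are inventions for the example.

import json
import socket

class ToyMonitor:
    """Generic, data-type-agnostic receive loop (one event per call)."""

    def validate(self, data):
        return True  # the generic monitor accepts any bytes

    def receive_one(self, port):
        with socket.socket() as server:
            server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            server.bind(("127.0.0.1", port))
            server.listen()
            conn, _ = server.accept()
            with conn:
                data = conn.recv(4096)
        return data if self.validate(data) else None

class JsonMonitor(ToyMonitor):
    """Protocol-specific variant: accepts only UTF-8 encoded JSON."""

    def validate(self, data):
        try:
            json.loads(data.decode("utf-8"))
            return True
        except (UnicodeDecodeError, json.JSONDecodeError):
            return False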


@ -3,13 +3,13 @@ import matplotlib.pyplot as plt
plt.rcParams.update({'font.size':35})
def single_listener():
fig, (ax1,ax2) = plt.subplots(1,2)
fig, ((ax1,ax2),(ax3,ax4)) = plt.subplots(2,2)
x = [1,10,100,1000]
y11 = [0.00068,0.0047,0.045,00.63]
y12 = [0.00530,2.1000,7.200,17.00]
y13 = [0.00210,0.1800,0.860,05.60]
y11 = [0.62, 6.7, 44, 550]
y12 = [ 24, 4000, 10000, 22000]
y13 = [ 2.5, 200, 1200, 6800]
ax1.plot(x, y12, label="Maximum", linewidth=5)
ax1.plot(x, y13, label="Average", linewidth=5)
@ -20,16 +20,16 @@ def single_listener():
ax1.set_title("Laptop")
ax1.set_xlabel("Event count")
ax1.set_ylabel("Time")
ax1.set_ylabel("Total time (ms)")
ax1.set_xscale("log")
ax1.set_yscale("log")
###
y21 = [0.0004,0.003,0.025,00.24]
y22 = [0.0022,1.000,2.200,16.00]
y23 = [0.0016,0.056,0.440,05.20]
y21 = [0.4, 3, 25, 240]
y22 = [2.2, 1000, 2200, 16000]
y23 = [1.6, 56, 440, 5200]
ax2.plot(x, y22, label="Maximum", linewidth=5)
ax2.plot(x, y23, label="Average", linewidth=5)
@ -39,12 +39,51 @@ def single_listener():
ax2.set_title("Desktop")
ax2.set_xlabel("Event count")
ax2.set_ylabel("Time")
ax2.set_ylabel("Total time (ms)")
ax2.set_xscale("log")
ax2.set_yscale("log")
fig.set_figheight(12)
###
y31 = [0.62, 0.67, 0.44, 0.55]
y32 = [ 24, 400, 100, 22]
y33 = [ 2.5, 20, 12, 6.8]
ax3.plot(x, y32, label="Maximum", linewidth=5)
ax3.plot(x, y33, label="Average", linewidth=5)
ax3.plot(x, y31, label="Minimum", linewidth=5)
ax3.grid(linewidth=2)
# ax3.set_title("Laptop")
ax3.set_xlabel("Event count")
ax3.set_ylabel("Time per event (ms)")
ax3.set_xscale("log")
ax3.set_yscale("log")
###
y41 = [0.40, 0.30, 0.25, 0.24]
y42 = [ 2.2, 100, 22, 16]
y43 = [ 1.6, 5.6, 4.4, 5.2]
ax4.plot(x, y42, label="Maximum", linewidth=5)
ax4.plot(x, y43, label="Average", linewidth=5)
ax4.plot(x, y41, label="Minimum", linewidth=5)
ax4.grid(linewidth=2)
ax4.set_xlabel("Event count")
ax4.set_ylabel("Time per event (ms)")
ax4.set_xscale("log")
ax4.set_yscale("log")
###
fig.set_figheight(25)
fig.set_figwidth(35)
fig.set_dpi(100)
@ -55,9 +94,9 @@ def multiple_listeners():
x = [1,10,100,250,500,1000]
y11 = [00.63,00.46,00.42,0.51,0.59,00.92]
y12 = [17.00,25.00,20.00,7.90,1.60,03.24]
y13 = [05.60,07.60,07.10,2.90,0.72,01.49]
y11 = [630,460,420,510,590,920]
y12 = [17000,25000,20000,7900,1600,3200]
y13 = [5600,7600,7100,2900,720,1500]
ax1.plot(x, y12, label="Maximum", linewidth=5)
ax1.plot(x, y13, label="Average", linewidth=5)
@ -68,16 +107,16 @@ def multiple_listeners():
ax1.set_title("Laptop")
ax1.set_xlabel("Listener count")
ax1.set_ylabel("Time")
ax1.set_ylabel("Total time (ms)")
ax1.set_xscale("log")
ax1.set_yscale("log")
###
y21 = [00.24,00.24,00.25,00.27,0.31,0.38]
y22 = [16.00,19.00,10.00,12.00,0.33,0.42]
y23 = [05.20,04.00,01.00,00.90,0.31,0.4]
y21 = [240,240,250,270,310,380]
y22 = [16000,19000,10000,12000,330,420]
y23 = [5200,4000,1000,900,310,400]
ax2.plot(x, y22, label="Maximum", linewidth=5)
ax2.plot(x, y23, label="Average", linewidth=5)
@ -87,7 +126,7 @@ def multiple_listeners():
ax2.set_title("Desktop")
ax2.set_xlabel("Listener count")
ax2.set_ylabel("Time")
ax2.set_ylabel("Total time (ms)")
ax2.set_xscale("log")
ax2.set_yscale("log")
@ -103,9 +142,9 @@ def multiple_monitors():
x = [1,10,100,250,500,1000]
y11 = [00.63,00.45,00.38,00.40,0.44,0.52]
y12 = [17.00,25.00,18.00,13.00,2.90,2.30]
y13 = [05.60,06.60,04.40,01.80,0.72,0.70]
y11 = [630, 450, 380, 400, 440, 520]
y12 = [17000, 25000, 18000, 13000, 2900, 2300]
y13 = [5600, 6600, 4400, 1800, 720, 700]
ax1.plot(x, y12, label="Maximum", linewidth=5)
ax1.plot(x, y13, label="Average", linewidth=5)
@ -116,16 +155,16 @@ def multiple_monitors():
ax1.set_title("Laptop")
ax1.set_xlabel("Monitor count")
ax1.set_ylabel("Time")
ax1.set_ylabel("Total time (ms)")
ax1.set_xscale("log")
ax1.set_yscale("log")
###
y21 = [00.24,00.23,00.24,0.25,0.26,0.29]
y22 = [16.00,20.00,18.00,7.60,0.30,0.30]
y23 = [05.20,06.50,02.90,0.80,0.27,0.29]
y21 = [240,230,240,250,260,290]
y22 = [16000,20000,18000,7600,300,300]
y23 = [5200,6500,2900,800,270,290]
ax2.plot(x, y22, label="Maximum", linewidth=5)
ax2.plot(x, y23, label="Average", linewidth=5)
@ -135,7 +174,7 @@ def multiple_monitors():
ax2.set_title("Desktop")
ax2.set_xlabel("Monitor count")
ax2.set_ylabel("Time")
ax2.set_ylabel("Total time (ms)")
ax2.set_xscale("log")
ax2.set_yscale("log")

Binary file not shown.

Before: 160 KiB → After: 168 KiB

Binary file not shown.

Before: 151 KiB → After: 158 KiB

Binary file not shown.

Before: 190 KiB → After: 326 KiB


@ -27,4 +27,13 @@
howpublished = {\url{https://github.com/PatchOfScotland/meow_base}},
year = 2023,
commit = {933d568},
}
@misc{Implementation,
author = {Nikolaj Gade},
title = {meow\_base},
publisher = {Gitea},
journal = {Git repository},
howpublished = {\url{https://git.ingemanngade.net/NikolajDanger/meow_base}},
year = 2023,
}