<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Ultimate Comparison Framework</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>IPVS, University of Stuttgart</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>56</fpage>
      <lpage>59</lpage>
      <abstract>
        <p>Researchers come up with scientific criteria to compare different tools and spend huge effort to run the comparison. The presentation of the comparison results is an open issue. The “Ultimate Comparison Framework” is one solution enabling a) collection of evaluation data using Markdown and b) presentation of the data set in a web application.</p>
        <p>1 https://ultimate-comparisons.github.io/ultimate-deployment-toolcomparison/ 2 https://ultimate-comparisons.github.io/ultimate-IoT-platform-comparison/</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>Oliver Kopp</p>
    </sec>
    <sec id="sec-2">
      <title>Introduction</title>
      <p>
        The accumulation of knowledge is an essential condition for a field to be scientific and
to develop [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. In the field of information systems research in general and business
process management in particular, literature reviews are conducted to harvest the body of
knowledge [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. There is still a lack of sharing of analysis results, especially in the field of
business process management (cf. Recker and Mendling [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]). The Ultimate Comparison
Framework is an approach to fill this gap: it supports publishing review results in an
open way. It further supports updating the research results by established collaborative
software engineering techniques such as GitHub pull requests [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The current focus of
the framework is on the comparison of tools and technologies, but it is not limited to that.
      </p>
      <p>
        The development of the Ultimate Comparison Framework was driven by a) sustaining
search results in the context of finding the most fitting tool for a task and b) offering a
framework for comparative studies crafted by students in the context of their software
engineering training (cf. “Fachstudie” [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]).
      </p>
      <p>The framework is called “Ultimate Comparison Framework” because it offers the
creation of multiple “Ultimate Comparisons” of different tools in different settings. The
name “Ultimate Comparison” stems from the claim that the framework is easy to use
(covering creation and maintenance of data as well as presenting data) and that it will
be used by many researchers to present their research results. This paper is a first step
in this direction by making the framework known to a broader community.</p>
      <p>Users of the framework are a) researchers wanting to sustain their survey results, b)
researchers investigating other surveys, and c) industry users interested in introducing a new
tool or framework and aiming for a scientifically grounded comparison of existing work.</p>
      <p>Already published “ultimate comparisons” include: Comparison of Cloud
Deployment and Management Tools1, Comparison of IoT Platforms2, Comparison of
Time-Series Databases3, Comparison of Message Brokers4, Comparison of Graph Libraries for
JavaScript5, Comparison of Web-based IDEs6, Comparison of Literature Management
Software7, and Comparison of LaTeX Building Helpers8. The list of available comparisons
is constantly updated at https://ultimate-comparisons.github.io/. In other words:
the Ultimate Comparison Framework is not limited to a fixed set of tools; it can be
used to compare any tools for which comparison criteria can be defined and the results can be
presented as a table.</p>
      <p>
        The idea stems from the PaaSfinder web application [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ] (available at https://paasfinder.org/), which offers similar functionality. In contrast
to PaaSfinder, the Ultimate Comparison Framework is hosted as a website and stores its data in Markdown.
It is the first tool based on plain text (Markdown) for data storage and on GitHub for
data collection and data presentation.
      </p>
      <p>
        This paper demos the framework by showing the Ultimate Comparison of Open
Source Time-Series-Databases [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. We explain the reader-facing interface (Sect. 2) and
the contributor-facing interface (Sect. 3). After sketching the implementation (Sect. 4),
we provide a short summary and outlook (Sect. 5).
      </p>
    </sec>
    <sec id="sec-3">
      <title>Reader-facing Interface</title>
      <p>A person interested in comparison results opens the web page presenting the Ultimate
Comparison. They see a table showing the compared tools as row headings and criteria as
column headings. In each cell, the matched criteria values are listed. For instance, if a tool is
offered both under the MIT license and a proprietary license, the cell for “License” shows “MIT”
and “proprietary”. The person can sort the table by clicking on a criterion name. It is also
possible to show details on each tool. An example table is presented in Fig. 1.</p>
    </sec>
    <sec id="sec-4">
      <title>Contributor-facing Interface</title>
      <p>Data in a comparison may become outdated, or data may be missing. In that case, a user can update the
data using the typical GitHub flow9. The user searches for the Markdown file containing
the data of the tool (for instance, the time-series database Timely), modifies it accordingly,
and sends a pull request.</p>
      <p>In case a new comparison should be set up, its name, description, and comparison
criteria need to be configured. This is enabled by a YAML configuration file. Each criterion
bundles a set of possible values: possible data types for a criterion are text, enums, and
numbers. For instance, the code license can be enumerated, and results of performance
measurements can be provided. Each criterion is input as a Markdown heading. In case
of text, the text has to be put below the heading. In case of an enum, all matching enum
values are listed as a Markdown list (e.g., “- Apache-2.0” below “## License” in the case of
a single license). In case of numbers, the number is given below the heading.</p>
      <p>3 https://tsdbbench.github.io/Ultimate-TSDB-Comparison/
4 https://ultimate-comparisons.github.io/ultimate-message-broker-comparison/
5 https://ultimate-comparisons.github.io/ultimate-graphframework-comparison/
6 https://ultimate-comparisons.github.io/ultimate-webIDE-comparison/
7 https://ultimate-comparisons.github.io/ultimate-reference-managementsoftware-comparison/
8 https://ultimate-comparisons.github.io/ultimate-latex-makers-comparison/
9 https://guides.github.com/introduction/flow/</p>
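      <p>For illustration, a data file following the conventions just described might look as follows. The tool name “Timely” and the enum value “Apache-2.0” below “## License” appear in the text; the description text is an illustrative assumption, not taken from the actual comparison.</p>
      <preformat># Timely

## Description
A time-series database built on top of Apache Accumulo.

## License
- Apache-2.0</preformat>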
      <p>When designing a criterion, it is important to ensure that it can be used for comparing
different tools. For instance, if the performance of tools depends on the environment, the
chosen environment should be described. Alternatively, an Ultimate Comparison for
each different environment could be set up.</p>
      <p>To set up an ultimate comparison, basic knowledge of GitHub Pages and the usage
of CI/CD tools is necessary, as is the ability to write YAML, an essential software
engineering skill. To input data into an ultimate comparison, knowledge of using GitHub
and of writing Markdown is necessary.</p>
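      <p>A minimal sketch of such a YAML configuration is shown below. All key names and values here are assumptions for illustration; the paper does not show the actual configuration schema of the framework.</p>
      <preformat>title: Ultimate TSDB Comparison            # name of the comparison (key name assumed)
description: Comparison of open-source time-series databases
criteria:                                  # criteria and their data types (schema assumed)
  License:
    type: enum
    values:
      - Apache-2.0
      - MIT
      - proprietary
  Description:
    type: text
  WritePerformance:
    type: number</preformat>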
    </sec>
    <sec id="sec-5">
      <title>Implementation</title>
      <p>The web interface is implemented using Angular. The data is converted from Markdown
to JSON using Java on a CI server (currently TravisCI). The JSON is read by the web
interface running on the client side in the browser. The web interface is hosted in the branch
“gh-pages” of the respective comparison and offered to users by the GitHub Pages
offering10. The source code of the implementation is open source and available at
https://github.com/ultimate-comparisons/ultimate-comparison-framework.</p>
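      <p>To make the pipeline concrete, the JSON consumed by the Angular front end could have a shape like the following. The exact structure is an assumption, sketched only from the criterion types described in Sect. 3; the real converter output may differ.</p>
      <preformat>{
  "tools": [
    {
      "name": "Timely",
      "criteria": {
        "License": ["Apache-2.0"],
        "Description": "A time-series database."
      }
    }
  ]
}</preformat>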
    </sec>
    <sec id="sec-6">
      <title>Conclusion and Outlook</title>
      <p>This paper presented the Ultimate Comparison Framework, which offers a collaborative
way to compare tools. It stores the data in Markdown files, enabling the use of
arbitrary text editors to add or modify data. The data is rendered as a static HTML page
showing all criteria, the evaluated tools, and the fulfillment of each criterion in a table.
It is an open discussion point whether tables are the best option to present comparison
results. Future work has to evaluate other rendering possibilities such as charts.</p>
      <p>10 https://pages.github.com/</p>
      <p>Facts such as write performance can also be captured in an Ultimate Comparison. The
framework currently offers to fetch the last commit and to decide based on that fact whether
development has stalled. There is currently no way to trigger a complete setup of each system, measure
the performance, and update the result in an Ultimate Comparison. This is left as future
work, for instance for TSDBBench11.</p>
      <p>Currently, no advanced analysis functionality is offered. One idea is to offer
clustering of the results. Currently, this can only be done by defining an enum criterion,
where each enum value represents one cluster. Then, the user can sort by that criterion.</p>
      <p>The next development ideas are a) offering a browser-based user interface to input
data (instead of relying on GitHub) and b) using Wikipedia tables as data input, either
as a data source for displaying or as a data source for synchronizing the local Markdown
files. Future work includes measuring the time required to publish and maintain
results in comparison to other approaches such as scientific publications or tables in
Wikipedia articles.</p>
      <p>Acknowledgments We want to thank Stefan Kolb for the idea and inspiration of this tool.
Further, we want to thank Andreas Bader, Armin Hüneburg, and Christoph Kleine for
discussions on the UI and for driving the implementation. This work is partially funded by
the Federal Ministry for Economic Affairs and Energy (BMWi) projects Industrial
Communication for Factories (01MA17008G), NEMAR (03ET4018), and SmartOrchestra
(01MD16001F).</p>
      <p>11 https://tsdbbench.github.io/</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Bader</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al.:
          <article-title>Survey and Comparison of Open Source Time Series Databases</article-title>
          . In: BTW2017 Workshops. GI e.V. (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Gousios</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pinzger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van Deursen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>An exploratory study of the pull-based software development model</article-title>
          .
          <source>In: ICSE</source>
          . ACM Press (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>J.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmidt</surname>
            ,
            <given-names>F.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jackson</surname>
            ,
            <given-names>G.B.</given-names>
          </string-name>
          :
          <article-title>Meta-Analysis: Cumulating Research Findings across Studies</article-title>
          .
          <source>Educational Researcher</source>
          <volume>15</volume>
          (
          <issue>8</issue>
          ),
          <fpage>20</fpage>
          (Oct
          <year>1986</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Kolb</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>On the Portability of Applications in Platform as a Service</article-title>
          .
          <source>Ph.D. thesis</source>
          , University of Bamberg, Germany (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Kolb</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirtz</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Towards Application Portability in Platform as a Service</article-title>
          .
          <source>In: SOSE</source>
          . IEEE (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ludewig</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Erfahrungen bei der Lehre des Software Engineering</article-title>
          . In: Software Engineering im Unterricht der Hochschulen. dpunkt.verlag (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Paré</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Trudel</surname>
            ,
            <given-names>M.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jaana</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kitsiou</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Synthesizing information systems knowledge: A typology of literature reviews</article-title>
          .
          <source>Information &amp; Management</source>
          <volume>52</volume>
          (
          <issue>2</issue>
          ),
          <fpage>183</fpage>
          -
          <lpage>199</lpage>
          (
          <year>Mar 2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Recker</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mendling</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>The State of the Art of Business Process Management Research as Published in the BPM Conference</article-title>
          .
          <source>Business &amp; Information Systems Engineering</source>
          <volume>58</volume>
          (
          <issue>1</issue>
          ),
          <fpage>55</fpage>
          -
          <lpage>72</lpage>
          (
          <year>Nov 2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          All links were last followed on February 19, 2020.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>