Comparing Research Contributions in a Scholarly Knowledge Graph

Allard Oelen, L3S Research Center, Leibniz University of Hannover, oelen@l3s.de
Mohamad Yaser Jaradeh, L3S Research Center, Leibniz University of Hannover, jaradeh@l3s.de
Kheir Eddine Farfar, TIB Leibniz Information Centre for Science and Technology, kheir.farfar@tib.eu
Markus Stocker, TIB Leibniz Information Centre for Science and Technology, markus.stocker@tib.eu
Sören Auer, TIB Leibniz Information Centre for Science and Technology, auer@tib.eu

ABSTRACT
Conducting a scientific literature review is a time-consuming activity. This holds for both finding and comparing the related literature. In this paper, we present a workflow and system designed to, among other things, compare research contributions in a scientific knowledge graph. In order to compare contributions, multiple tasks are performed, including finding similar contributions, mapping properties and visualizing the comparison. The presented workflow is implemented in the Open Research Knowledge Graph (ORKG), which enables researchers to find and compare related literature. A preliminary evaluation has been conducted with researchers. Results show that researchers are satisfied with the usability of the user interface, but more importantly, they acknowledge the need and usefulness of contribution comparisons.

KEYWORDS
Scholarly Communication; Scholarly Information Systems; Scholarly Knowledge Comparison; Comparison User Interface; Digital Libraries

Copyright ©2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). K-CAP'19 SciKnow, November 2019, Marina del Rey, CA, USA.

1 INTRODUCTION
When conducting scientific research, finding and comparing state-of-the-art literature is an important activity. Mainly due to the unstructured way of publishing scholarly knowledge, it is currently time consuming to find and compare related literature. The Open Research Knowledge Graph (ORKG, http://orkg.org) [9] is a system designed to acquire, publish and process structured scholarly knowledge published in the scholarly literature. One of the main features of the ORKG is the ability to automatically compare related literature.

The benefits of having scholarly knowledge structured include, among others, the ability to easily find, retrieve but also compare such knowledge. Comparing resources (scholarly knowledge or other) can be useful in many contexts [15], for instance resources describing cities for their population, area and other attributes. Comparing structured data is particularly useful when the compared resources are described with similar or even the same properties. Although knowledge graphs are of course structured, resources—even of the same type—are often described using different and differently named attributes. Moreover, different hierarchical structures can complicate a comparison of two resources.

In this paper, we present a workflow that describes how to select and compare resources describing scholarly knowledge in graph databases. We implement this workflow in the ORKG, which enables the comparison of related literature, including state-of-the-art overviews. In the ORKG, these resources are specifically called research contributions. A research contribution relates the research problem addressed by the contribution, the research method and (at least one) research result. Currently, we do not further constrain the description of these resources. Users can adopt arbitrary third-party vocabularies to describe problems, methods, and results. We thus tackle the following research questions:
• RQ1: How to compare research contributions in a graph based system?
• RQ2: How to effectively specify and visualize research contribution comparisons in a user interface?
2 RELATED WORK
Resource (or entity) comparison is a well-known task in a variety of information systems, for instance in e-commerce or hotel booking systems. In e-commerce, products can be compared in order to help customers during the decision process [19]. These comparisons are often based on a predefined set of properties to compare (e.g., price and color). This does not apply when comparing resources in a community-created knowledge graph where there is no predefined set of properties. Petrova et al. created a framework to compare heterogeneous entities in RDF graphs using SPARQL queries [15]. In this framework, both the similarities and differences between entities are determined.

The task of comparing research contributions in a graph system can be decomposed into multiple sub tasks. The first sub task is finding suitable contributions to compare. The most suitable comparison candidates are similar resources. The actual retrieval of similar resources can be seen as an information retrieval problem, with techniques such as TF-IDF [12]. Measures to calculate the structural similarity between RDF graphs have been proposed in the literature (e.g., Maillot et al. [11]). A second sub task is matching semantically similar predicates. Determining the similarity of resources is a recurring problem in dataset interlinking [1] or the more general task of ontology alignment/matching [17]. For property mapping, techniques of interest include edit distance (e.g., Jaro-Winkler [18] or Levenshtein [10]) and vector distance. Gromann and Declerck evaluated the performance of word vectors for ontology alignment and found that FastText [2] performed best [7].

As suggested above, an effective automated related literature comparison relies on scholarly knowledge being structured. There is substantial related work on representing scholarly knowledge in structured form. Building on the work of numerous philosophers of science, Hars [8] proposed a comprehensive scientific knowledge model that includes concepts such as theory, methodology and statement. More recently, ontologies were engineered to describe different aspects of the scholarly communication process. Among them are CiTO (http://purl.org/spar/cito) and C4O (http://purl.org/spar/c4o) for recording citation related concepts, FaBiO (http://purl.org/spar/fabio) and BiRO (http://purl.org/spar/biro) for capturing bibliographic data, and PRO (http://purl.org/spar/pro), PSO (http://purl.org/spar/pso) and PWO (http://purl.org/spar/pwo) for the publication process. Additionally, DoCO (http://purl.org/spar/doco) can be used to describe the structure of a document, which can be complemented by DEO (http://purl.org/spar/deo) to also include rhetorical elements to describe the scientific discourse [4]. Among others, these ontologies are part of the Semantic Publishing and Referencing Ontologies (SPAR), a collection of ontologies that can be used to describe scholarly publishing and referencing of documents [6, 13, 14]. Ruiz Iniesta and Corcho [16] reviewed the state-of-the-art ontologies to describe scholarly articles.

These ontologies are designed to capture primarily metadata about and structure of scholarly articles, not the actual research contributions (scholarly knowledge) communicated in articles. In order to conduct a useful comparison, only comparing article metadata and structure is oftentimes not sufficient. Rather, a comparison should include (structured descriptions of) problem, materials, methods, results and perhaps other aspects of scholarly work. The Open Research Knowledge Graph (ORKG) [9] supports creating such structured descriptions of scholarly knowledge. We thus embed the research contribution comparison presented in this paper in the larger effort of the ORKG project, which aims to advance scholarly communication infrastructure.
3 WORKFLOW
We present a workflow that describes how to perform a comparison of research contributions, hereafter more generally referred to as resources. This workflow consists of four different steps: 1) select comparison candidates, 2) select related statements, 3) map properties and 4) visualize comparison. The workflow is depicted in Figure 1. Section 4 presents the implementation for the individual steps. We now discuss each step of the workflow in more detail.

Figure 1: Resource comparison workflow.

3.1 Select comparison candidates
To perform a comparison, a starting resource is needed. This resource is called the main resource and is always manually selected by a user. The main resource is compared against other comparison resources. There are two different approaches for selecting the comparison resources. The first approach automatically selects comparison resources based on similarity. The second approach lets users manually select resources.

3.1.1 Find similar resources. Comparing resources only makes sense when resources can sensibly be compared. For example, it does not make (much) sense to compare a city (e.g., dbpedia:berlin) to a car brand (e.g., dbpedia:volkswagen). This of course does not only apply to comparison in knowledge graphs but also applies to comparison in other kinds of databases. We thus argue that it only makes sense to compare resources that are similar. More specifically, resources that share the same (or a similar set of) properties are good comparison candidates. To illustrate this, consider the following resources: dbpedia:berlin and dbpedia:new_york_city. Both resources share the property dbo:populationTotal, which makes them suitable for comparison. Finding similar resources is therefore based on finding resources that share the same or similar properties.

To do so, each comparison resource is converted into a string. This string is generated by concatenating all properties of the resource (Algorithm 1). The resulting string is stored. TF-IDF [12] is used to query the store and the string for the main resource is used as query. The search returns the most similar resources. The top-k resources are selected and form a set of resources that is used in the next step.

Algorithm 1 Paper indexing
1: procedure IndexPaper(paper)
2:   for each property in paper do
3:     propertyString ← propertyString + property
4:   save propertyString
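To make this step concrete, the following minimal Python sketch indexes property strings with scikit-learn's TfidfVectorizer and returns the top-k most similar resources. It illustrates the idea rather than the ORKG implementation: the in-memory index, the property_string helper and the example contributions are assumptions made only for this sketch.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def property_string(resource):
        """Concatenate all property labels of a resource (cf. Algorithm 1)."""
        return " ".join(resource["properties"])

    def top_k_similar(main_resource, candidates, k=5):
        """Return the k candidates whose property strings are most similar
        to the main resource under a TF-IDF representation."""
        corpus = [property_string(r) for r in candidates]
        vectorizer = TfidfVectorizer()
        index = vectorizer.fit_transform(corpus)               # one row per candidate
        query = vectorizer.transform([property_string(main_resource)])
        scores = cosine_similarity(query, index)[0]
        ranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
        return ranked[:k]

    # Example: contributions described only by their property labels (illustrative data).
    contributions = [
        {"id": "C1", "properties": ["research problem", "method", "result"]},
        {"id": "C2", "properties": ["research problem", "dataset", "accuracy"]},
        {"id": "C3", "properties": ["city", "population", "area"]},
    ]
    main = {"id": "C0", "properties": ["research problem", "method", "accuracy"]}
    print(top_k_similar(main, contributions, k=2))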
3.1.2 Manual selection. There are scenarios where comparison based on similarity is not suitable. For example, a user may want to compare Germany and France to see which country has the highest GDP. In this case, there is no need to automatically select resources to be compared because they are determined by the user. Therefore, manual selection of resources should also be supported. How to manually select resources is an implementation detail and a proposal for how to do this is presented in Section 4. The result of manual selection is a set of resources used for the comparison.

3.2 Select related statements
This step selects the statements related to the resources to be compared, as returned in the previous step. Statements are selected transitively to match resources in subject or object position. This search is performed until a predefined maximum transitive depth δ has been reached. The intuition is that the deeper a property is nested, the less likely is its relevance for the comparison.
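A compact sketch of this selection step is given below, assuming statements are available as (subject, predicate, object) triples and that the maximum depth δ is passed as a parameter; the function name and triple list are illustrative and do not reflect the ORKG back end API.

    def related_statements(resource_id, statements, max_depth):
        """Collect the statements reachable from a resource, following subject and
        object positions transitively up to a maximum depth (the δ of Section 3.2)."""
        selected = []
        visited = set()
        frontier = {resource_id}
        for _ in range(max_depth):
            next_frontier = set()
            for s, p, o in statements:
                if (s in frontier or o in frontier) and (s, p, o) not in visited:
                    visited.add((s, p, o))
                    selected.append((s, p, o))
                    next_frontier.update({s, o} - frontier)
            frontier = next_frontier
            if not frontier:
                break
        return selected

    # Example: statements nested three levels deep around contribution "C1".
    triples = [
        ("C1", "hasMethod", "M1"),
        ("M1", "usesTool", "T1"),
        ("T1", "hasLicense", "L1"),
    ]
    print(related_statements("C1", triples, max_depth=2))  # the third triple is too deep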
3.3 Map properties
As described in the first step, comparisons are built using shared or similar properties of resources. In case the same property has been used between resources, these properties are grouped and form one comparison row. However, often different properties are used to describe the same concept. This occurs for various reasons. The most obvious reason is when two different ontologies are used to describe the same property. For example, for describing the population of a city, DBpedia uses dbo:populationTotal while WikiData uses WikiData:population (actually the property identifier is P1082; for the purpose here we use the label). When comparing resources, these properties should be considered as equivalent. Especially for community-created knowledge graphs, differently identified properties likely exist that are, in fact, equivalent.

To overcome this problem, we use FastText [2] word embeddings to determine the similarity of properties. If the similarity is higher than a predetermined threshold τ, the properties are considered the same and are grouped. In the end, each group of predicates will be visualized as one row in the comparison table. The result of this step is a list of statements for each comparison resource, where similar predicates are grouped.

The similarity matrix γ is generated as

    γ = [ cos(p⃗_i, p⃗_j) ]    (1)

with cos(·) as the cosine similarity of the vector embeddings of a predicate pair (p_i, p_j), with p_i, p_j ∈ P, whereby P is the set of all predicates. Furthermore, we create a mask matrix Φ that selects predicates of resources c_i ∈ C, whereby C is the set of resources to be compared. Formally,

    Φ_{i,j} = 1 if p_j ∈ c_i, and Φ_{i,j} = 0 otherwise    (2)

Next, for each selected predicate p we create the matrix φ that slices Φ to include only similar predicates. Formally,

    φ_{i,j} = (Φ_{i,j}) for c_i ∈ C and p_j ∈ sim(p)    (3)

where sim(p) is the set of predicates with similarity values γ[p] ≥ τ with predicate p. Finally, φ is used to efficiently compute the common set of predicates [9]. This process is displayed in Algorithm 2.

Algorithm 2 Property mapping
1: procedure MapProperties(properties, threshold)
2:   for each property p1 ∈ properties do
3:     for each property p2 ∈ properties do
4:       similarity ← cos(FastText(p1), FastText(p2))
5:       if similarity > threshold then
6:         similarProps ← similarProps ∪ {p1, p2}
7: return similarProps
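As an illustration of Equations (1)-(3) and Algorithm 2, the sketch below groups property labels whose embedding similarity reaches the threshold τ. The embedding function is injected so the example stays self-contained; in the actual setting it would be backed by a FastText model, and the toy vectors and the simple group-merging strategy are assumptions of this sketch, not necessarily how the ORKG service organizes the groups.

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    def map_properties(properties, embed, threshold=0.9):
        """Group properties whose embedding similarity reaches the threshold τ
        (cf. Algorithm 2). `embed` maps a property label to a vector; here it is
        a stand-in for a FastText lookup."""
        vectors = {p: embed(p) for p in properties}
        groups = {p: {p} for p in properties}           # start with singleton groups
        for i, p1 in enumerate(properties):
            for p2 in properties[i + 1:]:
                if cosine(vectors[p1], vectors[p2]) >= threshold:
                    merged = groups[p1] | groups[p2]     # merge the two groups
                    for p in merged:
                        groups[p] = merged
        # each distinct group becomes one row of the comparison table
        return {frozenset(g) for g in groups.values()}

    # Toy two-dimensional vectors standing in for FastText embeddings.
    toy = {"population": [1.0, 0.1], "populationTotal": [0.98, 0.15], "area": [0.0, 1.0]}
    print(map_properties(list(toy), lambda p: np.array(toy[p]), threshold=0.9))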
3.4 Visualize comparison
The final step of the workflow is to visualize the comparison and present the data in a human understandable format. Tabular format is often appropriate for visualizing comparisons. Another aspect of the visualization is determining which properties should be displayed and which ones should be hidden. A property is displayed when it is shared among a predetermined amount τ of papers, where τ mainly depends on comparison use and can be determined based on the total amount of resources in the comparison.

Another aspect of comparison visualization is the possibility to customize the resulting table. This is needed because of the similarity-based matching of properties and the use of predetermined thresholds. For example, users should be able to enable or disable properties. They should also get feedback on property provenance (i.e., the property path). Ultimately, this contributes to a better user experience, with the possibility to manually correct mistakes made by the system.
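A minimal sketch of how such a table could be assembled is shown below, assuming each contribution is given as a mapping from (already grouped) property labels to values and that a property becomes a row only when at least τ contributions share it. The data structures are illustrative and do not mirror the ORKG front end model.

    def build_comparison_table(contributions, min_shared=2):
        """Build rows of a comparison table from per-contribution property/value maps.
        A property becomes a row only if at least `min_shared` contributions use it
        (the τ of Section 3.4)."""
        all_properties = {p for c in contributions.values() for p in c}
        rows = {}
        for prop in sorted(all_properties):
            values = {name: c.get(prop, "-") for name, c in contributions.items()}
            shared = sum(1 for v in values.values() if v != "-")
            if shared >= min_shared:
                rows[prop] = values
        return rows

    # Illustrative contributions; only the shared property becomes a table row.
    contributions = {
        "Contribution 1": {"research problem": "entity comparison", "method": "SPARQL"},
        "Contribution 2": {"research problem": "entity comparison", "result": "F1 0.8"},
    }
    for prop, values in build_comparison_table(contributions).items():
        print(prop, values)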
4 IMPLEMENTATION
The presented four-step workflow for comparing resources is implemented in the Open Research Knowledge Graph (ORKG), specifically to compare research contributions as a special type of resource. As a companion to the description here, an online video summarizes and demonstrates the ORKG comparison feature (https://youtu.be/mbe-cVyW_us).

The user interface of the comparison feature is seamlessly integrated with the ORKG front end, which is written in JavaScript using the React framework (https://reactjs.org/) and is publicly available online (https://gitlab.com/TIBHannover/orkg/orkg-frontend). The back end of the comparison feature is written as a service separate from the ORKG back end, is written in Python and is available online (https://gitlab.com/TIBHannover/orkg/orkg-similarity).

In the ORKG, each paper consists of at least one research contribution which addresses at least one research problem and is further described with contribution data including for instance materials, methods, implementation, results or other aspects. In the ORKG, it is research contributions that are compared rather than papers.

We will now discuss each step of the presented workflow to illustrate how it is implemented in the ORKG.

4.1 Select comparison candidates
Both approaches presented, namely find similar resources and manual selection, are implemented in the ORKG. The reason for implementing both is that they complement each other. Conducting a comparison based on similarity is useful when a user wants to compare a certain contribution with other (automatically determined similar) contributions (for example, addressing the same problem), while manual contribution selection can be helpful to compare a user-defined set of contributions. Figure 2 shows both approaches. As depicted, three similar contributions are suggested to the user. (The corresponding similarity percentage is displayed next to the paper title.) These suggested contributions can be directly compared. In contrast, the manual approach works similarly to an online shopping cart. When the "Add to comparison" checkbox is checked, a box is displayed at the bottom of the page. This box shows the manually selected contributions that will be used for the comparison (Figure 3).

Figure 2: Implementation of workflow step 1, select comparison candidates, showing both the similarity-based and the manual selection approaches.

Figure 3: Box showing the manually selected contributions.

To retrieve contributions that are similar to a given contribution, we developed an API endpoint. This endpoint takes the given contribution as input and returns five similar contributions (of which three are displayed). For performance reasons, each contribution is indexed by concatenating the properties to a string (Section 3.1). This string is stored inside a document-oriented database. The indexing happens as soon as a contribution is added. The result of this step is a set of contribution IDs used to perform the comparison.

4.2 Select related statements
An additional API endpoint was developed for the comparison. This endpoint takes the set of contribution IDs as input and returns the data used to display the comparison. The comparison endpoint is responsible for steps two and three of the workflow: selecting the related statements and mapping the properties. For each listed contribution, an ORKG back end query selects all related statements. This is done as described in Section 3.2. The process of selecting statements is repeated until depth δ = 5. This number is chosen to include statements that are not directly related to the resource, but to exclude statements that are less relevant because they are nested too deep.

4.3 Map properties
Using the API of the previous step, the properties of the selected statements are mapped. As described in the workflow, for each property pair the similarity is calculated using word embeddings. If the similarity is at least the threshold τ = 0.9, the properties are considered to be equivalent and are grouped. The threshold was determined by trial and error. The results are then returned by the API to the UI, where they are displayed.

4.4 Visualize comparison
Because the comparisons are made for humans, visualizing them effectively is essential and therefore we invested considerable effort in this aspect. Figure 4 displays a comparison for research contributions related to visualization tools published in the literature. In this example, four properties are displayed. Literals are displayed as plain text while resources are displayed as links. When a resource link is selected, a popup is displayed showing the statements related to this resource. By default, only properties that are common to at least two contributions (τ ≥ 2) are displayed. The UI implements some additional features that are particularly useful to compare research contributions. We will now discuss these features in more detail.

Figure 4: Comparison of research contributions related to visualization tools.

4.4.1 Customization. Users can customize comparisons according to their needs. The customization includes transposing the table and customizing the properties. The properties can be enabled/disabled and they can be sorted. Especially the option to disable properties is helpful when resources with many statements are compared. Only properties considered relevant to the user can be selected for display. Customizing the comparison table can be useful before exporting or sharing the comparison.

4.4.2 Sharing and persistence. The comparison can be shared using a link. For sharing the comparison, a persistence mechanism has been built in. Especially when sharing the comparison for research purposes, it is important to share the original comparison. Since resource descriptions may change over time, comparisons may also change. To support persistence, the whole state of the comparison is stored in a document-based database.

4.4.3 Export. It is possible to export comparisons in the formats PDF, CSV and LaTeX. Especially the LaTeX export is useful for the ORKG, since the export can be directly used in research papers. In addition to the generated LaTeX table, a BibTeX file is generated containing the bibliographic information of the papers used in the comparison. Also, a link referring back to the comparison in the ORKG is shown as a footnote. Just like the shareable link, this link is persistent and is therefore suitable for use in articles.
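To make the export step concrete, the following simplified sketch renders comparison rows as a LaTeX tabular. It is only a stand-in for the ORKG export (it generates neither the BibTeX file nor the persistent back-link) and reuses the illustrative row structure (property mapped to one value per contribution) from the earlier sketch.

    def to_latex(rows, contribution_names):
        """Render comparison rows (property -> {contribution: value}) as a LaTeX tabular."""
        header = " & ".join(["Property"] + contribution_names) + r" \\ \hline"
        lines = [r"\begin{tabular}{l" + "c" * len(contribution_names) + "}", header]
        for prop, values in rows.items():
            cells = [values.get(name, "-") for name in contribution_names]
            lines.append(" & ".join([prop] + cells) + r" \\")
        lines.append(r"\end{tabular}")
        return "\n".join(lines)

    # Illustrative row data only.
    rows = {"research problem": {"Contribution 1": "entity comparison",
                                 "Contribution 2": "entity comparison"}}
    print(to_latex(rows, ["Contribution 1", "Contribution 2"]))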
5 PRELIMINARY EVALUATION
In this section we present a preliminary evaluation of the implemented comparison functionality.

5.1 User evaluation
A qualitative evaluation was conducted to determine the usability of our implementation. Additionally, the evaluation is used to determine the usefulness of the comparison functionality in general. The usability is determined using the System Usability Scale (SUS) [3]. In total, five participants were part of the evaluation. All participants are researchers. At the start of the evaluation, each participant was asked to watch a video that explained the basic concepts of the comparison functionality. Afterwards, an instructor asked the participant to perform certain tasks in the system, specifically creating a comparison (based on similarity and manually), customizing this comparison and exporting the comparison. The tasks were chosen to include all main functionalities of the comparison feature. In case a participant was not able to complete a task, they were allowed to ask the instructor for help. After interacting with the system, users were asked to fill out an online questionnaire (https://forms.gle/x2t7SYkAzCkCekUp8). The questionnaire contained the ten questions of the SUS; each question could be answered on a scale from 1 (strongly disagree) to 5 (strongly agree). Afterwards, a short interview was conducted to get the opinions of the participants on the usefulness of the comparison feature.

The SUS score ranges from 0 to 100. In our evaluation, the SUS score is 81, which is considered excellent [5]. Figure 5 depicts the score per question. This indicates that participants did not have problems with using the user interface to create, customize and export their related work comparisons. This is in line with the positive verbal feedback that was provided to the instructor during the evaluation.

Figure 5: SUS score by question (higher is better).

In addition to the usability questions, three questions were asked related to the usefulness of the related literature comparison functionality. All participants agreed that such a functionality is useful and can potentially save them time while conducting research. Finally, participants were asked to give additional feedback. Among others, participant #1 remarked "It would be nice if it is explained how similarity of papers is determined"; participant #3 suggested "Show text labels next to properties, explaining what this property means"; participant #5 stated "It should be possible to split properties that are mapped, but are in fact different".
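For reference, the overall SUS score is computed from the ten item responses following the standard scheme of Brooke [3]: odd-numbered items contribute the response minus 1, even-numbered items contribute 5 minus the response, and the sum is scaled by 2.5. The sketch below illustrates this computation; the example responses are made up and are not our participants' data.

    def sus_score(responses):
        """Compute the System Usability Scale score from ten 1-5 responses."""
        assert len(responses) == 10
        contributions = [
            (r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,... positive; 2,4,6,... negative
            for i, r in enumerate(responses)
        ]
        return 2.5 * sum(contributions)

    # Illustrative responses only.
    print(sus_score([5, 2, 4, 1, 5, 2, 4, 1, 5, 2]))  # -> 87.5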
5.2 Performance evaluation
In order to evaluate the performance of the overall comparison, we compared the implemented ORKG approach to a baseline approach for comparing multiple resources. Table 1 displays the time needed for a comparison for both the baseline and the ORKG approach. In total, eight papers are compared with on average ten properties per paper. In the baseline approach, the "Map properties" step does not scale well. This is because each property is compared against all other properties. If multiple contributions are selected, the amount of property similarity checks grows exponentially. As displayed in the table, the ORKG approach outperforms the baseline approach. The total amount of papers used for the evaluation is limited to eight because the baseline approach does not scale to larger sets.

Table 1: Time (in seconds) needed to perform comparisons with 2-8 research contributions using the baseline and ORKG approaches.

                Number of compared research contributions
                2        3       4        5     6       7       8
    Baseline    0.00026  0.1714  0.763    4.99  112.74  1772.8  14421
    ORKG        0.0035   0.0013  0.01158  0.02  0.0206  0.0189  0.0204

6 DISCUSSION & FUTURE WORK
The aim of the contribution comparison functionality is to support literature reviews and make them more efficient. To live up to this aim, the knowledge graph should contain more data. As described in Section 2, structured data is needed to perform an effective and accurate comparison. Currently, such a graph containing research contributions does not exist since most existing initiatives focus solely on document metadata. This is why the ORKG focuses on making the actual research contributions machine-readable. Although the amount of papers in the ORKG is growing, it is currently not sufficient for the comparison functionality to be used effectively. The evaluation results suggest that the comparison feature performs well and that users are satisfied with the usability. Additionally, they see the potential of the functionality. Thus, the technical infrastructure is in place for the related literature comparison but more data is needed for an extensive evaluation and real-world use.

In order to evaluate the usability of the interface, a user evaluation is arguably the most suitable method. In total there were only five participants for the user evaluation presented here. While this is not sufficient to draw any definitive conclusions, it helps to understand what users expect from such a system. The individually provided feedback is also helpful to guide further developments. One of the important outcomes of the evaluation is that all participants agreed on the usefulness of the feature. They saw the potential of conducting literature reviews with the ORKG instead of doing it entirely manually.

Future work will focus on a more extensive evaluation of the individual components of the system. This includes the merging of properties and the similarity functionality. In order to perform such an evaluation, more data should be added to the ORKG. Given that automated related literature comparison is one of the many advantages of structured scholarly knowledge, more functionalities leveraging this structured data will be developed. An example is faceted search, which provides an alternative to the full-text search commonly used to find related literature.

7 CONCLUSION
The presented workflow shows how research contributions in a graph database can be compared, which answers our first research question. The workflow consists of four steps in which comparison candidates are selected, related statements are fetched, properties are mapped and finally the comparison is visualized. We presented, evaluated and discussed an implementation of the workflow in the ORKG. The implementation answers our second research question by showing how the comparisons can be effectively visualized in a user interface. The performance evaluation results show that the system scales well. The user evaluation indicates that users see the potential of a related literature comparison functionality, and that the current implementation is user-friendly.
ACKNOWLEDGMENTS
This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and the TIB Leibniz Information Centre for Science and Technology. The authors would like to thank the participants of the user evaluation.

REFERENCES
[1] Samur Araujo, Jan Hidders, Daniel Schwabe, and Arjen P. De Vries. 2011. SERIMI - Resource description similarity, RDF instance matching and interlinking. CEUR Workshop Proceedings 814 (2011), 246–247.
[2] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching Word Vectors with Subword Information. Transactions of the Association for Computational Linguistics 5 (2017), 135–146. https://doi.org/10.1162/tacl_a_00051
[3] John Brooke et al. 1996. SUS - A quick and dirty usability scale. Usability Evaluation in Industry 189, 194 (1996), 4–7.
[4] Alexandru Constantin, Silvio Peroni, Steve Pettifer, David Shotton, and Fabio Vitali. 2016. The Document Components Ontology (DoCO). Semantic Web 7, 2 (2016), 167–181. https://doi.org/10.3233/SW-150177
[5] Tim Donovan, Lambert M. Felix, James D. Chalmers, Stephen J. Milan, Alexander G. Mathioudakis, and Sally Spencer. 2018. Continuous versus intermittent antibiotics for bronchiectasis. Cochrane Database of Systematic Reviews 2018, 6 (2018), 114–123. https://doi.org/10.1002/14651858.CD012733.pub2
[6] Aldo Gangemi, Silvio Peroni, David Shotton, and Fabio Vitali. 2017. The Publishing Workflow Ontology (PWO). Semantic Web 8, 5 (2017), 703–718. https://doi.org/10.3233/SW-160230
[7] Dagmar Gromann and Thierry Declerck. 2019. Comparing pretrained multilingual word embeddings on an ontology alignment task. In LREC 2018 - 11th International Conference on Language Resources and Evaluation (2019), 230–236.
[8] Alexander Hars. 2001. Designing Scientific Knowledge Infrastructures: The Contribution of Epistemology. Information Systems Frontiers 3, 1 (2001), 63–73. https://doi.org/10.1023/A:1011401704862
[9] Mohamad Yaser Jaradeh, Allard Oelen, Manuel Prinz, Jennifer D'Souza, Gábor Kismihók, Markus Stocker, and Sören Auer. 2019. Open Research Knowledge Graph: Next Generation Infrastructure for Semantic Scholarly Knowledge (in press). In Proceedings of the 10th International Conference on Knowledge Capture (K-CAP '19). ACM. https://doi.org/10.1145/3360901.3364435
[10] Vladimir I. Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. In Soviet Physics Doklady, Vol. 10. 707–710.
[11] Pierre Maillot and Carlos Bobed. 2019. Measuring Structural Similarity Between RDF Graphs. (2019). HAL Id: hal-01940449.
[12] Carme Pinya Medina and Maria Rosa Rosselló Ramon. 2015. Using TF-IDF to Determine Word Relevance in Document Queries. New Educational Review 42, 4 (2015), 40–51. https://doi.org/10.15804/tner.2015.42.4.03
[13] Silvio Peroni and David Shotton. 2012. FaBiO and CiTO: Ontologies for describing bibliographic resources and citations. Journal of Web Semantics 17 (2012), 33–43. https://doi.org/10.1016/j.websem.2012.08.001
[14] Silvio Peroni and David Shotton. 2018. The SPAR ontologies. In International Semantic Web Conference. Springer, 119–136.
[15] Alina Petrova, Evgeny Sherkhonov, Bernardo Cuenca Grau, and Ian Horrocks. 2017. Entity comparison in RDF graphs. Lecture Notes in Computer Science 10587 LNCS (2017), 526–541. https://doi.org/10.1007/978-3-319-68288-4_31
[16] Almudena Ruiz Iniesta and Oscar Corcho. 2014. A review of ontologies for describing scholarly and scientific documents. In 4th Workshop on Semantic Publishing (SePublica) (CEUR Workshop Proceedings). http://ceur-ws.org/Vol-1155#paper-07
[17] Pavel Shvaiko and Jérôme Euzenat. 2013. Ontology matching: State of the art and future challenges. IEEE Transactions on Knowledge and Data Engineering 25, 1 (2013), 158–176. https://doi.org/10.1109/TKDE.2011.253
[18] William E. Winkler. 1990. String Comparator Metrics and Enhanced Decision Rules in the Fellegi-Sunter Model of Record Linkage. Proceedings of the Section on Survey Research, American Statistical Association (1990), 354–359. https://doi.org/10.1007/978-1-4612-2856-1_101
[19] Paweł Ziemba, Jarosław Jankowski, and Jarosław Wątróbski. 2017. Online comparison system with certain and uncertain criteria based on multi-criteria decision analysis method. In International Conference on Computational Collective Intelligence. Springer, 579–589.