Optimizing Semantic Enrichment of Biomedical Content through Knowledge Sharing

Asim Abbas1, Steve Mbouadeu1, Avinash Bisram1, Nadeem Iqbal1, Fazel Keshtkar1 and Syed Ahmad Chan Bukhari1,∗
1 Division of Computer Science, Math & Science, Collins College of Professional Studies, St. John's University, Queens, NY, USA

Abstract
Each day a vast amount of unstructured content is generated in the biomedical domain from sources such as clinical notes, research articles, and medical reports. Such content contains a wealth of meaningful information that needs to be converted into actionable knowledge for secondary use. However, accessing precise biomedical content is challenging because of content heterogeneity, missing and imprecise metadata, and the unavailability of the associated semantic tags required for search engine optimization. We introduce a socio-technical semantic annotation optimization approach that enhances the semantic search of biomedical content. The proposed approach consists of a layered architecture. The first layer (Preliminary Semantic Enrichment) annotates biomedical content with ontological concepts from NCBO BioPortal. With the growing body of biomedical information, the semantic annotations suggested by NCBO BioPortal are not always correct. Therefore, in the second layer (Optimizing the Enriched Semantic Information), we introduce a knowledge-sharing scheme through which authors can request recommendations from other users to optimize the semantic enrichment process. To gauge the credibility of a human recommender, our system records the recommender's confidence score, collects community voting on previous recommendations, stores the percentage of correctly suggested annotations, and translates these into an index used later to connect authors with the right users for suggestions that optimize the semantic enrichment of biomedical content.
At the preliminary annotation layer based on NCBO, we analyzed an n-gram strategy for biomedical word boundary identification. We found that NCBO recognizes biomedical terms at n-gram-1 far more often than at n-gram-2 through n-gram-5. Similarly, a statistical analysis was conducted on the significant features using the Wilson score and data normalization. The proposed methodology achieves a suitable accuracy of ≈90% for the semantic optimization approach.

Keywords
Structured data, Biomedical semantic enrichment, Annotation optimization, Recommendation

4th Edition of Knowledge-aware and Conversational Recommender Systems (KaRS) Workshop @ RecSys 2022, September 18–23, 2022, Seattle, WA, USA.
∗ Corresponding author.
† These authors contributed equally.
abbasa@stjohns.edu (A. Abbas); steve.mbouadeu19@stjohns.edu (S. Mbouadeu); avinash.bisram19@stjohns.edu (A. Bisram); iqbaln@stjohns.edu (N. Iqbal); keshtkaf@stjohns.edu (F. Keshtkar); bukharis@stjohns.edu (S. A. C. Bukhari)
https://www.linkedin.com/in/asim-abbas-b2891ab8/ (A. Abbas); https://bukharilab.org (S. Mbouadeu, A. Bisram, N. Iqbal, F. Keshtkar, S. A. C. Bukhari)
ORCID: 0000-0001-6374-0397 (A. Abbas); 0000-0002-6517-5261 (S. A. C. Bukhari)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.

1. Introduction

Over the last few decades, a huge volume of digital unstructured textual content has been generated in biomedical research and practice, including content types such as scientific papers, medical reports, and physician notes. This explosive growth in the biomedical domain has introduced several access-level challenges for researchers and practitioners. This valuable information is available in web content but remains opaque to information retrieval and knowledge extraction search engines because of missing machine-interpretable metadata (semantic annotations) [1]. Search engines require metadata to properly index content in a context-aware fashion for the precise search of biomedical literature and to foster secondary activities such as automatic integration for meta-analysis [2]. Incorporating machine-interpretable semantic annotations at the pre-publication stage (while first drafting) of biomedical content, and preserving them during online publishing, is desirable and would be a great addition to the broader semantic web vision [3]. However, both of these processes are complex and require deep technical and/or domain knowledge. Therefore, a state-of-the-art, freely accessible biomedical semantic content authoring framework would be a game-changer.

The main components of the semantic annotation process are ontologies, which are sets of machine-readable controlled vocabularies that provide the "explicit specification of a conceptualization" of a domain. Similarly, semantic annotators are designed to facilitate tagging/annotating content with related ontology concepts from pre-defined terminologies in a manual, automatic, or hybrid way [4]. As a result, users produce semantically richer content when compared with traditional composing processes, e.g., using a word processor [5]. Owing to the significance of the semantic annotation process in biomedical informatics research and retrieval, the scientific community has invested considerable resources in the development of semantic annotators. Biomedical annotators predominantly use term-to-concept matching with or without machine learning-based methods [5]. Likewise, biomedical annotators such as NOBLE Coder [6], ConceptMapper [7], Neji [8], and the Open Biomedical Annotator [9] use machine learning and annotate text at an acceptable processing speed. However, they lack a strong disambiguation capacity, i.e., the ability to identify the correct biomedical concept for a given piece of text among several candidate concepts. Whereas the NCBO Annotator [10] and MGrep services are quite slow, the RysannMD annotator claims to balance speed and accuracy in the annotation process; on the flip side, its knowledge base is limited to certain ontologies available in UMLS (Unified Medical Language System) and does not provide full coverage of all biomedical sub-domains [11].

Beyond the technical challenges stated above, one of the main reasons why semantic authoring is still in its infancy, and why researchers have not been able to achieve the desired objectives, is that researchers did not realize the importance of involving the original content creator (the author) and focused heavily on technological sophistication, where system interactions were limited to technical persons. Typically, only the author knows why they used a particular term to explain a concept; third-party developers are naturally not privy to such tacit knowledge. Researchers and practitioners face access-level issues due to the dissonance between those who authored the original work and those who added the semantic annotations and published it. The majority of authors lack technical and/or domain knowledge, and there is a steep learning curve that necessitates substantial time to develop critical skills that are not part of most authors' primary job.

To overcome the aforementioned challenges, we propose a semantic annotation optimization approach that adopts a knowledge-sharing strategy and presents a framework through which users can seek and provide suggestions to optimize annotation quality. Our system keeps track of the recommender's confidence score, gathers community feedback on prior recommendations, stores the percentage of correctly suggested annotations, and translates these into an index used later to connect the appropriate users for suggestions that optimize the semantic enrichment of biomedical content. The rest of the paper is organized as follows. The proposed methodology section covers implementation details of the preliminary semantic annotation, the semantic annotation optimization, and an example scenario in an annotation optimization environment. Subsequently, the results and discussion cover the dataset used, the evaluation methodology, and the results achieved at the system level. The conclusion section summarizes the system's workings and future plans.

2. Proposed Methodology

This section presents the biomedical semantic annotation recommendation and optimization processes. We developed a system through which users access a biomedical content authoring interface, analogous to the MS Word editor, to type or import biomedical content for semantic enrichment. The system generates the first layer of semantic annotation using the NCBO BioPortal API [10] (Figure 1(a)). However, the correctness of the acquired semantic annotations varies, as one annotation may be available in multiple ontologies. Furthermore, the linguistic mapping mechanism of the BioPortal recommender often ignores sentence- and paragraph-level context. Therefore, the suggested annotations might be correct at the content level yet entirely incorrect contextually in a particular setting; only the original author knows in which context they used a specific concept. Therefore, a state-of-the-art knowledge-sharing approach is designed that allows the author to query peers for a more specific semantic annotation of a biomedical term, optimizing annotation quality. In the following sections, we explain 1) Preliminary Semantic Enrichment, 2) Optimizing Semantic Enrichment, and an Example Scenario in an annotation optimization environment (Figure 1). Additionally, in the Example Scenario below, we use author for the role that posted a query, 𝐸𝑖 = 𝑒1, 𝑒2, 𝑒3 … 𝑒𝑛 for responders or experts, and 𝑈𝑖 = 𝑢1, 𝑢2, 𝑢3 … 𝑢𝑛 for community users.

Figure 1: Proposed Methodology of Biomedical Content Semantic Optimization

2.1. Preliminary Semantic Enrichment

A biomedical annotator is an essential component of semantic annotation or enrichment [12]. Available biomedical annotators use publicly available biomedical ontologies, such as BioPortal [10] and UMLS [4], to help biomedical community researchers structure and annotate their data with ontology concepts for better information retrieval and indexing. However, the semantic annotation and enhancement process is tedious and requires expert curators. With our system, we automate the semantic annotation assignment process. For that, we utilize the NCBO BioPortal web-service resources [10], which analyze raw textual content and tag it with relevant biomedical ontology concepts. By pressing the "Annotate" button, users can generate a preliminary level of annotations without any technical knowledge. Initially, authors can either import pre-existing content from research papers, clinical notes, and biomedical reports or start typing directly in the semantic text editor (see Figure 1(a)). Our system accepts the user's free text and feeds it forward as input to a concept recognition engine. The engine identifies relevant ontologies, acronyms, definitions, and ontology links for the individual terminologies that best match based on context, following a string matching approach. This semantic information is displayed in our system's annotation panel for human interpretation and understanding (Figure 1(a)). Authors may alter the generated semantic information based on their knowledge and experience, such as choosing an appropriate ontology from the list, selecting suitable acronyms, removing semantic information, or annotating an explicit terminology. Users without a technical background can easily navigate a simplified interface, while more sophisticated users may use advanced options to further control the semantic annotation and authoring process.

2.2. Seeking Annotation Recommendation

Following the initial-level semantic annotation, the author can request a recommendation from peers for a correct, high-quality annotation through the seeking-help module (Figure 1(b)). The author selects, in the preliminary annotation interface, the biomedical term whose annotation should be corrected by peer review. Additionally, the author is provided with an interface to compose the query smoothly, with options such as a drop-down menu of recommended queries. Similarly, the author can explain the query and provide evidence and links to better convey it to the experts 𝐸𝑖 or peer reviewers. Finally, when the author submits the query, it is posted on the "Semantically Knowledge Cafe" forum for peer responses, and a notification is sent to the community users, as shown in Figure 1(c).

2.3. Optimizing the Enriched Semantic Information

To optimize the newly harvested annotations through the knowledge-sharing process, authors select an existing annotation and then click the seek-help option in the panel. A pop-up appears with a drop-down of question sets that authors may ask; for example, if authors want to know whether a particular preliminary annotation or ontology is correct, they can select that question and fill in the required information. Similarly, authors can seek peer help by posting a question. All posted questions go to the "Semantically Knowledge Cafe", a forum-style virtual social place where users ask questions and seek help regarding their annotation improvement. As soon as authors receive a response from the crowd, they are notified, and all suggestions are displayed with the option to accept or reject. The authors then choose a particular suggestion based on social indexing: our system calculates a social index and displays the suggestions in descending order of index score. To gauge the credibility of a human recommender, our system records the recommender's confidence score, collects community voting on previous recommendations, stores the percentage of correctly suggested annotations, and translates these into an index used later to connect the right users for suggestions that optimize the semantic enrichment of biomedical content. All process information is stored in the backend knowledge base.

Consider an author who needs to find, through peer review, the correct ontology annotation for the biomedical term "worsening shortness of breath", as shown in Figure 2. The author posts a query on the "Semantically Knowledge Cafe" forum such as "Which ontology should I use for the medical content 'worsening shortness of breath'?"
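The preliminary enrichment layer described in Section 2.1 rests on a call to the NCBO BioPortal Annotator web service. As a rough Python sketch (illustrative only, not the system's actual implementation): the endpoint and parameters follow the public BioPortal REST API, the API key is a placeholder the reader must supply, and the parsing assumes the Annotator's standard JSON response shape (`annotatedClass`, `annotations`).

```python
"""Sketch of the first enrichment layer: send free text to the NCBO
BioPortal Annotator and collect (matched text, concept IRI) pairs."""
import json
import urllib.parse
import urllib.request

ANNOTATOR_URL = "https://data.bioontology.org/annotator"

def build_request(text: str, api_key: str, longest_only: bool = True) -> str:
    """Compose the Annotator query URL for a piece of biomedical free text."""
    params = {
        "text": text,
        "apikey": api_key,
        "longest_only": str(longest_only).lower(),  # favor multi-word matches
    }
    return ANNOTATOR_URL + "?" + urllib.parse.urlencode(params)

def parse_annotations(payload: list) -> list:
    """Flatten the Annotator JSON into (matched text, concept IRI) pairs."""
    pairs = []
    for item in payload:
        concept_iri = item["annotatedClass"]["@id"]
        for ann in item["annotations"]:
            pairs.append((ann["text"], concept_iri))
    return pairs

def annotate(text: str, api_key: str) -> list:
    """Call the live endpoint (requires a valid BioPortal API key)."""
    with urllib.request.urlopen(build_request(text, api_key)) as resp:
        return parse_annotations(json.load(resp))
```

A caller would then run something like `annotate("worsening shortness of breath", api_key="...")` and surface the returned pairs in the annotation panel.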
The author then receives replies from fellow users, or experts 𝐸𝑖. We categorize users who reply as expert users 𝐸𝑖 with a "No. of Reply-posts" count, and the correct annotation each suggested for the required biomedical content is the "Expert Annotation" (see Figure 2). In the study, three expert users participated, suggesting the annotations "RCD", "UPHENO", and "NCIT". We also asked the experts to provide their confidence scores, which they recorded as 4, 6, and 7 on a scale from 1 to 10. The community users/crowd 𝑈𝑖 at the "Semantically Knowledge Cafe" can observe the suggested recommendations and record their up- and down-votes on a particular suggestion. From the users 𝑈𝑖, we recorded upvotes of (9, 10, 11) and downvotes of (9, 8, 7) on the expert-recommended annotations. Whenever the author accepts a recommended annotation from an expert 𝐸𝑖, a credibility score is recorded. We used the Wilson score confidence interval for a Bernoulli parameter to normalize and aggregate the recorded scores; see Equ. (1):

\[ \mathit{Wilson\ score} = \frac{\hat{p} + \frac{z_{\alpha/2}^{2}}{2n} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p}) + \frac{z_{\alpha/2}^{2}}{4n}}{n}}}{1 + \frac{z_{\alpha/2}^{2}}{n}} \tag{1} \]

where

\[ \hat{p} = \left( \sum_{n=1}^{N} +V \right) / \, n \tag{2} \]

\[ n = \sum_{i=0}^{N}\sum_{j=0}^{M} \left( +V_i, -V_j \right) \tag{3} \]

and \(z_{\alpha/2}\) is the \((1-\frac{\alpha}{2})\) quantile of the standard normal distribution.

In Equ. (1), \(\hat{p}\) is the sum of the upvotes (\(+V\)) given by community users 𝑈𝑖 to the expert's 𝐸𝑖 response to an author's post for a correct annotation, divided by the overall number of votes (\(+V, -V\)); see Equ. (2). Likewise, \(n\) is the total number of upvotes and downvotes (\(+V, -V\)); see Equ. (3). \(\alpha\) refers to the statistical confidence level: we pick 0.95 to have a 95% chance that our lower bound is correct. The z-score in this function is therefore fixed.

Likewise, a data normalization formula (Equ. (4)) is applied to each expert's 𝐸𝑖 confidence score and the author credibility score to scale the values between 0 and 1:

\[ z_i = \frac{x_i - \min(x)}{\max(x) - \min(x)} \cdot Q \tag{4} \]

where \(z_i\) is the i-th normalized value in the dataset and \(x_i\) is the i-th value in the dataset, e.g., the user confidence score. Similarly, \(\min(x)\) is the minimum value in the dataset (the minimum value between 1 and 10 is 1, so \(\min(x) = 1\)) and \(\max(x)\) is the maximum value (the maximum value between 1 and 10 is 10, so \(\max(x) = 10\)). Consequently, a mean \(\hat{x} = \frac{1}{N}\sum_{i=0}^{N} x_i\) is applied to the Wilson score, the normalized self-confidence score, and the author credibility score of each expert 𝐸𝑖 who suggested an annotation, yielding the "aggregate scores" (0.458, 0.381, 0.518). Finally, an \(\mathrm{argmax}(x_i)\) function is applied to the aggregate scores to obtain the maximum score earned by an expert's 𝐸𝑖 annotation, which is 0.518. Eventually, the highest-ranking annotation, "NCIT" with "Reply-post = 3", is recommended to the author for the biomedical content "worsening shortness of breath"; see Figure 2. The same process applies to another biomedical content, "Acute Flaccid Myelitis", though the scenario or query may change.

Figure 2: A statistical process of the semantic annotation optimization approach.

3. Results and Discussion

Thirty people participated in our proposed model. We recruited participants via social media requests asking them to take part in the study. Most participants were graduate-level students with computer and biological science backgrounds. Accordingly, we considered a set of 30 articles from pubmed.org [13] and randomly distributed them to the participants. Similarly, we provided a user manual for the system along with a pre-recorded video about system usage. Afterward, we asked each participant to generate queries on the "Semantically Knowledge Cafe" about the biomedical content annotations for which they would like to seek social help. Collectively, our participants posted 140 questions to the system. All participants also recorded their confidence in the suggestions they received, as a satisfaction score between 1 and 10. Consequently, our system recorded 421 responses from expert users against the 140 questions. Similarly, 2929 upvotes and 3149 downvotes were recorded against the suggested annotations. Table 1 summarizes the participants and their responses.

Table 1
Datasets utilized for experimental purposes

Title | Numbers
No. of Participants | 30
No. of Documents | 30
No. of Posts | 140
No. of Responses | 421
No. of Upvotes | 2929
No. of Downvotes | 3149

3.1. Performance Measurement: Preliminary Semantic Enrichment

After obtaining the initial-level semantic information from NCBO BioPortal, we analyzed the content following the n-gram strategy, which is crucial for the biomedical word or concept boundary detection process. A set of 30 pubmed.org [13] articles was processed at the initial level, obtaining annotated biomedical terms up to n-gram-5. On scrutiny, we found that the proposed annotation system identifies biomedical terms of n-gram-1 far more often than n-gram-2 through n-gram-5, and very few biomedical terms of n-gram-5 are identified (see Figure 3). However, biomedical terms with n-gram > 1 deliver more meaningful and contextually coherent information to the user. For example, "blood pressure is high", "he has coronary artery disease", and "liver function test is normal" are more meaningful than single terms such as "pressure", "blood", or "coronary". As the n-gram word size increases, the accuracy of composite terms decreases, as shown in Figure 3, because the proposed system employs exact word matching against the terminology (BioPortal): the primary characteristic of the exact word matching approach is that a single word matches more accurately than a combined or compound word.

Figure 3: Initial-level semantic annotation performance for biomedical terms under the n-gram strategy.

3.2. Performance Measurement: Knowledge-sharing based Semantic Enrichment Optimization

A domain expert from academia at the professor level was engaged to evaluate these results manually based on their knowledge and experience. We then calculated the system-level accuracy for semantic annotation before and after the socio-technical semantic annotation optimization approach (Figure 4).
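Taken together, Equations (1)–(4) and the final argmax amount to a small scoring pipeline. The Python sketch below implements the Wilson lower bound at 95% confidence, min-max normalization of a 1–10 confidence score, and an argmax over aggregated scores. The exact aggregation weights are not spelled out in the text, so a plain arithmetic mean of the two quantities is assumed here; the resulting numbers therefore differ from the aggregate scores reported above, although in this run the third expert still ranks first.

```python
"""Sketch of the Equ. (1)-(4) scoring pipeline with an assumed mean aggregation."""
import math

Z = 1.96  # fixed z-score for a 95% confidence level

def wilson_lower(upvotes: int, downvotes: int, z: float = Z) -> float:
    """Lower bound of the Wilson score interval for a Bernoulli parameter (Equ. 1-3)."""
    n = upvotes + downvotes
    if n == 0:
        return 0.0
    p_hat = upvotes / n                                   # Equ. (2)
    centre = p_hat + z * z / (2 * n)
    margin = z * math.sqrt((p_hat * (1 - p_hat) + z * z / (4 * n)) / n)
    return (centre - margin) / (1 + z * z / n)

def min_max(x: float, lo: float = 1.0, hi: float = 10.0) -> float:
    """Normalize a 1-10 confidence score into [0, 1] (Equ. 4 with Q = 1)."""
    return (x - lo) / (hi - lo)

def rank_experts(votes, confidences):
    """Aggregate per expert (assumed: arithmetic mean) and pick the best index."""
    scores = [
        (wilson_lower(up, down) + min_max(conf)) / 2
        for (up, down), conf in zip(votes, confidences)
    ]
    return scores, max(range(len(scores)), key=scores.__getitem__)

# Vote and confidence figures from the "worsening shortness of breath" example
scores, best = rank_experts([(9, 9), (10, 8), (11, 7)], [4, 6, 7])
```

With these inputs the argmax selects index 2, i.e., the third expert, matching the recommendation of "NCIT" in the example scenario.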
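The n-gram boundary analysis of Section 3.1 enumerates candidate term windows of one to five tokens before they are matched against the terminology. A minimal sketch, assuming simple whitespace tokenization (the paper does not specify its tokenizer):

```python
"""Sketch of n-gram candidate generation for biomedical word-boundary
analysis: every contiguous window of 1..max_n tokens is a candidate term."""

def candidate_ngrams(text: str, max_n: int = 5):
    """Yield (n, phrase) for every n-gram window, n = 1..max_n."""
    tokens = text.lower().split()
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            yield n, " ".join(tokens[i:i + n])

def counts_by_n(text: str, max_n: int = 5):
    """Count candidates per n-gram size; single tokens always dominate."""
    counts = {}
    for n, _ in candidate_ngrams(text, max_n):
        counts[n] = counts.get(n, 0) + 1
    return counts
```

This also makes the skew in Figure 3 unsurprising: a document always yields more 1-gram candidates than 5-gram candidates, and exact matching succeeds more often on the shorter windows.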
Document-level accuracy is recorded without and with the socio-technical approach. In Figure 4, the X-axis represents the 30 processed documents, the left Y-axis represents the accuracy without the socio-technical approach, and the right Y-axis represents the accuracy with the socio-technical approach. On scrutiny, the system with the socio-technical approach performed better at the document level than without it. With the socio-technical approach, a high accuracy of 90% was achieved for nine documents, a lower accuracy of 87% for three documents, and most documents achieved accuracy between 87% and 90% (see Figure 4). Without the socio-technical approach, a high accuracy of 73% was achieved for one document, a low accuracy of 65% for five documents, and most documents achieved accuracy in the range of 65% to 73%. Overall, the proposed socio-technical annotation optimization system remains the winner, obtaining higher precision for each document compared with the non-socio-technical setting.

Figure 4: System-level performance for socio-technical annotation recommendation.

3.3. Semantically Workspace: Semantic Annotation Optimization Demonstration

Initially, the author can import or write biomedical content in the editor and click the annotation button to get the preliminary annotation (see Figure 5). Words underlined in green are annotated terms; when the author selects a term, the underline color changes to pink and a "Need Help" option appears in the left-side panel (see Figure 5(a)). After clicking "Need Help", an interface opens where the author can write their query to the experts for a recommended annotation of the explicit terminology (Figure 5(b)). Additionally, the author is provided with primary options for a quick query. When the author clicks the "Submit" button, the query is posted on the "Semantically Knowledge Cafe" forum and a new-post notification is sent to the community users 𝑈𝑖, as shown in Figure 5(c). Whenever a user 𝑈𝑖 clicks on the "Semantically Knowledge Cafe", the newly posted query appears, as shown in Figure 5(d). If the user knows the answer to the posted query, they can click the "Answer" button to reply to the author's post (Figure 5(d)). The user then replies to the post and records a self-confidence score, and the role of this user is from then on considered that of a domain expert. Similarly, a smooth interface with the possible options is available to the expert for the reply post. After the expert replies to the author's post with a precise annotation, the other community users 𝑈𝑖 can up-vote or down-vote the expert's reply, as shown in Figure 5(e). Finally, a high-quality annotation recommendation notification is generated for the author by aggregating the Wilson score and the expert's self-confidence score, as shown in Figure 5(f). Whenever the author clicks the "New Recommendation" link, the high-quality expert-recommended annotation pops up (see Figure 5(g)). The author may now either accept or reject the recommended annotation; on acceptance, a credibility score between 1 and 5 is recorded to the author's profile, whereas on rejection no score is recorded. Likewise, by accepting the recommended annotation, the initial annotation for the specific terminology is replaced by the recommended one, completing the annotation optimization process (Figure 5(g)).

Figure 5: Semantic Annotation Optimization and Enrichment Demonstration Interfaces.

4. Conclusion

This research advances state-of-the-art biomedical semantic research and systems, enabling various biomedical users to author context-aware content with no prior technical skills needed. An out-of-the-box socio-technical semantic annotation optimization approach is presented to automate the semantic enrichment mechanism and discover precise semantic annotations while keeping the original content creator in the loop. The end user is provided with an authoring interface similar to the MS Word editor to type/write biomedical content. To obtain the preliminary semantic annotation or enrichment at the content level, we utilized BioPortal endpoint APIs and automated the configuration process for authors. Similarly, the semantic annotation optimization approach is designed so that the author can post a query for an optimized annotation recommendation. In our future work, we plan to expand the backend knowledge graph and apply graph neural networks. The semantic annotation optimization system is available at https://gosemantically.com.

Acknowledgments

This work is supported by the National Science Foundation grant ID: 2101350.

References

[1] A. Abbas, S. F. Mbouadeu, F. Keshtkar, J. DeBello, S. A. C. Bukhari, Biomedical scholarly article editing and sharing using holistic semantic uplifting approach, in: The International FLAIRS Conference Proceedings, volume 35, 2022.
[2] S. A. C. Bukhari, Semantic enrichment and similarity approximation for biomedical sequence images, Ph.D. thesis, University of New Brunswick (Canada), 2017.
[3] P. Warren, J. Davies, D. Brown, The semantic web - from vision to reality, ICT Futures: Delivering Pervasive, Real-time and Secure Services (2008) 55–66.
[4] A. Abbas, M. Afzal, J. Hussain, S. Lee, Meaningful information extraction from unstructured clinical documents, Proc. Asia Pac. Adv. Netw. 48 (2019) 42–47.
[5] K. Hasida, Semantic authoring and semantic computing, in: New Frontiers in Artificial Intelligence, Springer, 2003, pp. 137–149.
[6] E. Tseytlin, K. Mitchell, E. Legowski, J. Corrigan, G. Chavan, R. S. Jacobson, NOBLE - flexible concept recognition for large-scale biomedical natural language processing, BMC Bioinformatics 17 (2016) 1–15.
[7] C. Funk, W. Baumgartner, B. Garcia, C. Roeder, M. Bada, K. B. Cohen, L. E. Hunter, K. Verspoor, Large-scale biomedical concept recognition: an evaluation of current automatic annotators and their parameters, BMC Bioinformatics 15 (2014) 1–29.
[8] D. Campos, S. Matos, J. L. Oliveira, A modular framework for biomedical concept recognition, BMC Bioinformatics 14 (2013) 1–21.
[9] J. Jovanović, E. Bagheri, Semantic annotation in biomedicine: the current landscape, Journal of Biomedical Semantics 8 (2017) 1–18.
[10] C. Jonquet, N. Shah, C. Youn, C. Callendar, M.-A. Storey, M. Musen, NCBO Annotator: semantic annotation of biomedical data, in: International Semantic Web Conference, Poster and Demo Session, volume 110, Washington DC, USA, 2009.
[11] J. Cuzzola, J. Jovanović, E. Bagheri, RysannMD: a biomedical semantic annotator balancing speed and accuracy, Journal of Biomedical Informatics 71 (2017) 91–109.
[12] S. F. Mbouadeu, A. Abbas, F. Ahmed, F. Keshtkar, J. De Bello, S. A. C. Bukhari, Towards structured biomedical content authoring and publishing, in: 2022 IEEE 16th International Conference on Semantic Computing (ICSC), IEEE, 2022, pp. 175–176.
[13] PubMed, National Center for Biotechnology Information, 2022. https://pubmed.ncbi.nlm.nih.gov/.