<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Trending Topics on Science, a tensor memory hypothesis approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Felipe Torres</string-name>
          <email>felipe.torrese@sansano.usm.cl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Universidad Técnica Federico Santa María</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Current human knowledge is written down. Documenting is the most common way to preserve memories and to store fantastic stories. Thus, to distinguish reality from fiction, scientific writing cites previous works in addition to reporting experimental setups. Books and scientific papers are only a small part of the existing literature, but they are considered more trustworthy as information sources. Using the information about the authors and the keywords in titles and abstracts, it is possible to find more relations and to know where to focus the search on a topic. This can be done with relational databases or knowledge graphs, a semantic approach; but with the tensor memory hypothesis, which adds a temporal dimension, it is possible to process the information with an episodic memory approach. Although knowledge graphs are in extended use for question answering and chatbots, they need a relational schema generated beforehand, automatically or by hand, and stored in an easy-to-query file format. I use JATS, a standard format that allows integrating scientific papers in semantic searches but is not yet spread across all scientific publishers, to extract the markup tags from PDF files, current-year journal articles on one particular topic, and then construct the memory tensors with their references to extract relations and predictions with statistical relational learning techniques.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Memory is defined as the ability to record information and later recall it. Writing is a human
invention that facilitates this capacity, in particular for declarative memories, which are facts or events
that can be expressed with language and can be of two types: semantic or episodic
        <xref ref-type="bibr" rid="ref12 ref13">(Tresp et al.,
2017)</xref>
        .
      </p>
      <p>The memories and knowledge of humanity are stored in written documents, which gain
reliability when they include references to previous works by other authors. Scientific articles are
the model of well-structured presentation and storage of information, each with its
own title, explicit authorship, and references to related information in other documents or within
the same document. But what is almost always relevant when deciding whether to read them,
the retrieval action, is their publication year. Their ordered structure thus makes it possible to use
them as a representation of global human episodic knowledge and memories. Also, scientific
publication as a human activity can be modeled as a social network. From this kind of network
the expression “trending topic” emerged, naming the most frequent term or word used in a specific
temporal window; it is understood as the principal theme or main subject related to the
information described in a piece of content.</p>
      <p>
        In a mathematical and computational framework, semantic memories can be represented
as knowledge graphs, where the entities are nodes and the links are relations between them.
A relation between entities can then be defined as a triple (s, p, o) or as a simple sentence,
subject-predicate-object. An episodic memory adds a time marker, so a temporal
prepositional phrase is added to the simple sentence, subject-predicate-object-temporal_preposition,
or a quad (s, p, o, t). This approach is widely used in semantic web technologies under the Linked
Data methodology
        <xref ref-type="bibr" rid="ref2">(Bizer et al., 2011)</xref>
        .
      </p>
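      <p>As an illustration, a minimal Python sketch (with hypothetical entity names and toy facts, not data from this work) of how such quads can be stored and queried as an unweighted episodic memory:</p>
      <preformat>
# Episodic facts as (subject, predicate, object, time) quads in a set,
# so that membership queries mirror an unweighted characteristic function.
quads = {
    ("author_a", "writes", "paper_1", 2016),
    ("paper_1", "cites", "paper_0", 2016),
    ("paper_1", "has_word", "sleep", 2016),
}

def holds(s, p, o, t):
    """Return 1.0 if the quad is a known fact, else 0.0."""
    return 1.0 if (s, p, o, t) in quads else 0.0

print(holds("paper_1", "has_word", "sleep", 2016))  # 1.0
      </preformat>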
      <p>
        Thus, it is plausible to use complex network analysis tools to search for the most relevant
relations between authors, paper titles, or keywords. The scientific publication databases can easily
contain millions of authors, papers, and their respective citations. A reduced number of relevant
documents is expected from a specific topic query, not the thousands of results that search engines
like Google Scholar or publishers’ own engines can generate for a given chain of words. The field
of science of science studies these relations, and the earliest works were done using knowledge
graphs, which are expressed as adjacency matrices. If the temporal dimension and various types
of relationships are considered, then it is possible to form tensors of fourth order. A matrix <inline-formula><tex-math>X</tex-math></inline-formula>
of the network can be bipartite (<inline-formula><tex-math>X \in \mathbb{R}^{n \times m}</tex-math></inline-formula>) if there are two types of nodes (authors-articles,
authors-words, articles-words) or monopartite (<inline-formula><tex-math>X \in \mathbb{R}^{n \times n}</tex-math></inline-formula>); unweighted (<inline-formula><tex-math>x_{ij} \in \{0, 1\}</tex-math></inline-formula>) or weighted
(<inline-formula><tex-math>x_{ij} \in \mathbb{R}</tex-math></inline-formula>); directed or undirected (<inline-formula><tex-math>X^{T} = X</tex-math></inline-formula>)
        <xref ref-type="bibr" rid="ref15">(Zeng et al., 2017)</xref>
        .
      </p>
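      <p>A short numpy sketch of these representations, using toy sizes and hypothetical links rather than the actual database, makes the matrix and tensor forms concrete:</p>
      <preformat>
import numpy as np

# Bipartite articles-words network, one slice per year: a third-order tensor.
n_articles, n_words, n_years = 4, 6, 3
X = np.zeros((n_articles, n_words, n_years))
X[0, 2, 1] += 1.0          # article 0 used word 2 in year 1 (weighted, directed)
X[0, 2, 1] += 1.0          # repeated use increases the weight

binary = (X > 0).astype(int)   # the unweighted view, x_ij in {0, 1}

# A monopartite, undirected slice (e.g. co-authorship): symmetric matrix.
A = np.zeros((5, 5))
A[1, 3] = A[3, 1] = 1.0
print(np.allclose(A, A.T))     # True, so the network is undirected
      </preformat>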
      <p>
        <xref ref-type="bibr" rid="ref12 ref13">(Tresp and Ma, 2017)</xref>
        introduced the Tensor Memory Hypothesis, where a knowledge graph is
represented by a Tucker decomposition of the tensors. It is based on representational learning, i.e.,
a discrete entity <inline-formula><tex-math>e</tex-math></inline-formula> is associated with a vector of real numbers <inline-formula><tex-math>a_{e}</tex-math></inline-formula> called latent variables.
        <xref ref-type="bibr" rid="ref12 ref13">(Tresp and
Ma, 2017)</xref>
        also argue that representational learning might also be the basis for perception, planning,
and decision making. From a physiological point of view, there is evidence that the hippocampus
plays a central role in the temporal organization of memories and supports the disambiguation
of overlapping episodes
        <xref ref-type="bibr" rid="ref5">(Eichenbaum, 2014a)</xref>
        . In the standard consolidation theory of memory
(SCT), episodic memory is a neocortical representation that arises from hippocampal
activity, while in the multiple trace theory (MTT), episodic memory is represented only in the
hippocampus and is used to form semantic memories in the neocortex. Also, there is evidence of
the existence of “place cells” and “time cells” in the hippocampus, and that these support associative
networks that represent spatiotemporal relations between the entities of memories
        <xref ref-type="bibr" rid="ref5">(Eichenbaum,
2014b)</xref>
        .
      </p>
    </sec>
    <sec id="sec-2">
      <title>Results</title>
      <p>The number of latent components is not associated with a specific statistical measure of the data.
However, as a point of reference, table 1 presents the corresponding percentage of variance explained if the
same number of PCA components were employed.
</p>
      <table-wrap id="tab2">
        <label>Table 2</label>
        <table>
          <tbody>
            <tr><td>stimulation, sleep</td><td>sleep, memory</td></tr>
            <tr><td>sleep</td><td>sleep</td></tr>
            <tr><td>sleep, stimulation</td><td>sleep, memory</td></tr>
            <tr><td>brain, consolidation</td><td>sleep, memory</td></tr>
            <tr><td>oscillations, sleep</td><td>sleep, memory</td></tr>
            <tr><td>activity, memory</td><td>sleep, memory</td></tr>
            <tr><td>oscillations, humans</td><td>sleep, memory</td></tr>
            <tr><td>reactivation, slow-wave</td><td>sleep, memory</td></tr>
            <tr><td>sleep, brain</td><td>sleep, memory</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <table-wrap id="tab3">
        <label>Table 3</label>
        <table>
          <thead>
            <tr><th /><th /><th>Words</th></tr>
          </thead>
          <tbody>
            <tr><td>neuromodulation</td><td>neuromodulation</td><td>neuromodulation</td></tr>
            <tr><td>stimulus, presented</td><td>stimulus, presented</td><td>stimulus, technique</td></tr>
            <tr><td>presented</td><td>presented</td><td>presented</td></tr>
            <tr><td>sleep, memory</td><td>sleep</td><td>sleep</td></tr>
            <tr><td>stimulus, memory</td><td>stimulus, cued</td><td>stimulus, cued</td></tr>
            <tr><td>memory, sws</td><td>memory, spatial, sws</td><td>memory, sws</td></tr>
            <tr><td>sleep, stimulus</td><td>sleep, stimulus</td><td>sleep, stimulus</td></tr>
            <tr><td>assr, memory</td><td>assr, memory</td><td>assr, memory</td></tr>
            <tr><td>wireless, monitoring</td><td>sleep, slow</td><td>sleep, slow</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>The words with the most relations in the complete tensor, before decomposition, are sleep,
memory, stimulation, slow, brain, consolidation, auditory, spindles, reactivation, and
activity. Table 2 is populated by selecting the most frequent word from queries of
the type</p>
      <p>
        <disp-formula id="eq1"><tex-math>\mathrm{word}_{i} = \operatorname{argmax}_{o} \{ P(s, o, t) \}, \qquad (1)</tex-math></disp-formula>
where s is each author, paper title, or word in the database, o a word, t a year, and i is the index of an
entity.</p>
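      <p>A hedged sketch of this query in Python, assuming the reconstructed scores are available as a dense array P[s, o, t] (names and shapes here are illustrative only):</p>
      <preformat>
import numpy as np

# Toy reconstructed tensor of scores P(s, o, t).
rng = np.random.default_rng(0)
n_entities, n_words, n_years = 5, 8, 3
P = rng.random((n_entities, n_words, n_years))
words = ["word_%d" % j for j in range(n_words)]

# Equation (1): the most probable word o for each entity s and year t.
for s in range(n_entities):
    for t in range(n_years):
        i = int(np.argmax(P[s, :, t]))
        print(s, t, words[i])
      </preformat>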
      <p>The queries return more distinct most-probable words when more latent components are used than
when only a few latent variables are used. For example, there are 21 different words in the query results
obtained with 200 latent components. On the other hand, for few latent components, the results of queries
are only the words shown in table 2.</p>
      <p>Table 3 is populated using the NMF decomposition of the matrix collapsed on time, adding
the weights of each year. The most frequent words are selected as those that are maximum for
each topic, or k-row, of the matrix H of the decomposition. The same processing using the nsNMF
decomposition results in the words sleep and memory as the most probable in all cases.</p>
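      <p>A minimal sketch of this time-collapsed NMF step, assuming scikit-learn and toy data in place of the actual yearly matrices:</p>
      <preformat>
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
X_years = rng.random((3, 10, 12))   # (years, entities, words), toy data
X = X_years.sum(axis=0)             # collapse time by adding yearly weights

model = NMF(n_components=4, init="nndsvd", max_iter=500)
W = model.fit_transform(X)          # entities in the topic space
H = model.components_               # topics over words

# Most frequent word per topic: the argmax of each k-row of H.
print(H.argmax(axis=1))
      </preformat>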
      <p>The analysis of relationships between entities needs a distance metric. Each entity is
represented by latent vectors, so one metric choice could be the Euclidean distance; but given
this particular type of data, content from documents, the usual metric employed is the cosine
similarity. However, the use of distances in the original data space demands high computational
costs; the use of a reduced space alleviates the computational cost of calculating distances but
requires the previous high cost of the space transformation. Figure 1 is an example of the Euclidean
distance and cosine similarity extracted from the R tensor of the RESCAL factorization. The
difference between the years of the source papers and the years of the only-cited papers is most evident with
fewer latent components. Moreover, the similarity is greater, and thus the Euclidean distance smaller, between
the entities of the earlier years.</p>
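      <p>The two metrics can be compared on the latent year vectors with a few lines of Python; the vectors below are random stand-ins for the rows actually extracted from the factorization:</p>
      <preformat>
import numpy as np
from scipy.spatial.distance import cosine, euclidean

rng = np.random.default_rng(2)
years = {2008 + k: rng.random(25) for k in range(11)}   # 25 latent components

a, b = years[2008], years[2009]
print("cosine similarity:", 1.0 - cosine(a, b))  # scipy returns a distance
print("euclidean distance:", euclidean(a, b))
      </preformat>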
    </sec>
    <sec id="sec-3">
      <title>Discussion</title>
      <p>There are databases of scientific paper metadata, and it is also possible to extract an article’s metadata from
a specific journal or publisher. But in practice, it is usual to have a few references from a previous
search, and they come from different journals or publishers; so, to extract the metadata, I used
the JATS format, a semantic web standard format for scientific papers popularized by the National
Center for Biotechnology Information (NCBI). A more popular format is the Resource Description
Framework (RDF), and various scientific publishers are adopting it.</p>
      <p>The analysis of the statistical features of the tensor, without any other processing, can give
information about the most related entities, such as the most cited author, the most cited article, or the most used
word in each slice of time. However, employing a tensor decomposition technique allows the
use of a latent component space, where more information can be extracted given that the
relationships are expressed in fewer variables, thus clustering some properties of the data. This work
is an example of how, from a small sample of documents with a known relationship between
them (the topic was already known), some words that are not the most frequent can be
extracted and provide a new perspective on the topics covered in the documents. Figure 1 is
an example of extracted information that is not easy to visualize in the original space of the data.
The tensor memory hypothesis thus offers a framework that links computational memory
with the biological one; its capacities and defects still need to be explored. Curiously, the etymology of
“topic” comes from the Greek topos, or place, which, like memory, is another of the known
cognitive functions of the hippocampus.</p>
      <fig id="fig1">
        <caption>
          <p>Figure 1. Distance metrics on the latent space. A. Cosine similarity between years with 3,
25, and 200 latent components. B. Euclidean distance between years with 3, 25, and 200 latent
components.</p>
        </caption>
      </fig>
      <p>Finally, from the results obtained, it is evident
that sleep and memory are the most relevant words of the selected papers; these words and slow
are the few words that also result from queries with the RESCAL decomposition. The nsNMF
decomposition gives the same words for any number of components, so it is more robust to
changes in the number of components.</p>
    </sec>
    <sec id="sec-4">
      <title>Methods and Materials</title>
    </sec>
    <sec id="sec-5">
      <title>Data extraction</title>
      <p>
        The metadata of 11 articles from different publishers (Table 4), related to “Stimulation during
NREM sleep” and available as PDF files, was obtained using the software CERMINE
        <xref ref-type="bibr" rid="ref11">(Tkaczyk et al., 2015)</xref>
        and
stored in JATS format. Then, with a Python script, each article’s own title, authors, and abstract were extracted,
as well as the title and authors of the references within the time range 2008-2018. Later, the titles and
abstracts were tokenized and part-of-speech tagged, using the nltk library, to extract the adjectives and
nouns that are considered the principal terms of the articles. For de-duplicating authors, all names
were formatted as “(Last name) (First name initial.) (Middle name initial.)”. For de-duplication of titles
and words, all words were transformed to lowercase and special characters were eliminated.
      </p>
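      <p>A condensed sketch of this extraction step, assuming the punkt and averaged_perceptron_tagger resources of nltk are installed; the sentence and the helper function are illustrative, not the actual script:</p>
      <preformat>
import re
import nltk

text = "Closed-loop auditory stimulation enhances memory consolidation."
tagged = nltk.pos_tag(nltk.word_tokenize(text))

# Keep adjectives (JJ*) and nouns (NN*) as the principal terms,
# lowercased and stripped of special characters for de-duplication.
terms = [w.lower() for w, tag in tagged if tag.startswith(("JJ", "NN"))]
terms = [re.sub(r"[^a-z0-9-]", "", w) for w in terms]
print(terms)

def author_key(last, first, middle=""):
    """De-duplication format: '(Last name) (F.) (M.)'."""
    return " ".join([last] + [n[0] + "." for n in (first, middle) if n])
      </preformat>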
      <p>For each year, a square matrix of zeros <inline-formula><tex-math>X_{k} \in \mathbb{R}^{(n_{a}+n_{t}+n_{w}) \times (n_{a}+n_{t}+n_{w})}</tex-math></inline-formula> was populated with weighted and
directed values, using the following rules only for the relations corresponding to the k-th year. The scores are mapped to probabilities with the logistic function
        <disp-formula id="eq3"><tex-math>\mathrm{sig}(x) = \frac{1}{1 + e^{-x}}, \qquad (3)</tex-math></disp-formula>
and the characteristic function of a quad is
        <disp-formula id="eq6"><tex-math>\theta_{s,p,o,t} = f^{e}(a_{e_{s}}, a_{e_{p}}, a_{e_{o}}, a_{e_{t}}), \qquad (6)</tex-math></disp-formula>
        <disp-formula id="eq7"><tex-math>f^{e}(a_{e_{s}}, a_{e_{p}}, a_{e_{o}}, a_{e_{t}}) = \sum_{r_{1}=1}^{\tilde{r}} \sum_{r_{2}=1}^{\tilde{r}} \sum_{r_{3}=1}^{\tilde{r}} \sum_{r_{4}=1}^{\tilde{r}} a_{e_{s},r_{1}} a_{e_{p},r_{2}} a_{e_{o},r_{3}} a_{e_{t},r_{4}} \, g^{e}(r_{1}, r_{2}, r_{3}, r_{4}). \qquad (7)</tex-math></disp-formula>
      </p>
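      <p>Equations (3), (6), and (7) amount to one tensor contraction followed by the logistic function; a toy numpy version (random vectors and core, purely illustrative) is:</p>
      <preformat>
import numpy as np

r = 4                                  # number of latent components
rng = np.random.default_rng(3)
g = rng.random((r, r, r, r))           # fourth-order core tensor g^e
a_s, a_p, a_o, a_t = (rng.random(r) for _ in range(4))

# Equation (7): contract the four latent vectors through the core.
score = np.einsum("i,j,k,l,ijkl->", a_s, a_p, a_o, a_t, g)
prob = 1.0 / (1.0 + np.exp(-score))    # equation (3), sig(x)
print(prob)
      </preformat>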
      <p>The analysis of tensors, as with matrices, can be performed using a reduced form obtained by
factorization. One popular factorization method for tensors is the Tucker representation; however,
there are other matrix and tensor decomposition algorithms. Here, I used RESCAL; the
construction of the tensor with weighted values allows omitting the predicate dimension, so the
characteristic function becomes</p>
      <p>r r r
f e.aes ; aeo ; aet / = É É É aes;r1 aeo;r2 aet;r3 ge.r1; r2; r3/:
r1=1 r2=1 r3=1
(8)
(9)
(10)
(11)
(12)
(13)
(14)
(15)
(16)
Where Rk is a slice of the tensor R and for optimization a singular value decomposition of matrix A
is employed. P is the matrix such that diag.vec.P // = S, which can be constructed by rearranging
the diagonal entries of S via the inverse vectorization operator vecr*1. /
Then, for regularization, the Kronecker product of the diagonal matrix is employed.</p>
      <p>A = U V T ;</p>
      <p>S = ä
Sii =</p>
      <p>Sii</p>
      <p>Si2i + R
X ù W H;</p>
      <p>XHT
W } W W HHT + ;</p>
      <p>H } H W TWWTHX+ :</p>
      <sec id="sec-5-1">
        <title>Non-negative Matrix Factorization (NMF)</title>
        <p>This matrix factorization method finds two matrices W Ë Rnr and H Ë Rrm which multiplication
minimizes the Froebenius norm with the original matrix X Ë Rnm.</p>
        <p>
          The updates using the algorithm proposed by
          <xref ref-type="bibr" rid="ref8">(Lee and Seung, 2001)</xref>
          are:
Non-smooth Non-negative Matrix Factorization (nsNMF)
This decomposition is a modification of NMF proposed by
          <xref ref-type="bibr" rid="ref7">(Kang and Lin, 2018)</xref>
          .
        </p>
      </sec>
      <sec id="sec-5-2">
        <title>Where</title>
      </sec>
      <sec id="sec-5-3">
        <title>And using</title>
        <p>X ù W SH;
S = .1 * /I + 11T ;</p>
        <p>k
D =</p>
        <p>H m
É
j=1</p>
        <p>I</p>
        <p>Hi;j I;</p>
        <p>W = WhD*1S*1;
X ù W D*1S*1SDH:
(18)
(19)
(20)
(21)
(22)</p>
        <p>Finally, the matrix decomposition could be expressed as</p>
      </sec>
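      <p>A small sketch of the nsNMF smoothing matrix and its effect, under the definitions above (theta and the factor sizes are toy values):</p>
      <preformat>
import numpy as np

k, theta = 4, 0.5
I = np.eye(k)
S = (1.0 - theta) * I + (theta / k) * np.ones((k, k))   # equation (19)

rng = np.random.default_rng(5)
W, H = rng.random((10, k)), rng.random((k, 12))

# The smoothing matrix is absorbed between the factors: X ~ W S H.
# theta = 0 recovers plain NMF; larger theta enforces smoother bases
# and therefore sparser counterparts.
X_hat = W @ S @ H
print(X_hat.shape)
      </preformat>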
    </sec>
    <sec id="sec-6">
      <title>Funding</title>
      <p>This work was supported by Beca Doctorado Nacional Conicyt, Folio No 21180640.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name><surname>Alshareef</surname> <given-names>AM</given-names></string-name>,
          <string-name><surname>Alhamid</surname> <given-names>MF</given-names></string-name>,
          <string-name><surname>El Saddik</surname> <given-names>A</given-names></string-name>.
          <article-title>Recommending Scientific Collaboration Based on Topical, Authors and Venues Similarities</article-title>.
          <source>2018 IEEE International Conference on Information Reuse and Integration (IRI)</source>.
          <year>2018</year>; p. <fpage>55</fpage>-<lpage>61</lpage>.
          https://ieeexplore.ieee.org/document/8424687/, doi: 10.1109/IRI.2018.00016.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Bizer</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Heath</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Berners-Lee</surname>
            <given-names>T</given-names>
          </string-name>
          .
          <article-title>Linked data: The story so far</article-title>
          . In:
          <article-title>Semantic services, interoperability and web applications: emerging concepts</article-title>
          IGI Global;
          <year>2011</year>
          .p.
          <fpage>205</fpage>
          -
          <lpage>227</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Eichenbaum H.</surname>
          </string-name>
          <article-title>Memory on time</article-title>
          .
          <volume>10</volume>
          .1016/j.tics.
          <year>2012</year>
          .
          <volume>12</volume>
          .007.
          <string-name>
            <surname>Memory</surname>
          </string-name>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <source>Trends in Cognitive Sciences</source>
          .
          <year>2014</year>
          ;
          <volume>17</volume>
          (
          <issue>2</issue>
          ):
          <fpage>81</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name><surname>Eichenbaum</surname> <given-names>H</given-names></string-name>.
          <article-title>Time cells in the hippocampus: A new dimension for mapping memories</article-title>.
          <source>Nature Reviews Neuroscience</source>.
          <year>2014</year>; <volume>15</volume>(<issue>11</issue>): <fpage>732</fpage>-<lpage>744</lpage>.
          doi: 10.1038/nrn3827.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Griffiths</surname>
            <given-names>TL</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Steyvers</surname>
            <given-names>M.</given-names>
          </string-name>
          <article-title>Finding scientific topics</article-title>
          .
          <source>Proceedings of the National Academy of Sciences</source>
          .
          <year>2004</year>
          ;
          <volume>101</volume>
          (
          <issue>suppl 1</issue>
          ):
          <fpage>5228</fpage>
          -
          <lpage>5235</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name><surname>Kang</surname> <given-names>Y</given-names></string-name>,
          <string-name><surname>Lin</surname> <given-names>KP</given-names></string-name>.
          <article-title>Topic Diffusion Discovery based on Sparseness-constrained Non-negative Matrix Factorization</article-title>.
          <year>2018</year>; doi: 10.1109/IRI.2018.00021.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name><surname>Lee</surname> <given-names>D</given-names></string-name>,
          <string-name><surname>Seung</surname> <given-names>H</given-names></string-name>.
          <article-title>Algorithms for non-negative matrix factorization</article-title>.
          <source>Advances in Neural Information Processing Systems</source>.
          <year>2001</year>; (1): <fpage>556</fpage>-<lpage>562</lpage>.
          http://papers.nips.cc/paper/1861-algorithms-for-non-negative-matrix-factorization.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name><surname>Ma</surname> <given-names>Y</given-names></string-name>,
          <string-name><surname>Tresp</surname> <given-names>V</given-names></string-name>,
          <string-name><surname>Daxberger</surname> <given-names>E</given-names></string-name>.
          <article-title>Embedding Models for Episodic Memory</article-title>.
          <year>2018</year> jun; http://arxiv.org/abs/1807.00228.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name><surname>Nickel</surname> <given-names>M</given-names></string-name>.
          <article-title>Tensor Factorization for Relational Learning</article-title>.
          <year>2013</year>; p. <fpage>161</fpage>.
          http://nbn-resolving.de/urn:nbn:de:bvb:19-160568.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Tkaczyk</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szostek</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fedoryszak</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dendek</surname>
            <given-names>PJ</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bolikowski</surname>
            <given-names>Ł</given-names>
          </string-name>
          .
          <article-title>CERMINE: automatic extraction of structured metadata from scientific literature</article-title>
          .
          <source>International Journal on Document Analysis and Recognition (IJDAR)</source>
          .
          <year>2015</year>
          ;
          <volume>18</volume>
          (
          <issue>4</issue>
          ):
          <fpage>317</fpage>
          -
          <lpage>335</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Tresp</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            <given-names>Y.</given-names>
          </string-name>
          <article-title>The Tensor Memory Hypothesis</article-title>
          .
          <year>2017</year>
          ; http://arxiv.org/abs/1708.02918.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Tresp</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ma</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baier</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            <given-names>Y</given-names>
          </string-name>
          .
          <article-title>Embedding learning for declarative memories</article-title>
          .
          <source>Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)</source>
          .
          <year>2017</year>
          ;
          <volume>10249 LNCS</volume>
          :
          <fpage>202</fpage>
          -
          <lpage>216</lpage>
          . doi: 10.1007/978-3-319-58068-5_13.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Wei</surname>
            <given-names>T</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yan</surname>
            <given-names>XY</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fan</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Di</surname>
            <given-names>Z</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            <given-names>J</given-names>
          </string-name>
          .
          <article-title>Do scientists trace hot topics?</article-title>
          <source>Scientific Reports</source>.
          <year>2013</year>; <volume>3</volume>: <fpage>3</fpage>-<lpage>7</lpage>.
          doi: 10.1038/srep02207.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Zeng</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shen</surname>
            <given-names>Z</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fan</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stanley</surname>
            <given-names>HE</given-names>
          </string-name>
          .
          <article-title>The science of science: From the perspective of complex systems</article-title>
          .
          <source>Physics Reports</source>
          .
          <year>2017</year>; <volume>714-715</volume>: <fpage>1</fpage>-<lpage>73</lpage>.
          doi: 10.1016/j.physrep.2017.10.001.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>