<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Literate Sources for Content Dictionaries</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lars Hellstrom</string-name>
          <email>lars.hellstrom@residenset.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Mathematics and Mathematical Statistics, Umeå University</institution>
          ,
          <addr-line>Umeå</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>It is suggested that a LaTeX document could be used as the Literate Programming source of an OpenMath content dictionary. Several approaches to literate programming are reviewed and a possible implementation is sketched. Historically, one of the key features of OpenMath has been the use of content dictionaries for defining the symbols that may appear in formalised mathematical formulae. This allows OpenMath to be relevant for arbitrary mathematics (as opposed to just the K-14 segment of mathematics that is often the primary target of computer mathematics projects), but in order for it to be relevant it is necessary that interested mathematicians can find or produce content dictionaries that are appropriate for their work. It is not realistic that these could mostly be produced by the OpenMath society (there is simply too much mathematics out there, and in addition not very many people working on OpenMath), so the remaining possibility is that many mathematicians who wish to employ OpenMath will have to write up some content dictionaries of their own. But can they do that? They will face a number of obstacles. It's not just the .ocd file, since some of the information that may reasonably be thought of as a natural part of "defining a mathematical symbol" should instead be placed in separate files. Type definitions go into .sts files, and notation has to be supplied from a third (currently not standardised) source. This division is logical from a tool perspective (many tools don't care about the information in the additional files), but it is an extra complication for the author, especially when experimenting and being creative, since the separate files need to be kept in sync. A content dictionary file is XML-with-namespaces, which is not something with which the typical mathematician is familiar.
XML is often criticised for being too verbose (a consequence of its SGML ancestry), but this need not be much of a deterrent here, since the main experience of verbosity is likely to come already when writing mathematical formulae; if you've gotten as far as writing a content dictionary, then you've probably already accepted the &lt;OMA&gt;...&lt;/OMA&gt; as part of the game. Moreover, it must have been obvious from the start that some manner of formal encoding was going to be employed, and among the formal languages that could have been chosen, XML has an unusually good chance of looking familiar, since even mathematicians get exposed to HTML at times. The problem lies instead with the namespaces, which can pretty much be ignored when writing OMOBJs, but not when writing CDs, since the latter mix tags from http://www.openmath.org/OpenMath and http://www.openmath.org/OpenMathCD. Writing a valid content dictionary requires getting the namespace markup right, but the rules for this are not obvious (unlike those for XML-sans-namespaces elements, which one can pretty much intuit from looking at some examples), so an author extrapolating from examples is bound to get</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>confused by di erences in choice of namespace pre xes|both in that two code fragments can
look the same, but aren't, and in that they can look di erent, but are in fact equivalent. Mere
examples aren't enough to get around this hurdle.</p>
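      <p>To make the pitfall concrete, consider a small illustrative pair (constructed for this discussion, not taken from any official CD): the fragments &lt;CDName xmlns="http://www.openmath.org/OpenMathCD"&gt;set1&lt;/CDName&gt; and &lt;cd:CDName xmlns:cd="http://www.openmath.org/OpenMathCD"&gt;set1&lt;/cd:CDName&gt; look different but denote exactly the same element, whereas an unprefixed CDName in a file whose default namespace happens to be http://www.openmath.org/OpenMath is a different element altogether, despite looking identical to the first fragment.</p>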
      <p>Another issue with XML is that there is quite a technological complex constructed around
the basic speci cation, and ordinary mathematicians coming to OpenMath cannot be expected
to have seen much of this beyond the occasional XML document. They will need some form of
quick orientation to this, perhaps particularly to XSLT and the process of validation, but that
is beside the main point of the present paper.</p>
      <p>There is more that can be said than what ts. In my own humble attempts at creating
content dictionaries (several of which are un nished), I have often found myself writing far more
FMPs and CMPs per symbol than there are in the o cial content dictionaries. Upon re ection,
I've realised that some of the things I wrote were perhaps not exactly about de ning/identifying
(the semantics of) the symbol in question, but more about recording some thought that played
a role in crafting the de nition details.</p>
      <p>
        This last point eventually led me to think about Literate Programming [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] as a paradigm that
might apply to writing content dictionaries, since one way of looking at literate programming is
that it merely gives programmers the opportunity to record, in a reasonably organised manner,
the nontrivial thoughts on how their program works that they should form anyway while creating
it. If there were a literate layer on top of the XML document formally defining a content
dictionary, a literate source from which the .ocd files were generated, then I
could use that literate layer to expound upon why things are defined the way they are and
prove (if necessary) that the stated properties suffice for uniquely characterising the symbols
being defined, allowing the XML document to focus on the essentials of the definitions. It is
true that [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] recommends "Every symbol defined in a CD should have at least one example",
"FMPs should be as comprehensive as reasonable", and "If an FMP is given, then the equivalent
(English) CMP should also be given", but one can expect that the typical content dictionary
reader is primarily concerned with how to use the symbols as defined, and would be less than
well served by having to skip past material on how a symbol might alternatively have been
defined. The .ocd file is perhaps best imagined as the reference card, whereas the literate
source file would be the full report.
      </p>
      <p>
        Less personally, one might observe that the creation of new content dictionaries is likely to
become an end-user development effort. Quoting [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]:
      </p>
      <p>The argument for literate programming for end-user developers, especially in
knowledge-intensive problem-solving domains, is that beyond simply solving the
problem, the domain expert wants to share the solution with others. In e ect, the
expert is writing a description of the solution and to the extent that this is also
an executable (perhaps even tailorable) representation of the solution, it becomes a
compelling vehicle for sharing and reusing the artifact.</p>
      <p>
        It should be remarked that the kind of `end-user' imagined in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is somewhat different from
the one (professional mathematicians) at hand here, so there is little reason to believe that the
optimal form of the literate programming that is sought here needs to be as imagined in that
paper. But what forms are there?
      </p>
    </sec>
    <sec id="sec-1b">
      <title>2 Some Approaches to Literate Programming</title>
      <p>
It could be argued that the ancestor of literate programming is the printed book of a program or
code library, where (mostly short) passages of code are interlaced with paragraphs of ordinary
text explaining how they work; a cute example of this, which I've come across for other reasons,
is [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. A catch with this kind of book is however that it usually wouldn't be
machine-readable source even if given as a searchable digital file; the expectation is at best that a
diligent human reader could stitch together the given code fragments into a working program,
often with various uninteresting but necessary blanks (for example input and output)
filled in as appropriate, not that there would necessarily be a reliable automated
procedure for extracting a working program from the book. These texts are then literate, but
not fully programming.
      </p>
      <p>At the other end, it should be mentioned that documentation generators that work with
texts embedded into code|for example Javadoc, Doxygen, and ROBODoc|are usually not
considered literate programming systems, on the grounds that any narrative there is in the
documentation will be subordinate to the formal structures of the programming language.
They do therefore not allow the author to create literature.
2.1</p>
    </sec>
    <sec id="sec-2">
      <title>2.1 Web-style literate programming</title>
      <p>
        Web [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] is Knuth's original literate programming system, created to facilitate the coding of TeX
and MetaFont before the literate programming concept had been coined. It inspired the creation of a
number of similar systems (for example CWeb, FWeb, FunnelWeb, Noweb, Nuweb, and Spider),
and the vast majority of the Computer Science literature on literate programming [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is about
Web or one of its followers, in form or in style.
      </p>
      <p>Web consists of the two programs tangle and weave. The former is a preprocessor with
rather extensive, but specialised, code transformation features, most of which were motivated
by limitations in the Pascal in which TEX and MetaFont are implemented. Of lasting interest
is primarily the concept of modules, which are named code fragments that the programmer can
introduce in an order quite independent of that in which they will be presented to the compiler;
originally this was to overcome a restriction that for example all type de nitions would have
to be made before the rst subroutine de nition, but it turned out to be useful also to stub
out parts of subroutines so that the overall structure would appear more clearly. Weave
is a prettyprinter with some indexing features, providing powerful mathematical typography
in documentation sections as the material there is mostly handed over as-is to TEX for the
typesetting step.</p>
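      <p>As a schematic illustration of the module mechanism (invented for this discussion, not drawn from any actual Web source): a definition such as
@&lt;Initialize the tables@&gt;=
for i:=1 to n do t[i]:=0;
introduces a named code fragment wherever it is convenient to discuss it, and some other piece of code, presented earlier or later in the document, simply contains the reference @&lt;Initialize the tables@&gt;. Tangle then substitutes the fragment for the reference when producing the Pascal source, so the order of presentation in the literate document is independent of the order required by the compiler.</p>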
      <p>Because the original Web actually parsed the code fragments, it would only be applicable
to Pascal programs. One trend among the followers has been to port it to other languages, and
another trend has been to go more language-independent. A later trend has been to move away
from text le as source format, and instead employ opaque formats that require specialised
editors. Success has, on the whole, been limited. For the case of literate content dictionaries,
it is doubtful whether there is anything to salvage from Web; its two main competencies of
prettyprinting and code fragment rearrangement are pretty much irrelevant for an application
to content dictionaries.
2.2</p>
    </sec>
    <sec id="sec-3">
      <title>2.2 The doc/docstrip style of literate programming</title>
      <p>
        The by far most successful form of literate programming is instead the doc/docstrip system used
for the LaTeX2e kernel and a vast number of LaTeX packages. doc [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] began as a LaTeX package
for using LaTeX markup in the comments of a .sty file, by typesetting these and having code
lines wrapped up in a verbatim-like environment. As the amount of commentary grew,
the docstrip utility was created as a means of stripping the comment lines from production
.sty files. The unstripped sources then began to carry the .dtx extension, and it was discovered
that they could contain the "driver" code needed to direct typesetting, so that today the doc
equivalent of "weaving" is to run `latex something.dtx'.
      </p>
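      <p>To give a flavour of the conventions (a minimal sketch; the macro \foo is invented for the illustration), a .dtx file interleaves typeset commentary with guarded code lines:
% \section{Implementation}
% The macro \cs{foo} doubles its argument.
%    \begin{macrocode}
%&lt;*package&gt;
\newcommand{\foo}[1]{#1#1}
%&lt;/package&gt;
%    \end{macrocode}
Running latex on such a file typesets the commentary and displays the code verbatim, whereas docstrip strips the comment lines and guards, leaving only the \newcommand line for the generated .sty file.</p>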
      <p>
        Since docstrip is supposed to copy the code lines from the .dtx source to a generated file,
it is possible to use it for arbitrary programming languages. The docstrip module mechanism
also provides conditional code inclusion and the ability to generate multiple files from the same
source; the latter is particularly useful for projects employing multiple programming languages.
A weakness of doc is that it offers no facilities for marking up code that is not TeX, but
xdoc2 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] reimplements those parts of doc in such a way that additional markup commands and
environments are easy to define, as demonstrated by tclldoc [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>Since a .dtx le can be an arbitrary LATEX document, the doc/docstrip style of literate
programming e ectively equips the `book on a program' ideal with an automated extraction
procedure. It fully supports having a literate layer on top of the base XML of a content
dictionary. It also supports (through a judicious use of modules) keeping the corresponding
parts of .ocd, .sts, and notation les together in the source, although doing that is not at the
beginner di culty level. It provides no assistance with the XML namespace issue.
2.3</p>
    </sec>
    <sec id="sec-4">
      <title>2.3 The document as code</title>
      <p>
        A third approach to literate programming is to unify code and narrative by making both aspects
of the same document; that one aspect becomes the code and another the typeset documentation is
then a matter of making different interpretations, not a matter of syntax. At least one experimental
such system [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] has been described in the CS literature, but here it should be more instructive
to consider a mature system. As it happens, the infrastructure used to typeset the present
paper also exercises this third approach to literate programming, in the form of Fontinst
encoding and metric files, although the literate aspect of these is somewhat of an afterthought.
      </p>
      <p>
        Fontinst [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] can be described as a (mis)use of TeX in the role of a general-purpose
scripting language, for the task of converting industry-standard font metrics to something suitable
for TeX; the typical state of operation is that data are being read from an external file by
\input-ing (thus executing) it, which results in data being written to zero or more other files,
and macros being redefined, but no typesetting. Among the files being read are the encoding
files [
les [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which primarily express a data structure, even if they are often treated more
like imperative sequences of commands. There is however also the literate aspect that they
may be typeset as ordinary LaTeX documents, in which case the encoding commands produce
English phrasings of the expressed data and additional "comment" commands may contribute
narrative to tie it all together.
      </p>
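      <p>A fragment of such an encoding file might look as follows (a schematic sketch in the style of [6], not a quotation from any actual file):
\encoding
\setslot{A}
  \comment{The letter `A'.}
\endsetslot
\endencoding
Read as code, the \setslot command assigns the next slot number to the named glyph; typeset as a LaTeX document, the same file instead produces a readable description of the slot, with \comment contributing the narrative.</p>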
      <p>Traditionally, Fontinst encoding les have been fairly rigid as texts, but could be made far
less so if the condition that processing as code should typeset nothing was lifted.
2.4</p>
    </sec>
    <sec id="sec-5">
      <title>2.4 Miscellanea</title>
      <p>
        Upon review, it was pointed out that [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] is likely another relevant reference point for this work. Even from a cursory glance at it I'd agree, but I have not had time to consider it in depth before the conference.
      </p>
      <p>Figure 1: Sketch of a literate content dictionary source, minus all narrative:</p>
      <p>\begin{OpenMathCD}{set1}
\OMCDlicense % To generate half a page of legalese.
\CDDate{2004-03-30}
\CDStatus{official}
\CDVersion{3}{0} % version, revision
\CDDescription{
  This CD defines the set functions and constructors for basic
  set theory. It is intended to be `compatible' with the
  corresponding elements in MathML.
}
\begin{CDDefinition}{cartesian_product}
\Description{
  This symbol represents an n-ary construction function for
  constructing the Cartesian product of sets. It takes n set
  arguments in order to construct their Cartesian product.
}
\CDRoleApplication
\NotationNassocBinop{50}{\times}{\UnicodeChar{00D7}}
  % Priority, LaTeX command, character (for MathML)
\begin{STSSignature}
  ...
\end{STSSignature}
\begin{CDExample}
  ...
\end{CDExample}
\end{CDDefinition}
...
\end{OpenMathCD}</p>
      <sec id="sec-5-1">
        <title>3 Sketch of an implementation</title>
        <p>The most promising approach seems to be the third one: make the source de ning a content
dictionary a LATEX document, and include in it certain commands and environments that have
as side-e ect to output appropriate material to the corresponding .ocd, .sts, and notation
les. To the novice user, this would look like an ordinary LATEX document that however uses
some fairly structured markup for stating OpenMath symbol de nitions. A sketch of what such
a document could look like, minus all literate narrative, can be found in Figure 1. It is natural
that block structures in the content dictionary|such as the CDDe nition as a whole, individual
symbol de nitions, FMPs, CMPs, etc.|should translate to LATEX environments, whereas more
simple items such as the CD date are well cared for already by simple LATEX commands.
3.1</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>3.1 Reviewing the Basic Design Choice</title>
      <p>There are no doubt those who instinctively feel that source les should be some manner of
wellde ned XML rather than a quirky jumble such as LATEX. Realistically though, any approach
that requires authors to compose what amounts to a minor math paper in XML is going to
alienate a huge share of the target demographic of mathematicians in general. A response
could be that humans are not expected to edit the XML themselves; instead they should use
a WYSIWYG editor to create the wanted XML document. But if that was a satisfactory
alternative, then why do mathematicians use LATEX rather than Word for writing papers?
WYSIWYG might not alienate quite as large a fraction of the demographic as raw XML would,
but it would still alienate far too many.</p>
      <p>
        A related, but separate, concern could be that the source format should provide for
structured representation of mathematical theories, like OMDoc [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] does; indeed, handling content
dictionaries has been presented as an application of OMDoc [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. There are two reasons not
to do this, at least not for the foreseeable future. One is that formalising the structure of
the narrative goes against the spirit of Literate Programming, moving more towards embedded
documentation. It is not a given that it goes so far as to actually give up on being literate, but it
is a risk one must consider. Second, OMDoc is about formalising mathematics, whereas the act
of creating a content dictionary is arguably metamathematics, since it defines (a piece of) the
language for one's mathematical theories. We may know fairly well how to formalise
mathematics, but that is not necessarily the same as formalising metamathematics; at the very least,
wild analogies (such as between multiplication of numbers and application of functions) can
be perfectly good metamathematics and inspire very fruitful choices of notation even though
there is (at least initially) no clear mathematical foundation for making that analogy. (Cf. the
analysis in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] of the role that notation has played in the development of 20th century physics.)
Stressing formalisation could therefore be unnecessarily limiting. On the other hand, there is
nothing in the basic design which would prevent authors from using packages such as sTeX [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]
for formalising the literary parts of the sources, should they prefer to do so. But an outright
requirement to do so would again be likely to alienate many prospective users.
      </p>
    </sec>
    <sec id="sec-7">
      <title>3.2 Steps towards Implementation</title>
      <p>
        The first technical problem, which is perhaps also the largest, that one faces when asking TeX
to output XML is to produce well-formed character data. LaTeX syntax follows different rules
than XML syntax, and whatever is presented to the user must look reasonably consistent, so
the user should not have to manually supply XML entity markup for characters that happen
to be special to XML. A solution to this problem is however halfway implemented; basically
it elaborates on the harmless character strings mechanism the author created for xdoc2 [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ],
        updated to cope with the full Unicode character set, and with XML among the supported
target formats. On top of that, it is not too difficult to implement commands for outputting
arbitrary (up to equivalence and whitespace normalisation) XML fragments.
      </p>
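      <p>Concretely (a hypothetical illustration of the intended behaviour, not finished functionality): an author who writes \&amp; or &lt; in the document source should find the generated file containing &amp;amp; and &amp;lt; respectively, with the translation into XML entity markup handled entirely by the output routines rather than by the author.</p>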
      <p>With this in place, it becomes clear that a LaTeX document can be used as the source for .ocd,
.sts, and notation files, so what remains is to make this reasonably convenient by providing
suitable higher-level commands. The typical way to output an OMS element might for example
be to use a command</p>
      <p>\OMS[hcdbasei]{hcd i}{hnamei}
and an FMP environment could generate not only the FMP element but also the OMOBJ element
it must contain, and any namespace declarations that are needed.</p>
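      <p>For instance (an illustration of the intended correspondence, using an existing standard symbol), \OMS{arith1}{plus} in the source would cause &lt;OMS cd="arith1" name="plus"/&gt; to be written to the generated file, with the optional ⟨cdbase⟩ argument turning into a cdbase attribute when supplied.</p>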
      <p>An interesting point is what the commands for OpenMath objects should typeset. A
perfectly serviceable approach is to have them typeset the same XML code as is being output;
in the examples and FMPs of a content dictionary, fine details in structure and encoding may
well be of great interest to the reader. It could be feasible to alternatively present objects by
typesetting their PopCorn encoding, but the implementation (remember that this would be
done by TeX macros) might become nontrivial. Typesetting as normal mathematical formulae
is likely to be unreasonably difficult, and probably not even desirable.</p>
      <p>High-level commands for writing OpenMath objects could also bring about a dramatic
simplification of certain common coding tasks. FMPs often have an outermost layer stating
`For all a, b, ... in the set S, it holds that ...'. Conceptually this is one thing, but in OpenMath
it has to be encoded as
&lt;OMBIND&gt;
&lt;OMS cd="quant1" name="forall"/&gt;
&lt;OMBVAR&gt; &lt;OMV name="a"/&gt; &lt;OMV name="b"/&gt; ... &lt;/OMBVAR&gt;
&lt;OMA&gt;
&lt;OMS cd="logic1" name="implies"/&gt;
&lt;OMA&gt;
&lt;OMS cd="logic1" name="and"/&gt;
&lt;OMA&gt; &lt;OMS cd="set1" name="in"/&gt; &lt;OMV name="a"/&gt; S &lt;/OMA&gt;
&lt;OMA&gt; &lt;OMS cd="set1" name="in"/&gt; &lt;OMV name="b"/&gt; S &lt;/OMA&gt;
...
&lt;/OMA&gt;
⟨body of formula⟩
&lt;/OMA&gt;
&lt;/OMBIND&gt;
This is an awkward amount of boilerplate code, and it would be much easier on authors if they
could instead simply write in the source
\begin{forallin}{a,b,...}{⟨the set S⟩}
⟨body of formula⟩
\end{forallin}</p>
      <p>
        Another nice thing about using high-level commands for generating the raw XML is that
they can target several formats simultaneously. For notations, there is no established standard,
with at least two .ntn formats [
        <xref ref-type="bibr" rid="ref12 ref14">12, 14</xref>
        ] having been proposed, but the presentation in the
content dictionary collection at www.openmath.org rather relies on explicit XSLT. For the
latter, LaTeX would even serve as something of a compiler (reading high-level descriptions such
as `n-associate binop' and generating low-level XSLT to make it a reality), even though one
should probably not expect it to be capable of handling unusual presentation forms; those who
want to create unusual effects will have to supply the details themselves.
      </p>
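      <p>The generated XSLT might, under this scheme, look roughly as follows (a hypothetical sketch of what a command such as \NotationNassocBinop from Figure 1 could emit for presentation MathML; the actual stylesheet conventions at www.openmath.org differ in detail):
&lt;xsl:template match="om:OMA[om:OMS[@cd='set1' and @name='cartesian_product']]"&gt;
&lt;mrow&gt;
&lt;xsl:for-each select="*[position()&gt;1]"&gt;
&lt;xsl:if test="position()&gt;1"&gt;&lt;mo&gt;&amp;#x00D7;&lt;/mo&gt;&lt;/xsl:if&gt;
&lt;xsl:apply-templates select="."/&gt;
&lt;/xsl:for-each&gt;
&lt;/mrow&gt;
&lt;/xsl:template&gt;
The point of the compiler view is precisely that no author should have to write such templates by hand for every symbol; the high-level description in the source suffices.</p>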
      <p>One question that remains is how to name things. The XML encoding of content dictionaries
favours CamelCase, whereas the LaTeX tradition is rather lower case in situations like this; some
of the CamelCase in Figure 1 looks uncalled-for, although there is merit in using the same
names as in the XML encoding. There are probably other choices of a similar character still
left to identify and make.</p>
      <sec id="sec-7-1">
        <title>Acknowledgements</title>
        <p>Thanks to Christoph Lange, Paul Libbrecht, and others on the OpenMath mailing list for help
with references and explaining the situation of the varying notation systems.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Nelson H. F.</given-names>
            <surname>Beebe</surname>
          </string-name>
          .
          <source>A Bibliography of Literate Programming (version of 11 April</source>
          <year>2012</year>
          ). http://www.math.utah.edu/pub/tex/bib/litprog.html
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>James</given-names>
            <surname>Davenport</surname>
          </string-name>
          .
          <source>On Writing OpenMath Content Dictionaries</source>
          .
          <year>2002</year>
          . http://www.openmath.org/ documents/writingCDs.pdf
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Matthew</given-names>
            <surname>Dinmore</surname>
          </string-name>
          and
          <string-name>
            <given-names>Anthony F.</given-names>
            <surname>Norcio</surname>
          </string-name>
          .
          <article-title>Literacy for the Masses: Integrating Software and Knowledge Reuse for End-User Developers Through Literate Programming</article-title>
          .
          <source>In Information Reuse and Integration</source>
          ,
          <year>2007</year>
          .
          <article-title>IRI 2007</article-title>
          . IEEE International Conference on (pp.
          <volume>455</volume>
          -
          <fpage>460</fpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Lars</given-names>
            <surname>Hellström</surname>
          </string-name>
          .
          <article-title>The tclldoc package and class</article-title>
          .
          <source>LATEX macro package</source>
          ,
          <year>2003</year>
          . http://ctan.org/pkg/ tclldoc
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Lars</given-names>
            <surname>Hellström</surname>
          </string-name>
          .
          <article-title>The xdoc package</article-title>
          .
          <source>LATEX macro package</source>
          ,
          <year>2003</year>
          . http://ctan.org/pkg/xdoc
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Lars</given-names>
            <surname>Hellström</surname>
          </string-name>
          .
          <article-title>Writing ETX format font encoding specifications</article-title>
          .
          <source>TUGboat</source>
          <volume>28</volume>
          :
          <issue>2</issue>
          (
          <year>2007</year>
          ),
          <volume>186</volume>
          -
          <fpage>197</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Jeffrey</surname>
          </string-name>
          , Sebastian Rahtz, Ulrik Vieth, and Lars Hellström.
          <article-title>The fontinst utility</article-title>
          .
          <source>TEX macro package</source>
          ,
          <year>1993</year>
          -
          <year>2009</year>
          . http://ctan.org/pkg/fontinst
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Donald E.</given-names>
            <surname>Knuth</surname>
          </string-name>
          .
          <article-title>Literate Programming</article-title>
          .
          <source>The Computer Journal</source>
          , vol.
          <volume>27</volume>
          , no.
          <issue>2</issue>
          (May
          <year>1984</year>
          ),
          <volume>97</volume>
          -
          <fpage>111</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Kohlhase</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael</given-names>
            <surname>Kohlhase</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Christoph</given-names>
            <surname>Lange</surname>
          </string-name>
          .
          <article-title>sTeX - A System for Flexible Formalization of Linked Data</article-title>
          .
          <source>Article 4 in: Proceedings of the 6th International Conference on Semantic Systems (I-Semantics) and the 5th International Conference on Pragmatic Web, ACM</source>
          ,
          <year>2010</year>
          . doi:10.1145/1839707.1839712
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Kohlhase</surname>
          </string-name>
          .
          <article-title>OMDoc: An Infrastructure for OpenMath Content Dictionary Information</article-title>
          .
          <source>SIGSAM Bulletin</source>
          <volume>34</volume>
          (
          <issue>2</issue>
          ) (
          <year>2000</year>
          ),
          <volume>43</volume>
          -
          <fpage>48</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Kohlhase</surname>
          </string-name>
          .
          <source>OMDoc - An open markup format for mathematical documents [Version 1.2]</source>
          . Springer,
          <year>2006</year>
          . http://omdoc.org/pubs/omdoc1.2.pdf
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Kohlhase</surname>
          </string-name>
          , Christine Muller, and Florian Rabe.
          <source>Notations for Living Mathematical Documents</source>
          . Pp.
          <volume>504</volume>
          {519 in: Intelligent
          <source>Computer Mathematics, Lecture Notes in Computer Science 5144</source>
          , Springer,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Achim</given-names>
            <surname>Mahnke</surname>
          </string-name>
          and
          <string-name>
            <given-names>Bernd</given-names>
            <surname>Krieg-Brückner</surname>
          </string-name>
          .
          <article-title>Literate Ontology Development</article-title>
          . Pp. 753–757 in:
          <source>On the Move to Meaningful Internet Systems 2004: OTM 2004 Workshops, Lecture Notes in Computer Science 3292</source>
          , Springer,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Shahid</given-names>
            <surname>Manzoor</surname>
          </string-name>
          , Paul Libbrecht, Carsten Ullrich, and
          <string-name>
            <given-names>Erica</given-names>
            <surname>Melis</surname>
          </string-name>
          .
          <article-title>Authoring Presentation for OpenMath</article-title>
          . Pp. 33–48 in:
          <source>Mathematical Knowledge Management, MKM'05, Lecture Notes in Computer Science 3863</source>
          , Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Frank</given-names>
            <surname>Mittelbach</surname>
          </string-name>
          .
          <article-title>The doc-option</article-title>
          .
          <source>TUGboat</source>
          <volume>10</volume>
          :
          <issue>2</issue>
          (
          <year>1989</year>
          ),
          <fpage>186</fpage>
          –
          <lpage>197</lpage>
          . An updated version of this paper is part of doc.dtx in the base LaTeX distribution.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>James Dean</given-names>
            <surname>Palmer</surname>
          </string-name>
          and
          <string-name>
            <given-names>Eddie</given-names>
            <surname>Hillenbrand</surname>
          </string-name>
          .
          <article-title>Reimagining literate programming</article-title>
          .
          <source>In: OOPSLA '09 (ISBN 978-1-60558-768-4)</source>
          , pp. 1007–1014. doi:10.1145/1639950.1640072
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>David E.</given-names>
            <surname>Rydeheard</surname>
          </string-name>
          and
          <string-name>
            <given-names>Rodney M.</given-names>
            <surname>Burstall</surname>
          </string-name>
          .
          <source>Computational category theory</source>
          . Prentice Hall, New York,
          <year>1988</year>
          . ISBN 0-13-162736-8. Also available for download from the author's homepage.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Mark</given-names>
            <surname>Steiner</surname>
          </string-name>
          .
          <source>The applicability of mathematics as a philosophical problem</source>
          . Harvard University Press,
          <year>1998</year>
          . ISBN 0-674-00970-3.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>