<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Are Graph Databases Fast Enough for Static P4 Code Analysis?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dániel Lukács</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Gergely Pongrácz</string-name>
          <email>Gergely.Pongracz@ericsson.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Máté Tejfel</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>3in Research Group, Eötvös Loránd University</institution>
          ,
          <addr-line>Martonvásár, Hungary ORCID: 0000-0002-5115-9973</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ericsson Hungary Ltd.</institution>
          ,
          <addr-line>Budapest, Hungary ORCID: 0000-0002-5115-9973</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Faculty of Informatics, Eötvös Loránd University</institution>
          ,
          <addr-line>Budapest, Hungary ORCID: 0000-0001-9738-1134</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>29</fpage>
      <lpage>31</lpage>
      <abstract>
        <p>P4 is a new protocol-programming language for smart switches and virtual network functions linked up in vast software-defined networks (SDNs). Our current work in progress focuses on analysing the execution cost of P4 programs using hierarchical control flow graphs (CFGs). We found that cost analysis of P4 requires storing and working with diverse information about various execution environments. In theory, versatile graph backends, such as graph databases (GDBs), could provide a unifying representation in which we can store, access, and traverse the CFGs together with the environment information. On the other hand, analysis efficiency is a requirement both for large-scale testing and for end-user application. Unfortunately, we cannot conclusively assess, without trying it out in practice, whether GDB disk and memory reads will ruin performance.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <p>In this work, we briefly detail how we realized P4 control flow analysis
based on GDBs, and present our measurements to conclude whether the overhead
inherent in GDBs is justified for our purposes.</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        Softwarization of modern networks introduces new challenges for tools supporting
agile development and analysis of new protocols. P4 is a relatively new
programming language targeting network switches [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. As the overall functioning of the
network requires switches to process packets with an acceptable latency, it is
important for switches, and P4 programs (protocols) executed by the switches, to
execute as fast as possible. The overall goal of our research is to develop a static
cost analysis framework for P4 to assist network engineers in estimating the
execution cost of the P4 program during development, well before deployment.
      </p>
      <p>To perform cost analysis, we take as input the P4 program code, and an
appropriate model of the target environment. Then, we transform this information
into an abstract cost formula. Missing runtime information will be represented in
the formula as variables. In turn, the derived cost formula can be used in many
ways: the cost of a P4 program can be inferred given an existing hardware
configuration, or inversely, we can infer the set of configurations that satisfy a given cost
constraint.</p>
      <p>This paper has two contributions. First, we reformulated our earlier cost
analysis algorithm (see below) as a graph query to enable the reuse of established graph
database (GDB) services (Section 3). Second, we implemented this query, along
with our earlier algorithm in Java, and performed experiments to measure the
performance of the two cost analyses on real-world P4 programs (Section 4). Here,
our intention is to establish the feasibility and limits of performing P4 cost analysis
real-time, i.e. during development, or possibly, at runtime of P4 programs. The
results suggest that – despite the overhead – graph databases are viable platforms
for implementing cost analysis.</p>
      <sec id="sec-2-1">
        <title>1.1. Motivation</title>
        <p>The goals of this paper are motivated by practical – instead of theoretical –
concerns. Modern software engineering involves both transactional and analytical
operations: multiple developers armed with their IDEs asynchronously work
together on a large code base, which is cyclically inspected (augmented with static
analysis and verification tools) for testing and code review purposes. In hope that
we can innovate industrial-scale software – not “just” an algorithm –, we intend
to assess whether we can reuse common database features (instead of
reinventing the wheel) without significant losses in runtime efficiency. Among these are
scalability (through ease of multi-core or multi-machine parallelization), multi-user
concurrency (transaction processing, rollback, etc.), fault tolerance, and security
features. Cost analysis has exponential complexity. For this reason, we rely on
empirical measurements: we expect our algorithm to be near instantaneous for
programs with a small number of execution paths, but we also test the analysis on
programs with a large number of execution paths to illustrate usability limits.</p>
      </sec>
      <sec id="sec-2-2">
        <title>1.2. Earlier work</title>
        <p>
          Our approach to cost analysis of P4 programs is based on control flow graph (CFG)
path-analysis [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. In our earlier work, we introduced an approach that, in essence,
enumerates all execution paths in a CFG G, and derives its average execution
cost as cost(G) = ∑_{p ∈ paths(G)} P(p) · cost(p). That is, the average execution
cost is the sum of the execution costs of the paths, weighted by the probability of
each path occurring.
        </p>
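<p>As a plain-Java illustration (our own toy sketch, not the paper's implementation; the CFG, the costs, and the method names are assumptions), the formula can be evaluated by enumerating every root-to-exit path and summing the probability-weighted path costs:</p>

```java
import java.util.*;

// Toy CFG: adjacency lists plus per-vertex costs. averageCost enumerates every
// root-to-exit path p and sums P(p) * cost(p), with uniform sibling edge
// probabilities (1/n for a vertex with n successors).
public class AvgCost {
    static Map<Integer, int[]> succ = new HashMap<>();   // vertex -> successors
    static Map<Integer, Double> cost = new HashMap<>();  // vertex -> cost

    public static double averageCost(int root) {
        return walk(root, 1.0, cost.get(root));
    }

    // prob and acc carry P(prefix) and cost(prefix) down the recursion.
    private static double walk(int v, double prob, double acc) {
        int[] next = succ.getOrDefault(v, new int[0]);
        if (next.length == 0) return prob * acc;         // exit node: weight the path
        double sum = 0.0;
        for (int w : next)                               // uniform sibling probability 1/n
            sum += walk(w, prob / next.length, acc + cost.get(w));
        return sum;
    }

    public static void main(String[] args) {
        // Diamond CFG: 0 -> {1,2} -> 3, all vertex costs 1; both paths cost 3.
        succ.put(0, new int[]{1, 2});
        succ.put(1, new int[]{3});
        succ.put(2, new int[]{3});
        for (int v = 0; v <= 3; v++) cost.put(v, 1.0);
        System.out.println(averageCost(0));              // prints 3.0
    }
}
```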
        <p>In the main section of that work, we managed to decompose path costs into
compositions of conditional expectations that encoded the dependence of a program
statement on the effect of previous statements. With this, it became possible
to analyse the components separately. Moreover, this step formalized the input
requirements of our analysis either as conditional cost formulas, or as models from
which these formulas can be derived.</p>
        <p>In that work, we also presented a procedural algorithm to calculate the above
formula given a CFG. The algorithm, starting from the root, traverses the nodes of
each CFG path in a breadth-first manner, collecting both the cost and probability
information, and the conditions node-by-node on each path. The algorithm also
features an optimization: many CFG paths share the same prefix, and the traversal
avoids processing these segments multiple times.</p>
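<p>The prefix-sharing optimization can be sketched in plain Java under the uniform-probability, unit-cost assumptions used later in the paper; the names and the topological-order formulation are our own, not the paper's code. Instead of revisiting shared prefixes, each vertex carries the aggregated probability mass and probability-weighted cost sum of all prefixes ending at it:</p>

```java
import java.util.*;

// Prefix-sharing variant of the path analysis: each vertex of the DAG is
// processed once, carrying p[v] (total probability of all prefixes ending at v)
// and w[v] (their probability-weighted cost sum). Exits contribute w[v] to the
// average, so shared prefixes are never re-traversed.
public class PrefixSharing {
    public static double averageCost(int n, int[][] succ, double[] cost, int root) {
        int[] indeg = new int[n];
        for (int[] out : succ) for (int u : out) indeg[u]++;
        double[] p = new double[n], w = new double[n];
        p[root] = 1.0; w[root] = cost[root];
        Deque<Integer> queue = new ArrayDeque<>(List.of(root));
        double avg = 0.0;
        while (!queue.isEmpty()) {
            int v = queue.poll();
            if (succ[v].length == 0) { avg += w[v]; continue; }  // exit vertex
            double q = 1.0 / succ[v].length;                     // uniform sibling probability
            for (int u : succ[v]) {
                p[u] += p[v] * q;
                w[u] += (w[v] + p[v] * cost[u]) * q;             // extend all prefixes by u at once
                if (--indeg[u] == 0) queue.add(u);               // topological order on the DAG
            }
        }
        return avg;
    }

    public static void main(String[] args) {
        int[][] succ = { {1, 2}, {3}, {3}, {} };                 // diamond CFG
        double[] cost = { 1, 1, 1, 1 };
        System.out.println(averageCost(4, succ, cost, 0));       // prints 3.0
    }
}
```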
        <p>In Section 3, we reformulated this algorithm as a GDB query in Gremlin.
Precisely comparing the two notations is difficult, since in Gremlin, we only program
one traverser, which is automatically split by Gremlin when it meets a branching
point in the CFG. Without actually looking behind the abstraction and
inspecting the GDB implementation, we can only guess that the query denotes the same
behavior. On the other hand, this abstract formulation has advantages in itself,
beyond those GDB features listed earlier. It generalizes our original algorithm to
CFGs represented with different schemas, and it is ready to be parallelized,
specifically at those points where the traverser meets a branching point. In Section 4
of this work, we use empirical claims to argue that these benefits can be achieved
without a significant loss in efficiency.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2. Related work</title>
      <p>
        A question similar to our title was also asked by [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] about the applicability of GDBs to
static analysis (SA) in general. The authors emphasize the advantage of
schemaless GDBs over RDBMSs in addressing cross-cutting concerns in SA queries. By
extending vertices and edges with new properties, data flow, CFG, call graph, type
hierarchy, etc. information can be overlaid on the same graph structure (usually
the syntax tree). In turn, these overlays can be queried using one universal graph
query. The authors also tested the practical feasibility of GDBs for SA: they used
Neo4J to store overlays on 2 million lines of Java source code and formulated
graph queries in the declarative Cypher language. They found that loading took
around 10 times longer than javac compilation, that queries took 3-5 times longer with a
cold (as opposed to warm) cache, and that query response time scaled linearly with
program size. As even their slowest query took around 10 seconds on this large code
base, they conclude that GDBs are indeed feasible for representing static analysis
data structures.
      </p>
      <sec id="sec-3-1">
        <title>Table 1: Features of the tested GDB providers</title>
        <p>TinkerGraph 3.4.4: in-memory yes; data model native.
Neo4J 0.7-3.2.3: in-memory no; data model native.
OrientDB 3.1.0M3: in-memory no; data model document; lightweight edges yes.
JanusGraph-Cassandra 0.4.0: in-memory no; data model wide-column.</p>
      </sec>
      <sec id="sec-3-5">
        <p>
          In this paper, we implemented our algorithm as a graph query in Gremlin [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ].
Gremlin is a compositional programming language embedded in Java (and many
other languages) for describing graph traversals. In Gremlin, complex traversals
are realized as compositions of elementary traversals, called steps (vertex and edge
selections, conditions, repetitions, etc.). The language-specific Gremlin traversal is
compiled to the language-agnostic bytecode of the Gremlin traversal machine. This
traversal machine interacts with the graph database backends (providers ) through
a common graph API.
        </p>
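<p>Since TinkerPop itself is not included in this snippet, the compositional idea can be pictured with a toy model in plain Java, where each "step" is a function from a stream of elements to a new stream and a traversal is the composition of its steps; the names are ours, not Gremlin's:</p>

```java
import java.util.*;
import java.util.function.*;
import java.util.stream.*;

// Toy model of Gremlin-style step composition: a "step" maps a stream of
// vertices to a new stream, and a traversal is just the composition of steps.
public class Steps {
    static Map<String, List<String>> out = Map.of(
            "a", List.of("b", "c"),
            "b", List.of("d"),
            "c", List.of("d"),
            "d", List.of());

    // An out()-like step: expand each vertex to its successors.
    static Function<Stream<String>, Stream<String>> outStep =
            s -> s.flatMap(v -> out.get(v).stream());

    // A has()-like step: keep only vertices satisfying a predicate.
    static Function<Stream<String>, Stream<String>> filterStep(Predicate<String> p) {
        return s -> s.filter(p);
    }

    public static void main(String[] args) {
        // Compose out().out().hasId("d") and run it from vertex "a".
        Function<Stream<String>, Stream<String>> traversal =
                outStep.andThen(outStep).andThen(filterStep("d"::equals));
        long hits = traversal.apply(Stream.of("a")).count();
        System.out.println(hits);   // paths a->b->d and a->c->d both reach "d": prints 2
    }
}
```

The splitting of one programmed traverser at branching points corresponds to the `flatMap` in the out()-like step above.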
        <p>
          One important feature of Gremlin is that it supports many graph providers: very
different providers can be exchanged without any modification of the Gremlin
queries. The survey in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] provides a comprehensive classification of the most
important graph backends as of 2019. In this work, we only utilize a fraction of
these. (TinkerGraph, intended mostly for testing purposes, is not in the survey.)
        </p>
        <p>Table 1 lists some of the features of the GDB providers we tested. Apache
TinkerGraph is the only in-memory GDB; the other GDBs have to cache persisted
data as part of the queries. All providers are NoSQL databases: unfortunately,
we could not find RDBMS-based GDBs that are compatible with modern Gremlin
APIs. Native GDBs are systems that were built specifically to store graphs, as
opposed to systems that “retrofit” other NoSQL data models, such as
document-based or wide-column DBs. In practice, this means that native GDBs use efficient,
close-to-machine implementations of the typical graph representations (adjacency
lists, edge lists, etc.). This does not mean non-native data models cannot represent
graphs efficiently. OrientDB, for example, supports lightweight edges: this means
that vertices may store pointers to each other’s location (as opposed to storing a
vertex index, on which a costly lookup is performed). While some GDBs support
utilization of multiple cores for a single query, we used each GDB in single-core
mode to aid comparability.</p>
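<p>The difference between index-based adjacency and lightweight, pointer-like edges can be sketched in plain Java; both classes are our own illustration, not OrientDB internals:</p>

```java
import java.util.*;

// Two ways to store adjacency. IndexedVertex stores neighbour IDs and needs a
// lookup table on every hop; LightVertex stores direct object references,
// mimicking "lightweight edges" / index-free adjacency.
public class Adjacency {
    static class IndexedVertex {
        long id;
        List<Long> neighbourIds = new ArrayList<>();
    }
    static class LightVertex {
        long id;
        List<LightVertex> neighbours = new ArrayList<>();   // direct pointers
    }

    // Returns the id reached by one hop in each representation.
    public static long[] demo() {
        Map<Long, IndexedVertex> index = new HashMap<>();
        IndexedVertex i1 = new IndexedVertex(), i2 = new IndexedVertex();
        i1.id = 1; i2.id = 2;
        index.put(1L, i1); index.put(2L, i2);
        i1.neighbourIds.add(2L);
        // Each hop pays for a map/index lookup:
        IndexedVertex viaIndex = index.get(i1.neighbourIds.get(0));

        LightVertex l1 = new LightVertex(), l2 = new LightVertex();
        l1.id = 1; l2.id = 2;
        l1.neighbours.add(l2);
        // The hop is a plain reference dereference, no lookup:
        LightVertex viaPointer = l1.neighbours.get(0);

        return new long[]{viaIndex.id, viaPointer.id};
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(demo()));   // prints [2, 2]
    }
}
```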
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Experiment design</title>
      <p>To answer the presented question, we selected one small, and one large example P4
CFG, ran the algorithm in various configurations, and measured runtime. In this
section, we describe these examples and algorithms.</p>
      <sec id="sec-4-1">
        <title>3.1. P4 use cases</title>
        <p>For profiling the algorithm, we selected two publicly available P4 programs, and
then selected one CFG from each. In this section, we describe the selected
components, as well as our motivation for selecting them instead of other components.
We collected the graph characteristics of the program components influencing our
choices into Table 2.</p>
        <p>
          The small example – chosen for testing and illustration purposes – is the
ingress component from basic_routing-bmv2.p4 [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], depicted in Figure 1.
This switch program essentially describes a simple L3 IP router, and was found
among the test cases of the official P4 reference compiler, P4C. The simplicity of
P4 is well illustrated by the fact that ingress is the most complicated component
of this L3 switch program.
        </p>
        <p>This component is executed on a packet that was previously parsed into the
structure named hdr, which stores the IP4 header in its ipv4 field. First, it
declares various match-action tables (packet matching algorithms) and defines actions
(packet transformation algorithms) that are executed on matching packets. Tables
basically map packets to actions. For terseness, we omitted the full declarations
and definitions. Note, that tables are usually not defined in the P4 programs: these
are received from the SDN controller during runtime. Then, the program describes
the actual control flow. The packet is to be matched by tables port_mapping, bd,
and ipv4_fib. In case that last table mapped the packet to action on_miss,
ipv4_fib_lpm is applied as well. Finally, table nexthop is applied.</p>
        <p>
          As a real-world, large-size use case, for conducting plausible measurements,
we chose the egress component of switch.p4 [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]. This switch program is
capable of handling all of the most important protocols used in L2/L3 data planes.
Unfortunately, it was written in version 14 of the P4 language (P4_14), so we had
to use P4C to transpile it to the current version (P4_16). It is also unmaintained,
so an older version of P4C is required for this process.
        </p>
        <p>Table 2: Graph characteristics of the candidate components (ParserImpl,
ingress, and egress of basic_routing-bmv2.p4 and of switch.p4): |V|, |E|,
whether the component was chosen, the number of path prefixes and full paths,
and the minimum, maximum, and average cost.1</p>
        <p>1 For these measurements, we used a simplified cost model, and so the cost column displays
the average number of basic blocks visited when traversing from entry to exit.</p>
        <p>Another candidate from switch.p4 was ingress. Unfortunately, this
component has so many execution paths that our algorithm ran out of RAM, and thus it
could not finish in a reasonable time. In practice, we consider such large controls
extreme cases: these were specific to P4_14, as P4_16 introduced custom
controls that enabled better separation of concerns (resulting in smaller CFGs).</p>
        <p>We also decided not to measure ParserImpl. It has the same size as egress,
yet it is very flat, promising very fast executions. Unfortunately, it contains a
self-loop, and cycles are deprecated in current versions of the P4 specification.</p>
        <p>Figure 2: The cost analysis query in the Java variant of Gremlin.
Double costAnalysis(GraphTraversalSource g,
                    Consumer&lt;Traverser&lt;Edge&gt;&gt; f,
                    Object sid,
                    Object[] eids) {
  CostState cs = g.withSack(new CostState())
      .V(sid)
      .repeat(__.outE().as("e")
              .sideEffect(f)
              .inV())
      .until(__.hasId(P.within(eids)))
      .sack()
      .fold(new CostState(),
            (s, o) -&gt; s.merge((CostState) o))
      .next();
  return cs.getCost();
}</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. CFG-based cost analysis algorithm</title>
        <p>
          In the current paper, we profile a plain Java implementation of the algorithm
presented in our earlier work. Beyond that – considering our motivation in the
Motivation section to extend this software with application-centered features –,
we adapt this algorithm to GDBs as well. We chose the Gremlin query language
to formulate the algorithm, as according to [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], it is well supported across GDBs
utilizing different data models.
        </p>
        <p>Figure 2 depicts the query we constructed in the Java 8 language variant of
Gremlin 3.4.4. Only CostState is a custom class, the rest can be understood
using the documentation of the Java and Gremlin standard libraries. CostState
is basically a pair that keeps account of the probability product and cost sum on
the path being traversed.</p>
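<p>The paper does not list CostState itself; a minimal sketch consistent with this description (a probability product and a cost sum per path, merged into a probability-weighted average when paths are collected) might look like this, with all names and the start()/merge() split being our assumptions:</p>

```java
// Sketch of a CostState-like sack value: prob is the probability product and
// cost the cost sum of one traversed path. merge() folds finished path states
// into a probability-weighted average of the path costs.
public class CostState {
    double prob;
    double cost;

    CostState() { this(0.0, 0.0); }                 // neutral element for merge()
    CostState(double prob, double cost) { this.prob = prob; this.cost = cost; }

    static CostState start() { return new CostState(1.0, 0.0); }  // sack at the root

    // Per-edge update while traversing one path: multiply in the edge
    // probability, add the destination vertex's cost.
    CostState extend(double edgeProb, double vertexCost) {
        return new CostState(prob * edgeProb, cost + vertexCost);
    }

    // Combine another path's state: probabilities add up, cost becomes the
    // probability-weighted mean of the path costs.
    CostState merge(CostState o) {
        double p = prob + o.prob;
        double c = p == 0 ? 0 : (prob * cost + o.prob * o.cost) / p;
        return new CostState(p, c);
    }

    double getCost() { return cost; }

    public static void main(String[] args) {
        // Two paths of a diamond CFG, each with probability 1/2 and cost 3:
        CostState a = CostState.start().extend(0.5, 1).extend(1.0, 1).extend(1.0, 1);
        CostState b = CostState.start().extend(0.5, 1).extend(1.0, 1).extend(1.0, 1);
        System.out.println(new CostState().merge(a).merge(b).getCost());  // prints 3.0
    }
}
```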
        <p>The traversal starts traversing the graph from the vertex with the provided ID,
and in each step creates a new copy of its local CostState, with its information
updated by function f. This function accesses the current path history in the
Traverser (assumed to include only named elements, which are the edges, in
this case), queries the probability of the current edge (conditional on the path) and
the cost of the destination vertex of that edge (conditional on the path) to update
the probability product and cost sum belonging to this path in the sack. The
copying ensures that parallel paths will not affect the state of each other. When
the exit nodes are reached, the states from all paths are collected, weighted and
summarized into one number, the average cost.</p>
        <p>While the high-level nature of GDB query languages makes it difficult to map
the Java implementation to the query language exactly, we believe this naive query
approximates the runtime characteristics of the original algorithm reasonably well.</p>
        <p>
          In comparison to our earlier paper [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], here we applied an extremely
oversimplified environment model (and used the predicted value only for checking
consistency). Namely, all vertices have a cost of 1 unit, and sibling edges have uniform
probability (that is, if the source vertex of an edge e has n outgoing edges, then e
has probability 1/n).
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Evaluation</title>
      <sec id="sec-5-1">
        <title>4.1. Configuration</title>
        <p>Here, we describe the machine environment in which we conducted the experiment,
and the measured results regarding two P4 code samples.</p>
      </sec>
      <sec id="sec-5-1b">
        <title>4.2. Case basic_routing-bmv2.p4/ingress</title>
        <p>We measured algorithm runtime differently for the two cases. For the small
case, we simply repeated the experiment 100 times to average out random noise.
These results are depicted in Table 4.</p>
        <p>Table 4: Runtime by provider (µs): Java 172.16; TinkerGraph 1964.64;
Neo4J 2517.68; OrientDB 5253.09; JanusGraph 6728052.78.</p>
        <p>In this small case, the GDB overhead results in performance at least one order of
magnitude worse than the Java implementation, but still, the CFG was
handled almost instantaneously by most backends. (The large, 6-second overhead of
JanusGraph was probably caused by improper configuration on our part.)</p>
        <p>Not all cost analysis approaches can provide this performance: traditional
syntax-tree-transformation-based cost analysis methods are expected to perform many
orders of magnitude worse.</p>
        <p>
          We can expect this performance to be stable even if new environment
information is added to the model. Expanding nodes (as described in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]) into sub-CFGs
should only result in a linear increase (unless we have no information about the input
distribution of the expanded component, in which case this must be inferred from
the incoming paths).
        </p>
        <p>2All queries were executed on a single core.</p>
      </sec>
      <sec id="sec-5-2">
        <title>4.3. Case switch.p4/egress</title>
        <p>For the large case, we examined what happens if we only process an initial segment,
or prefix, of the graph. For each chosen path length d, we selected the subgraph
induced by the vertices within distance d of the start vertex. We then applied the
algorithm to this subgraph, and measured its runtime. We picked 10 path lengths
as percentages of the longest path P in the CFG, so the prefixes contained the vertices
at most (1/10)|P|, (2/10)|P|, . . . , |P| distant from the start vertex (path lengths can be
greater because of joining roundabout paths). For each length, we repeated the experiment 25 times
and averaged the samples. The results are depicted in Figure 3.</p>
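<p>The prefix selection can be sketched as a breadth-first search that stops expanding at the distance bound (our own minimal version, not the measured implementation):</p>

```java
import java.util.*;

// Select the prefix subgraph: all vertices at BFS distance <= bound from the
// start vertex. Vertices reached again through longer roundabout paths keep
// their shortest-path distance, as in a standard BFS.
public class Prefix {
    public static Set<Integer> prefixVertices(int[][] succ, int start, int bound) {
        Map<Integer, Integer> dist = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>();
        dist.put(start, 0);
        queue.add(start);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            if (dist.get(v) == bound) continue;        // do not expand past the bound
            for (int u : succ[v])
                if (!dist.containsKey(u)) {
                    dist.put(u, dist.get(v) + 1);
                    queue.add(u);
                }
        }
        return dist.keySet();
    }

    public static void main(String[] args) {
        int[][] succ = { {1, 2}, {3}, {3}, {4}, {} };   // diamond plus a tail vertex
        System.out.println(prefixVertices(succ, 0, 2)); // vertices within distance 2
    }
}
```

The subgraph induced by the returned vertex set is then fed to the cost analysis as in the experiment above.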
        <p>Almost all charts show a similar curve: presumably, the 40%-length prefix
introduces a large number of paths, but this number does not grow significantly in
longer prefixes. We speculate that this CFG has a diamond shape, where
the diamond resides at around (4/10)|P| distance from the start vertex.</p>
        <p>Interestingly, all charts also consistently show a convex valley at the 60% mark,
that is, CFGs with longer and possibly more numerous paths were processed faster.
This phenomenon possibly indicates cache warming, or it may be an interplay of
traversal implementations and CFG topology.</p>
        <p>Among all the providers, the Java algorithm was the fastest, closely followed
by the query propelled by the in-memory TinkerGraph. This is a positive result,
implying that the query and the original algorithm indeed possess similar
runtime characteristics. But, since Gremlin is a bytecode-compiled language, the
difference is not necessarily explained by the overhead, and instead hints at an
actual deviation in the formulations.</p>
        <p>Somewhat surprisingly, Neo4J (a persistent database) performed quite close to
the in-memory solutions. Neo4J’s advantage over the other persistent databases is
most likely due to the combination of efficient caching and its being a native GDB:
the space-efficient representation requires less file-level access, which enables Neo4J
to behave similarly to the in-memory graphs.</p>
        <p>
          In this experiment, non-native persistent GDBs showed weaker
performance characteristics, but we should note that this could be an effect of the design
of our graph query. It concerns only the traversal itself, while data access was
implemented on the heap. Adding this data to the graph itself, and including more
features (such as the processing of sub-CFGs, described in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]) may change these
characteristics. Using dedicated configurations on these databases (instead of the
default ones) may also introduce significant improvements.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion</title>
      <p>In this paper, we reformulated our earlier procedural P4 cost analysis algorithm as
a graph database query. Our motivation is to leverage built-in GDB features, such
as scalability and multi-user concurrency, but it was not clear whether the GDB overhead
would force us to trade off analysis efficiency. To substantiate that using GDBs is viable
for P4 cost analysis, we measured how our GDB query – backed by various backends
– compares against our earlier algorithm, over large, real-world P4 use cases.</p>
      <p>To answer the question in the title, our results imply that, yes, we can utilize
GDBs without losing much efficiency. While GDB backends must be chosen
and configured properly for the purpose, the portability of the query language will
presumably make these efforts feasible as well.</p>
      <p>We should note that these measurements are far from conclusive yet. Both our
original algorithm and the query presented here can be considered naive: while
they can analyse small CFGs in real time, future optimizations (e.g.
parallelization) may enable faster processing even for very large CFGs (such as those in
switch.p4) as well. We treated all GDBs as black boxes: precise configuration
of these may drastically improve their runtime behaviour. Devising further graph
queries in languages other than Gremlin would enable us to investigate more graph
backends (e.g. compare various in-memory GDBs against each other). We leave
the in-depth exploration of these topics to future work.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Besta</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peter</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerstenberger</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fischer</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Podstawski</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barthels</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alonso</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hoefler</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <source>Demystifying Graph Databases: Analysis and Taxonomy of Data Organization, System Designs, and Graph Queries</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Bosshart</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Daly</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gibb</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Izzard</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McKeown</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rexford</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schlesinger</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Talayco</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vahdat</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Varghese</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Walker</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>P4: Programming Protocol-independent Packet Processors</article-title>
          .
          <source>SIGCOMM Comput. Commun. Rev</source>
          .
          <volume>44</volume>
          ,
          <issue>3</issue>
          (
          <year>July 2014</year>
          ),
          <fpage>87</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Lukács</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pongrácz</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Tejfel</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Performance guarantees for P4 through cost analysis</article-title>
          .
          In
          <source>Informatics 2019, IEEE 15th International Scientific Conference on Informatics, Poprad, Slovakia, November 20-22</source>
          (
          <year>2019</year>
          ), W. Steingartner, Štefan Korecko, and A. Szakál, Eds., IEEE, pp.
          <fpage>242</fpage>
          -
          <lpage>247</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          P4 Language Consortium.
          <article-title>basic_routing-bmv2.p4, a small test case for the official P4 reference compiler</article-title>
          ,
          <source>P4C</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          P4 Language Consortium.
          <article-title>switch.p4, a large P4 project handling protocols of the L2/L3 data plane</article-title>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Rodriguez</surname>
            ,
            <given-names>M. A.</given-names>
          </string-name>
          <article-title>The Gremlin graph traversal machine and language (invited talk)</article-title>
          .
          <source>Proceedings of the 15th Symposium on Database Programming Languages - DBPL</source>
          <year>2015</year>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Urma</surname>
            ,
            <given-names>R.-G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Mycroft</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <article-title>Source-code queries with graph databases - with application to programming language usage and evolution</article-title>
          .
          <source>Science of Computer Programming</source>
          <volume>97</volume>
          (
          <year>2015</year>
          ),
          <fpage>127</fpage>
          -
          <lpage>134</lpage>
          . Special Issue on
          <article-title>New Ideas and Emerging Results in Understanding Software</article-title>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>