<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Toward a Privacy Enhancing Framework in E-Government</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Wiem Hammami</string-name>
          <email>wiem.hammami@ymail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lamjed Ben Said</string-name>
          <email>lamjed.bensaid@isg.rnu.tn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sameh Hadouaj</string-name>
          <email>hadouaj@yahoo.fr</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>François Charoy</string-name>
          <email>Francois.Charoy@loria.fr</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Khaled Ghedira</string-name>
          <email>khaled.ghedira@isg.rnu.tn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Intelligent Information Engineering Laboratory Higher Institute of Management of Tunis 41</institution>
          ,
          <addr-line>Rue de la Liberté, Cité Bouchoucha 2000 Le Bardo, Tunis</addr-line>
          ,
          <country country="TN">Tunisia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Loria, Lorrain Research Laboratory in Computer Science and its Applications Campus Scientifique, 54506 Vandoeuvre-lès-Nancy</institution>
          ,
          <addr-line>Cedex</addr-line>
          ,
          <country country="FR">France</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>E-government involves data sharing between different partners such as citizens and government agencies. The use of personal data in such a cooperative environment must therefore be lawful and limited to legitimate purposes, so issues related to data protection, such as privacy, have to be considered. This paper adopts a multi-agent based approach to manage privacy concerns in e-government systems. The proposed model provides a mechanism for e-government systems to evaluate the trust degree reached by digital government processes. For this purpose, the concept of responsibility proposed in multi-agent systems and the access rights used in security models are integrated in this work. The research provides an evaluative framework for the trust degree related to an e-government process.</p>
      </abstract>
      <kwd-group>
        <kwd>E-government</kwd>
        <kwd>privacy</kwd>
        <kwd>trust</kwd>
        <kwd>multi-agent systems</kwd>
        <kwd>simulation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>
        Privacy refers to the ability of individuals to control the collection, retention, and
distribution of information about themselves [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In the context of e-government,
privacy is a critical issue, as an increasing amount of private data is shared
between different agencies. For example, to access a public service online, citizens
must fill in forms that require Personally Identifiable Information (PII) such as
their name, social security number, or credit card number. Citizens need to know whether
their PII is used in the right way and for the right purpose. This can be
achieved by an enhanced ability to control their personal information. In this
paper, we focus on data privacy, in particular on the protection of personal data
exchanged, processed and stored in e-government systems. Since citizens act at the front
office side of the e-government system, they do not know what happens to their
personal information handled at the back office side by government agencies. Agent
technology is a suitable solution for this situation, as agents can act on behalf of the
user. An agent is a computer system situated in some environment that is capable
of autonomous actions in this environment in order to meet its design objectives [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
The main goal of this paper is to propose a mechanism that controls the use of personal
information by e-government agents, based on restrictions imposed on their behavior,
and to evaluate the trust degree related to an e-government process.
      </p>
      <p>The rest of this paper is structured as follows. Section 2 presents
different visions of privacy protection in the literature. Section 3 describes our
proposed model, including the fundamental concepts used and their formal
representations. Section 4 is devoted to experiments and simulation results.
Section 5 compares our work with other models proposed in the
literature. Finally, Section 6 summarizes the contribution of this work and presents
conclusions and future work.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Related works</title>
      <p>
        Many approaches have been used to manage privacy concerns. We note in particular
those based on user preferences, such as the Platform for Privacy Preferences (P3P)
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. P3P provides a technical mechanism to inform web site users about a privacy policy
before they release their personal information. However, P3P does not provide a
mechanism for ensuring that sites act according to their policies [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. We also
mention approaches based on security modeling, such as access control decision
systems. An example is the Role-Based Access Control (RBAC) model [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], which
manages privacy through access control: users are assigned to
roles in order to obtain the permissions that allow them to perform particular job functions.
However, both P3P and RBAC lack a mechanism that ensures the protection of data after
its collection.
      </p>
      <p>
        There are also a number of agent-based privacy schemes in the literature, notably
Hippocratic Multi-Agent Systems (HiMAS) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The HiMAS model defines the concept of the
private sphere of an agent or a user, which makes it possible to structure and represent the data
involved in privacy management. However, HiMAS does not define metrics for
trust evaluation. Existing approaches are generally concerned with privacy
protection in the data collection phase, whereas data also needs to be controlled after its
collection by e-government systems. Our main contribution is a new
privacy scheme, based on multi-agent systems, that addresses these drawbacks of
existing work.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3 The Privacy Enhancing Model</title>
      <p>In this section, we introduce our multi-agent based model, which we call ABC
(Agent-Based Control). First, we describe the concepts used. Then, we present the ABC
mechanism, and finally we describe our proposed techniques.</p>
      <p>The ABC model makes it possible to manage privacy concerns in e-government systems.
It offers a mechanism based on a set of privacy rules and on information about
the parties' rights, roles, responsibilities and restrictions, in order to make a
statistical assessment of trust. Consequently, the ABC model governs the subsequent
authorizations to transfer private data.</p>
      <sec id="sec-3-1">
        <title>3.1. Model description</title>
        <p>In the ABC model (see Fig. 1), we define two kinds of agents: Admin agents and AP
agents (Authorization Provider agents). Admin agents represent the staff working in
government agencies. AP agents are in charge of providing authorizations to Admin
agents, allowing them to communicate with each other or to access objects in the system.
Each Admin agent in our model plays a set of roles (e.g. tax controller, mayor,
etc.). A role includes a set of responsibilities (e.g. for the mayor role: sign documents,
validate, etc.) that are restricted by access rights (e.g. read only, write, read-write, etc.).
These access rights are used to protect the resources, and they are managed by a set of
privacy rules. Access rights are also justified by a specific context: what is
appropriate in one context can be a violation of privacy in another.</p>
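        <p>As an illustration, the structure above (an Admin agent plays roles, a role groups responsibilities, and each responsibility is restricted by access rights on a resource) can be sketched with plain data structures. This is a hypothetical encoding, not code from the ABC implementation; all names are illustrative.</p>
        <preformat>
```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Responsibility:
    name: str                 # e.g. "sign_documents"
    access_rights: frozenset  # e.g. {"read", "write"}
    resource: str             # protected object the responsibility acts on

@dataclass
class Role:
    name: str                                    # e.g. "mayor"
    responsibilities: list = field(default_factory=list)

@dataclass
class AdminAgent:
    name: str
    roles: list = field(default_factory=list)

    def permitted(self, action, resource):
        """An action on a resource is allowed only if some responsibility
        of some role grants the matching access right."""
        for role in self.roles:
            for resp in role.responsibilities:
                if resp.resource == resource and action in resp.access_rights:
                    return True
        return False

mayor = Role("mayor", [Responsibility("sign_documents",
                                      frozenset({"read", "write"}), "permits")])
agent = AdminAgent("A1", [mayor])
```
        </preformat>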
      </sec>
      <sec id="sec-3-2">
        <title>3.1.1. Definition of agent responsibility</title>
        <p>We define responsibilities as the restricted behavior of a given agent. In other words,
a responsibility is the behavior that the system expects from an agent based on
restriction rules (RR). Restriction rules are used to manage agent
behavior at an internal level (e.g. producing temporary results in a standard format
before continuing execution, requesting specific authorizations before performing
some actions, etc.). To illustrate the concept of responsibility, consider the following
example: suppose that A1 is an agent responsible for sending e-mails to agent A2
(we suppose that the e-mails are confidential). When we observe A1's behavior, we find
that A1 sends e-mails to agent A2 and, at the same time, sends a copy to agent A3.
Thus, A1 did not fulfil its responsibility, as some restriction rules were not
respected.</p>
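        <p>The e-mail example can be made concrete: a restriction rule lists the recipients an agent is responsible for, and any observed send outside that list counts as a violation. A minimal sketch, with illustrative names only:</p>
        <preformat>
```python
def check_responsibility(allowed_recipients, observed_sends):
    """Return the recipients that violate the restriction rule, i.e.
    observed sends outside the agent's declared responsibility."""
    return {r for r in observed_sends if r not in allowed_recipients}

# A1 is responsible only for mailing A2, but is observed copying A3.
violations = check_responsibility({"A2"}, ["A2", "A3"])
print(violations)  # {'A3'}: A1 did not fulfil its responsibility
```
        </preformat>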
      </sec>
      <sec id="sec-3-3">
        <title>3.1.2. Definition of privacy rules</title>
        <p>
          Citizens must have control over their personal information handled by government
systems. The Fair Information Practices (FIPs) [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] are an example of control enforced
by legislation, through principles such as limiting collection and disclosure, identifying purposes, etc.
        </p>
        <p>In the ABC model, we define the following privacy rules, which comply
with the FIPs and constitute the core needed to test and apply the ABC model:
- R1: each agent must fulfil its responsibilities.</p>
        <p>- R2: each agent has access only to the objects needed to carry out its set of
responsibilities.</p>
        <p>- R3: an agent cannot use the context to access linked data outside the set of its
responsibilities.</p>
        <p>
          For the privacy rule specifications, we rely on the notation of rule-based
systems [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] used in artificial intelligence, such that:
R1: Agent (A) ∧ Responsibility (Re) ∧ Responsible-for (A,Re) → Authorization(A, Re)
R2: Responsible-for (A,Re) ∧ Access (R,O,Re) → Authorization(A, Re, R, O)
R3: Responsible-for (A,Re) ∧ Access (R,O,Re) ∧ (O→J) → NOT(Authorization(A, Re, R, J))
        </p>
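        <p>Read as guarded facts, the three rules can be mirrored in a few lines: the predicates Responsible-for and Access become sets of tuples, and anything reachable only through a context link O→J is absent from the access facts and therefore denied. This is an assumed encoding for illustration, not the rule-engine rules used in the paper.</p>
        <preformat>
```python
def authorized(agent, resp, right, obj, responsible_for, access):
    """Check rules R1-R3 for a requested (right, obj) under responsibility resp.

    responsible_for: set of (agent, responsibility) pairs        -- R1 facts
    access:          set of (right, obj, responsibility) triples -- R2 facts
    """
    if (agent, resp) not in responsible_for:   # R1: agent must hold resp
        return False
    if (right, obj, resp) in access:           # R2: right tied to resp
        return True
    return False                               # R3: linked objects denied

rf = {("A1", "billing")}
ac = {("read", "invoice", "billing")}
print(authorized("A1", "billing", "read", "invoice", rf, ac))       # True
print(authorized("A1", "billing", "read", "bank_account", rf, ac))  # False (R3)
print(authorized("A2", "billing", "read", "invoice", rf, ac))       # False (R1)
```
        </preformat>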
      </sec>
      <sec id="sec-3-4">
        <title>3.2. Description of ABC mechanism</title>
        <p>In the ABC model, we define a distributed architecture in which sets of agents in
different groups interact with each other. Each group (container) represents the
set of Admin agents running in the same e-government agency. To access
objects in the system or to communicate with other agents, each Admin agent must be
authorized by the AP agent of its group. To keep control of the system, AP
agents use a Rule Base (RB) and a Knowledge Base (KB). The RB contains the set of
privacy rules, and the KB contains the knowledge in the system: the agents, their roles,
their responsibilities, their RR, their access rights and their resources. In the ABC model,
we suppose that, in case of failure, AP agents can switch roles dynamically with
Admin agents: the AP agent delegates the control of the system to the most trusted Admin
agent. This delegation decision is based on the computation of the Admin agents' honesty
degree, which is explained in the next section.</p>
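        <p>The failover policy described above, where the AP agent delegates control to the most trusted Admin agent, reduces to taking the maximum over the honesty degrees stored in the KB. A hypothetical sketch:</p>
        <preformat>
```python
def delegate_control(honesty):
    """On AP failure, delegate control to the Admin agent with the
    highest honesty degree. `honesty` maps agent id to h in [0, 1]."""
    if not honesty:
        raise ValueError("no Admin agent available for delegation")
    return max(honesty, key=honesty.get)

print(delegate_control({"A1": 0.4, "A2": 0.9, "A3": 0.7}))  # A2
```
        </preformat>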
      </sec>
      <sec id="sec-3-5">
        <title>3.3. Description of ABC techniques</title>
        <p>
          In this section, we define the techniques used for privacy protection in the ABC model: the
computation of the trust and honesty degrees. Formally, our ABC model
corresponds to the following set: {A, Re, Au, PR, T, R}, such that:
A: the set of agents in the system
Re: the set of agent responsibilities
Au: the set of authorizations
PR: the set of privacy rules
T: the trust degree reached in an e-government process. T is defined by Equation (1),
where k represents the total number of agents in the system and h represents the honesty
of agent j as estimated by agent i (see Fig. 2). h is defined by Equation (2).
After each transaction, an Admin agent i can give feedback to Admin agent j
according to the service quality of j. A feedback score S is then calculated as
S = P - N, where P is the number of positive feedbacks left by agents and N is the
number of negative feedbacks from agents. The S value is disclosed to all agents in
the system. This reputation model was presented in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. h decreases when the
agent performs unauthorized actions, of which we define two types:
unauthorized access to objects, and unauthorized communication with other agents.
We suppose that the honesty value lies between 0 and 1 and is disclosed to all agents in
the system. αj represents the interaction degree related to agent j; it denotes a weight
used to balance the value of T, since we must take into account that agents behave differently. In
our model, honest agents (having h = 1) are rewarded, whereas dishonest agents
(having h &lt; 0.5) are punished.
        </p>
        <p>We represent the risk degree associated with the use of personal information (R) by the
following:</p>
        <p>R = 1 - T .
(3)</p>
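        <p>Since Equations (1) and (2) themselves are not reproduced in this text, the sketch below implements only what the prose fixes exactly: the feedback score S = P - N and the risk R = 1 - T of Equation (3). The trust function shown is one plausible reading of Equation (1), an α-weighted mean of the honesty degrees over the k agents, and should be treated as an assumption.</p>
        <preformat>
```python
def feedback_score(p, n):
    """S = P - N: positive minus negative feedbacks, disclosed to all agents."""
    return p - n

def trust(honesty, alpha):
    """Assumed reading of Eq. (1): the alpha-weighted mean of the honesty
    degrees h_j over the k agents (exact formula not reproduced here)."""
    k = len(honesty)
    return sum(alpha[j] * honesty[j] for j in honesty) / k

def risk(t):
    """Eq. (3): R = 1 - T."""
    return 1 - t

h = {"A1": 1.0, "A2": 0.5}       # A1 honest, A2 below the reward line h = 1
alpha = {"A1": 1.0, "A2": 1.0}   # equal interaction degrees
t = trust(h, alpha)
print(t, risk(t))  # 0.75 0.25
```
        </preformat>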
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4 Simulation and experimental results</title>
      <p>
        Using agent technology, we can perform a "behavioral simulation" that enables us to
create a virtual image of reality. This is a powerful predictive
analysis tool that lets decision makers test their ideas via scenarios in an artificial
environment before implementing their decisions in the real world. In this section, we
present our Multi-Agent Based Simulation (MABS) of agents' behavior during the
company formation process in Tunisia. We chose this scenario because it is complex
(it involves numerous administrations and many interactions) and requires the collection
of much sensitive data at every step. We represent the governmental agencies involved in
this process by a set of twelve agents interacting with each other on behalf of
the user (the citizen), and we simulate their behavior during the whole process using
the ABC model. Our MABS is implemented using the JADE platform [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. We chose
JADE because it is a distributed platform and supports agent mobility, which is
important for a real application of the proposed model. For the
specification of the privacy rules, described in Section 3, we used the Jess rule engine [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]
to make authorization decisions based on both the agents' responsibilities and their access
rights. To realize our simulation, we made the following hypotheses:
- all agents are initially honest (h = 1);
- a simulation step corresponds to n successive interactions. In the following
simulation results, we assume that n = 10.
      </p>
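      <p>Under these hypotheses, one simulation step can be sketched as n = 10 successive interactions followed by an honesty update. The rate of unauthorized actions and the penalty value below are illustrative assumptions; the paper's actual update rule is Equation (2), which is not reproduced here.</p>
      <preformat>
```python
import random

def simulation_step(honesty, n=10, penalty=0.1, seed=0):
    """One MABS step: n successive interactions. An unauthorized action
    lowers the acting agent's honesty, clipped to the [0, 1] range."""
    rng = random.Random(seed)
    for _ in range(n):
        actor = rng.choice(sorted(honesty))
        if rng.random() > 0.7:   # illustrative rate of unauthorized actions
            honesty[actor] = max(0.0, round(honesty[actor] - penalty, 3))
    return honesty

agents = {f"A{i:02d}": 1.0 for i in range(1, 13)}  # twelve agents, all honest
simulation_step(agents)
```
      </preformat>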
      <p>At every simulation step, we evaluate Admin agent’s honesty and the trust value.</p>
      <p>The first experimental result is the evaluation of the trust degree reached while the
company formation process is running. The trust (versus risk) degree obtained
during nine steps of our simulation is plotted in Fig. 3.</p>
      <p>As observed, T increases notably during the simulation while R
decreases (given the dual relation between T and R).</p>
      <p>According to our model, honest agents (having h = 1) are placed in a trusty zone and
dishonest agents (having h &lt; 0.5) are placed in a non-trusty zone. As shown in Fig. 4 (a),
in the first step of our MABS only one agent is honest. However, when
we observe the agents' behavior during the following steps, the number of honest
agents increases (Fig. 4 (b) shows an example of this increase during step 4 of our
MABS).</p>
      <p>This suggests that agents placed in the non-trusty zone tend to behave like
honest agents in order to move to the trusty zone, which explains
the increase of the trust value during the simulation.</p>
    </sec>
    <sec id="sec-5">
      <title>5 A comparative study</title>
      <p>
        With regard to some standard evaluation criteria, such as the use of access control
mechanisms, the use of user preferences, trust evaluation metrics [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and the use
of anonymity techniques [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], our model is the one that
supports all of these criteria. The following table summarizes the comparison of
our model with the P3P and RBAC models. For each criterion, we attribute (+) to a
model that supports it and (-) to a model that does not.
      </p>
      <p>The use of agent technology in our work has several advantages.
Agent-based models provide a convincing approach to modeling real-world
behavior, thanks to their ability to explicitly model components of the real world such
as humans, organizations, etc. The dynamics of the real world, including
environmental, political and social behavior, can be captured within a software agent.
Multi-agent systems also make it possible to encapsulate private data, so
no additional security mechanisms are needed to ensure data protection. One of the
main characteristics of Multi-Agent Systems (MAS) is distribution: using MAS,
we can build a decentralized framework in which tasks are dispatched to the agents in the
system, so the control of data transfer and access can be distributed. We therefore
benefit from task delegation via agents, which is impossible with other
approaches.</p>
    </sec>
    <sec id="sec-6">
      <title>6 Conclusion</title>
      <p>In this paper, we proposed a new model for privacy enhancement in the e-government
context. This model enabled us to build an evaluative framework for the trust degree
reached by a given e-government process. In the context of e-government, it is crucial
to build a trust relationship to ensure and reinforce the adoption of e-government
systems by citizens. The proposed approach also has the benefit of relying on
only one trusted entity: the AP agent. For future work, we propose to enrich our
model by adding further privacy rules. We also plan to incorporate different types of
privacy-related risks, such as risks related to data collection, data
processing, data sharing, etc. Finally, we suggest managing task delegation between
Admin agents to improve the performance of e-government systems.</p>
      <p>Acknowledgments. This work has been supported by a common grant from the Tunisian
and French governments for the project STIC DGRSRT/INRIA entitled
"multi-agent coordination models for e-government systems".</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Goldberg</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wagner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brewer</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Privacy-Enhancing Technologies for the Internet</article-title>
          . In: COMPCON '97, Proceedings, pp.
          <fpage>103</fpage>
          --
          <lpage>109</lpage>
          . IEEE Press (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Wooldridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jennings</surname>
            ,
            <given-names>N.R.</given-names>
          </string-name>
          :
          <article-title>Intelligent agents: Theory and practice</article-title>
          .
          <source>The Knowledge Engineering Review</source>
          <volume>10</volume>
          (
          <issue>2</issue>
          ),
          <fpage>115</fpage>
          --
          <lpage>152</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Cranor</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , et al.:
          <article-title>The Platform for Privacy Preferences 1.1 (P3P1.1) Specification</article-title>
          . W3C (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Agrawal</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bhattacharya</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gupta</surname>
            ,
            <given-names>S.K.</given-names>
          </string-name>
          :
          <article-title>Protecting Privacy of Health Information through Privacy Broker</article-title>
          .
          <source>In: 39th International Conference on System Sciences, Proceedings</source>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Sandhu</surname>
            ,
            <given-names>R.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Coyne</surname>
            ,
            <given-names>E.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feinstein</surname>
            ,
            <given-names>H.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Youman</surname>
            ,
            <given-names>C.E.</given-names>
          </string-name>
          :
          <article-title>Role-based access control models</article-title>
          .
          <source>IEEE Computer 29 (2)</source>
          , pp.
          <fpage>38</fpage>
          --
          <lpage>47</lpage>
          (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Crépin</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vercouter</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jaquenet</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Demazeau</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boissier</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Hippocratic multi-agent systems</article-title>
          .
          <source>In: 10th International Conference of Entreprise Information Systems</source>
          , pp.
          <fpage>301</fpage>
          --
          <lpage>308</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <article-title>Organisation for Economic Co-operation and Development: OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data</article-title>
          (
          <year>1980</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Sydenham</surname>
            ,
            <given-names>P.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thorn</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Handbook of Measuring System Design</article-title>
          . John Wiley &amp; Sons, ISBN 0-470-02143-8 (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tai</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>A reputation and trust management broker framework for web applications</article-title>
          . In: IEEE International Conference on e-Technology, e-Commerce and e-Service, pp.
          <fpage>262</fpage>
          --
          <lpage>269</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Bellifemine</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bergenti</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Caire</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Poggi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>JADE - A Java Agent Development Framework</article-title>
          . In:
          <source>Multi-Agent Programming</source>
          , pp.
          <fpage>125</fpage>
          --
          <lpage>147</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Friedman-Hill</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          :
          <article-title>Jess, the Rule Engine for the Java Platform, Version 7.1p2</article-title>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>D.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Varadharajan</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          :
          <article-title>Trust Management Towards Service-Oriented Applications</article-title>
          . In:
          <source>IEEE International Conference on e-Business Engineering (ICEBE 2007)</source>
          , pp.
          <fpage>129</fpage>
          --
          <lpage>146</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Flegel</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>APES: Anonymity and Privacy in Electronic Services</article-title>
          .
          <source>Advances in Information Security</source>
          , vol.
          <volume>35</volume>
          , Springer US, pp.
          <fpage>171</fpage>
          --
          <lpage>176</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>