<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Video see-through in the clinical practice</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vincenzo Ferrari</string-name>
          <email>vincenzo.ferrari@endocas.org</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mauro Ferrari</string-name>
          <email>name.surname@med.unipi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Franco Mosca</string-name>
          <email>name.surname@med.unipi.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Centro EndoCAS, Università di Pisa</institution>
          ,
          <addr-line>+39 (0) 50 995689</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Centro EndoCAS, Università di Pisa</institution>
          ,
          <addr-line>+39 (0) 50 995689</addr-line>
        </aff>
      </contrib-group>
      <fpage>19</fpage>
      <lpage>24</lpage>
      <abstract>
        <p>In this paper we discuss the potentialities of, and the technological limits to be overcome for, the introduction into clinical practice of useful functionalities based on video see-through visualization, created by mixing virtual preoperative information, obtained by means of radiological images, with real live images of the patient, for procedures where the physician has to interact with the patient (palpation, percutaneous biopsy, catheterism, intervention, etc.). The detailed information contained in a volumetric dataset is fully used during the diagnostic phase, but is partially lost in the passage from the radiological department to the surgical department. In fact, surgeons generally plan interventions using only the limited information provided by the radiologist, consisting of the textual diagnosis coupled with a few significant 2D images selected from the volumetric dataset.</p>
      </abstract>
      <kwd-group>
        <kwd>Mixed reality</kwd>
        <kwd>surgical navigation</kwd>
        <kwd>general surgery</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        The application of the “computer assisted” model to the
patient workflow, consisting of computer aided diagnosis
(CAD) and computer aided surgery (CAS) technologies,
allows optimal use of medical datasets and overcomes the
above-cited limitations of the current clinical practice.
The 3D visualization of patient-specific virtual models of
anatomies [23; 24], extracted from medical datasets,
drastically simplifies the interpretation of exams
and provides benefits both in the diagnostic and in the
surgical planning phases. Computer assisted technologies
make it possible to augment real views of the patient,
grabbed by means of cameras, with virtual information [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. This
augmented-reality, or more generally mixed-reality, technique [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]
introduces many advantages for each task where the
physician has to interact with the patient (palpation,
introduction of a biopsy needle, catheterization, intervention,
etc.) [9; 10; 25].
Copyright © 2011 for the individual papers by the papers'
authors. Copying permitted only for private and academic
purposes. This volume is published and copyrighted by
the editors of EICS4Med 2011.
The next figure shows a binocular see-through mixed reality
system at work, implemented using an HMD (Head Mounted
Display) and external cameras [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
Implementing this kind of system generally requires
localizing the anatomy with respect to the real video source and
determining its projection model, in order to coherently mix
virtual and real scenarios. Localization can be done using
commercial tracking systems, which introduce additional costs
and logistic troubles in the traditional clinical scenario and show
large errors on soft tissues, while the projection model of
the video source can be calculated using theoretical
algorithms that impose some constraints on the real camera.
In the following, the problem is described in detail, together with
possible solutions to avoid the need for the tracker or to
improve the localization quality on soft tissues, taking into
account the limits of the current image sources used in
surgery.
      </p>
      <p>HOW TO OBTAIN A MIXED REALITY VIEW
The following picture essentially describes the video see-through concept.
Real video frames, grabbed by one or more real cameras, are mixed
with virtual objects not visible in the real scene and shown
on one or more displays. This virtual information can be obtained
using radiological images, as depicted in the next figure.
The use of volumetric scanners, like CT (Computed
Tomography) or MRI (Magnetic Resonance Imaging),
makes it possible to obtain a 3D virtual model of the anatomy [4; 6],
which can be loaded into a virtual scene, running on a
computer, and rendered from a point of view coherent with the
real point of view.</p>
      <p>
        The mixing of the real (2D) images with the virtual (2D)
rendered images can be done using a hardware video mixer
or by using the real images in the scene graph as foreground or
background [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The concept and the work to be done are
similar: in the first case the mixing is performed by external
hardware after the rendering of the virtual scene, while in
the second it is performed by the GPU during the rendering. Figure 4
shows this concept. The real camera acquires video frames
from the real environment (a spleen in this case). Video
frames are shown as the background of the virtual scene.
Virtual objects (green flashes in this case) are positioned in
the scene and rendered from a virtual camera.
      </p>
      <p>In order to obtain a coherent fusion we have to obtain a virtual
scene where:
virtual camera projection model ≈ the real one
virtual camera position ≈ the real one
virtual objects positions ≈ the real ones
The following paragraphs describe how to obtain
these three conditions.</p>
      <p>How to determine camera projection model
Line scan and telecentric cameras are used for particular
industrial applications, while for all visualization purposes,
including laparoscopy, the perspective projective camera is
the only one used, because it offers the images most similar
to human vision.</p>
      <p>Regarding the sensor, two technologies are predominant:
CCD (Charge Coupled Device) and CMOS
(Complementary Metal Oxide Semiconductor). In both cases
unitary elements (pixels) are disposed on a regular grid
(with fixed resolution).</p>
      <p>Each camera, composed of a projective optics and a grid
sensor, can be represented by the perspective projection matrix Mp,
which maps a generic 3D point Pc = [x, y, z, 1]T, in the camera
reference system, to the corresponding 2D point Pp = [u, v, 1]T
in the image reference system (fixed on the center of the sensor), i.e.:
Pp = Mp Pc    (1)
and is defined starting from the internal camera parameters
(f, Cx, Cy) as follows:
     | f  0  Cx  0 |
Mp = | 0  f  Cy  0 |    (2)
     | 0  0  1   0 |
where f is the focal distance and (Cx, Cy) are the
coordinates of the projection of Oc on the image
reference frame (with origin in OI).</p>
      <p>Other internal camera parameters parameterize the model of
the radial distortion, introduced by common lenses, by means
of which the projected point Pp is deviated to Pd.
The pixelization process is defined by the pixel dimensions
dx and dy and the image sensor dimensions Dx and Dy. These
internal parameters of the camera allow converting
measurements done on the image (in pixels) into real
measurements (in millimeters) and vice versa.</p>
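      <p>As a concrete illustration of equations (1) and (2), the projection of a 3D point with the matrix Mp can be sketched in a few lines of Python; this is a minimal numerical example with arbitrary illustrative values for f, Cx and Cy, not values taken from a real camera:
```python
import numpy as np

# Internal parameters (arbitrary illustrative values):
# focal distance f and principal point (Cx, Cy).
f, Cx, Cy = 800.0, 320.0, 240.0

# Perspective projection matrix Mp of equation (2).
Mp = np.array([
    [f,   0.0, Cx,  0.0],
    [0.0, f,   Cy,  0.0],
    [0.0, 0.0, 1.0, 0.0],
])

# A generic 3D point Pc = [x, y, z, 1]^T in the camera reference system.
Pc = np.array([0.1, -0.05, 2.0, 1.0])

# Equation (1): Pp = Mp Pc, then normalize by the homogeneous
# coordinate to obtain the 2D image point (u, v).
Pp = Mp @ Pc
u, v = Pp[0] / Pp[2], Pp[1] / Pp[2]
# u = f*x/z + Cx = 800*0.1/2.0 + 320 = 360
# v = f*y/z + Cy = 800*(-0.05)/2.0 + 240 = 220
```
The homogeneous division by Pp[2] is what makes the projection perspective: points farther from the camera map closer to the principal point (Cx, Cy).
      </p>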
      <p>
        All internal camera parameters can be determined in a
calibration phase by acquiring some images of a known
object in different positions with a fixed camera configuration
(in terms of diaphragm and camera focus) and using
calibration routines like the one described in [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ].
      </p>
      <p>These parameters have to be used to adjust the virtual
camera to the real one.</p>
      <p>Using traditional surgical endoscopes, a new camera
calibration and virtual camera adjustment is required
whenever either the optical zoom or the diaphragm opening
is changed. Another important source of error is the
mechanical joint between the optic and the camera body:
their relative movements can determine a shift of the
center of projection C of up to tens of pixels.</p>
      <p>How to localize the camera
Camera position and orientation can be obtained using a
tracker able to track a sensor mounted on the camera body,
as shown in the following figure.
The tracker provides in real time the transformation matrix T1
relative to the sensor. The calibration matrix Tc,
representing the relative transformation of the camera
viewpoint with respect to the sensorized frame, necessary to
determine the position and orientation of the camera projection
center Oc, can be computed using a sensorized
calibration grid. During the calibration, T1 and T2 are given
by the localization system, while the transformation T3 is
determined using computer vision methods that allow the
localization, in the camera reference frame, of objects with known
geometry (the sensorized calibration grid).</p>
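      <p>The chain of transformations above can be sketched with homogeneous 4x4 matrices. The sketch below assumes, for illustration only, that Tc is the fixed pose of the camera viewpoint with respect to the sensorized frame, so that the camera pose in the tracker frame is T1·Tc; the helper rt and all numerical values are invented for the example and do not come from the paper:
```python
import numpy as np

def rt(angle_z, t):
    """Homogeneous 4x4 roto-translation: rotation about Z, translation t."""
    c, s = np.cos(angle_z), np.sin(angle_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = t
    return T

# Synthetic ground truth, both expressed in the tracker reference frame:
# the sensorized frame pose T1 (given by the tracker) and the camera
# viewpoint pose (e.g. measured once via the sensorized calibration grid).
T1 = rt(0.3, [100.0, 20.0, 5.0])
T_camera = rt(0.5, [110.0, 25.0, 0.0])

# Calibration: the fixed matrix Tc (camera viewpoint with respect to the
# sensorized frame) is recovered from one simultaneous observation.
Tc = np.linalg.inv(T1) @ T_camera

# At run time, each new tracker reading T1_new gives the camera pose
# directly, without observing the grid again.
T1_new = rt(-0.2, [90.0, 10.0, 2.0])
T_camera_new = T1_new @ Tc
```
Because Tc is rigid and constant, it is computed once during calibration and reused for every subsequent tracker reading.
      </p>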
      <p>
        Another approach is to localize using directly the
video frames acquired by the cameras, as done in some
applications. Several computer vision libraries (OpenCV or
Halcon by MVTec) offer many tools for this purpose.
Using a single camera, we can localize objects with
known geometry or texturing [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] as in the case of EasyOn
by Seac02 (www.seac02.it). The localization accuracy is
sufficient for many applications, but the approach requires knowing in
advance the dimensions and the texture of a rigid object in
the scene (or of different objects rigidly linked together).
Interesting monoscopic solutions have been applied using
laparoscopic images: see-through systems applying
artificial markers on organs [SOFT TISSUE], recovering the
position of a needle [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] and the pose of surgical instruments
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>How to register the patient
In surgical applications, virtual objects, representing patient
anatomies, are acquired in the reference frame of the
radiological instrumentation just before or days before the
surgical procedure, whereas the intra-operative information
is related to the reference frame of the surgical room
(generally defined by means of a tracking system) during
the intervention.</p>
      <p>
        In the case of rigid objects like bones, a change of reference
frame, performed by aligning fiducial points or fiducial
surfaces acquired in the radiology department and in the
surgical room, can be enough [1; 3]. Deformations of the
fiducial structure composed of elements, such as the points of a
cloud or the points characterizing a surface, introduce
systematic errors in the registration. In order to minimize
the registration error, at least on the fiducial elements, each
fiducial point (or fiducial surface) has to be chosen in the
proximity of a steady element on the patient, and its
configuration has to be as replicable as possible [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
In the case of soft tissues, besides the change of reference
frame, there are many deformation effects to avoid or to
compensate, due to: change of patient decubitus,
change in bed configuration, physiological movements
(breathing, heart beating, gastrointestinal movements,
etc.), and constraints due to the radiological scanners (breath
hold, limb positions, etc.).
      </p>
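      <p>The point-based rigid registration between the two reference frames can be computed in closed form with the least-squares SVD method of Arun et al. [1]; below is a minimal numpy sketch with synthetic, noiseless fiducials (all numerical values are illustrative):
```python
import numpy as np

def rigid_registration(P, Q):
    """Least-squares rigid transform (R, t) with Q ~ R @ P + t,
    following the SVD method of Arun et al. [1]; P, Q are 3xN
    matrices of corresponding fiducial points."""
    cP = P.mean(axis=1, keepdims=True)      # centroids of the two clouds
    cQ = Q.mean(axis=1, keepdims=True)
    H = (P - cP) @ (Q - cQ).T               # covariance of centred fiducials
    U, _, Vt = np.linalg.svd(H)
    # Correction term to exclude reflections from the solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic fiducials in the radiological reference frame...
rng = np.random.default_rng(0)
P = rng.uniform(-50.0, 50.0, size=(3, 6))

# ...and the same fiducials measured in the surgical-room frame,
# related by a known rotation about Z and a translation.
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([[10.0], [-5.0], [3.0]])
Q = R_true @ P + t_true

R, t = rigid_registration(P, Q)
```
With noiseless fiducials the true transform is recovered exactly; with real measurements the same formula gives the least-squares optimum, and the residual is the registration error discussed above.
      </p>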
      <p>
        To reduce these movement effects we can employ practical
and useful artifices, used routinely by radiotherapists, who
meticulously reproduce during the treatment the patient settings
used in the planning room. Following their practice,
the bed positioning and shape used during the acquisition of the
medical dataset can be chosen according to the bed
configuration used inside the surgical room for the specific
intervention (considering the requirements of the
radiological device used and the type of intervention to be
performed). Furthermore, during the intervention, reproducing the exact
decubitus the patient had during the radiological scanning
requires obtaining the same relative position of the pelvis
and the thoracic cage. The realignment of these structures
needs immobilization devices and/or additional iterative
work in the surgical room in order to find a perfect
correspondence between the pre-operative and intra-operative
patient positioning [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>The use of intra-operative imaging devices like 3D RA
(Rotational Angiography), which could become widespread in the
near future thanks to their decreasing price and to the
possibility of being portable (Ziehm Vision FD Vario 3D or
Siemens ARCADIS Orbic 3D), makes it possible to avoid the change
of reference frame for each patient. These scanners,
positioned in the operating room, can be easily and
precisely calibrated with the localizer by means of sensors.
Furthermore, the acquisition of the anatomy directly on the
surgical bed dramatically simplifies the problem, by
removing the error due to the change of bed and patient
decubitus. This simplification will make it possible to obtain high
precision also on soft tissues. As proven by experimental
results, predictive models of organ motion due to breathing,
driven by simple intra-operative parameters like the trajectory
of a point on the patient skin or the time over the breathing
cycle, can be applied in the real surgical scenario [14; 22].</p>
      <p>
        ALTERNATIVE SOLUTIONS
Head mounted tracker-free stereoscopic video see-through
Depth perception can be drastically increased using head
mounted stereoscopic devices [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], which allow the evaluation of
object depth, as in natural binocular vision.
The use of localized head mounted displays (HMD), like
the one shown in figure 1, allows the user to see a synthetic scene
from a point of view aligned with the real user’s point of
view.
      </p>
      <p>For the implementation of head mounted mixed reality
systems, the video see-through approach, based on the
acquisition of real images by means of external cameras, is
preferable to the optical see-through approach, which projects
virtual information on semi-transparent glasses. This is due
to the fact that the tracking of eye movements, strictly required
for the optical see-through approach, is very difficult to
perform with sufficient precision [16; 18]. On the
contrary, head tracking, required for the video see-through
approach, can be performed with high precision using
external localizers based on different technologies [2; 12],
as described before.</p>
      <p>
        We implemented a head mounted stereoscopic video
see-through system that does not require the use of an external
localizer to track head movements [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Our system
implements mixed reality by aligning the virtual and
real scenes in real time, using only geometric information
extracted directly from pairs of camera images by segmenting
coloured markers attached to the patient’s skin.
      </p>
      <p>
        Epipolar geometry [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], using two or more cameras, makes it possible
to detect the 3D position of conjugate points
identifiable in the images. In a stereoscopic configuration,
knowing the internal camera parameters, for each marker
position in the image plane the relative projection line in
the 3D world, defined as the line l passing through the
camera center of projection Oc and lying on the point Pc, is
determined. These steps, performed on both left and right
images, identify two projection lines ll and lr.
Knowing the relative pose of the right camera with respect to the left
camera (expressed by a roto-translation matrix determinable
in a calibration phase), the 3D position of each marker is
then defined as the intersection point between ll and lr.
Since ll and lr generally do not intersect (due to the pixelization
process and to noise in marker identification), the 3D marker
position is approximated with the point closest to
both projection lines. After fiducial localization, a rigid
registration is performed using a point-based approach.
Results demonstrate that the stereoscopic localization
approach adopted in our system is accurate enough for system
usability.
      </p>
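      <p>The closest-point approximation described above (the midpoint of the common perpendicular segment between the two projection lines) can be sketched as follows; the camera centers and the marker position are synthetic values chosen for illustration only:
```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Approximate the intersection of two 3D projection lines (origin o,
    direction d) with the midpoint of their common perpendicular segment."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = o1 - o2
    b, d, e = d1 @ d2, d1 @ w, d2 @ w
    # Parameters of the closest points o1 + s*d1 and o2 + t*d2,
    # obtained by minimizing the squared distance between the lines.
    s = (b * e - d) / (1.0 - b * b)
    t = (e - b * d) / (1.0 - b * b)
    return 0.5 * ((o1 + s * d1) + (o2 + t * d2))

# Synthetic marker at X, seen from the left and right camera
# projection centers Ol and Or (baseline along the X axis).
X = np.array([10.0, 5.0, 100.0])
Ol = np.array([0.0, 0.0, 0.0])
Or = np.array([60.0, 0.0, 0.0])

# With exact projection lines the midpoint coincides with the marker;
# with pixelization noise it becomes the best least-squares estimate.
P = triangulate_midpoint(Ol, X - Ol, Or, X - Or)
```
In the real system, o1 and o2 are the two camera projection centers (related by the calibrated roto-translation) and the directions come from the segmented marker positions in the left and right images.
      </p>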
      <p>
        Laparoscope auto-localization
As described before, localization using monoscopic
cameras can be done in the case of objects with known
geometry or texturing. In the case of laparoscopic interventions,
the localization of the endoscopic camera can be
determined using information offered by the endoscopic video
images, without the introduction of any artificial add-on in
the scenario [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>The position and orientation of the endoscopic camera can
be determined, with respect to a reference frame fixed to
the access-port configuration, by elaborating the video images
and knowing the distances between the insertion points. During
laparoscopic interventions, camera movements are minor
with respect to instrument movements; therefore the
laparoscope can be considered steady in a time interval,
and a reference frame fixed on the camera can be used to
perform measurements [21; 28].</p>
      <p>
        The projections of the instrument axes on the image plane
(projection lines), which can be simply determined using
the HSV color space and the Hough transform [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ], are
constrained to pass through the projection of the insertion
point on the image plane [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] (figure 8).
      </p>
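      <p>The intersection of two projection lines on the image plane can be computed conveniently in homogeneous 2D coordinates, where both the line through two points and the intersection of two lines are cross products. The sketch below is illustrative only: the lines are synthetic and constructed to pass exactly through an invented insertion-point projection, with no segmentation noise:
```python
import numpy as np

def line_through(p, q):
    """Homogeneous 2D line through image points p and q (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous 2D lines, back in pixel coordinates."""
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Synthetic example: three projection lines of one instrument axis, all
# passing through the projection of the insertion point at (400, 300).
ip = np.array([400.0, 300.0])
dirs = ([1.0, 0.2], [0.5, 1.0], [-1.0, 0.6])
lines = [line_through(ip, ip + np.array(d)) for d in dirs]

# Barycentre of the pairwise intersections: with noisy lines this
# averages out segmentation errors in the insertion-point estimate.
pairs = [(0, 1), (0, 2), (1, 2)]
pts = np.array([intersection(lines[i], lines[j]) for i, j in pairs])
estimate = pts.mean(axis=0)
```
With noiseless lines the barycentre coincides with the true projection; with Hough-detected lines it is a simple robust estimate of the insertion-point projection.
      </p>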
      <p>The insertion point projection on the image plane can be
calculated, for each instrument, as the barycentre of the
intersections of couples of projection lines. This allows (after
camera calibration) the determination of the direction of the
insertion point in the camera reference frame (Fig. 9, left).
Therefore, the versors Tl and Tr, representing respectively the
directions of the left and right instrument insertion points, are
determined. The versor Tc, representing the direction of the
camera insertion point, lies on the Z axis of the camera
reference frame (using a 0 degree optic).
The geometrical relations between Tl, Tr, Tc and the insertion
points are shown on the right of figure 9. In the figure, lc, ll
and lr represent the distances of the insertion points from the
camera origin, which have to be chosen in order to guarantee
the distances between the access ports d1, d2 and d3. The
tetrahedral configuration makes it possible to univocally determine lc,
ll and lr and consequently, having Tl, Tr and Tc, to localize
the access ports with respect to the camera (and vice versa).
The localization accuracy depends on the instruments
configuration and on their movements. The proposed
solution provides a cheap and tracker-free
implementation for a class of computer assisted surgical
systems that do not require extremely accurate localization:
for example, offering 3D pre-operative model visualization
with automatic point of view selection, and remote
assistance using virtual objects on the laparoscopic monitor.</p>
      <p>CONCLUSIONS
The development of video see-through systems is useful
and possible using various approaches.</p>
      <p>In order to reduce misalignment errors between the real and
the virtual world when using commercial trackers, it would be
necessary, in the future, to develop endoscopic cameras that
take the previous considerations into account:
endoscopes should natively integrate sensors for their
localization, and manufacturers should take into account the
stability of the joint between the optic and the camera body.
On the other hand, it is possible to develop
tracker-free implementations that elaborate camera images,
reducing the costs and the logistic troubles related to the
need for sensors and a tracker in the operating room.
The use of intra-operative imaging devices like 3D RA,
which could become widespread in the near future thanks to
their decreasing price and to the possibility of being portable,
will make it possible to obtain high precision in see-through systems
also in the case of soft tissues.
</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Arun</surname>
            ,
            <given-names>K. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            ,
            <given-names>T. S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Blostein</surname>
            ,
            <given-names>S. D.</given-names>
          </string-name>
          (
          <year>1987</year>
          ).
          <article-title>Least-squares fitting of two 3-D point sets</article-title>
          .
          <source>IEEE Trans. Pattern Anal. Mach</source>
          . Intell.,
          <volume>9</volume>
          (
          <issue>5</issue>
          ),
          <fpage>698</fpage>
          -
          <lpage>700</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Baillot</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Julier</surname>
            ,
            <given-names>S. J.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>A tracker alignment framework for augmented reality</article-title>
          ,
          <source>In Proc. Second IEEE and ACM International Symposium on Mixed and Augmented Reality</source>
          (pp.
          <fpage>142</fpage>
          -
          <lpage>150</lpage>
          ): IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Besl</surname>
            ,
            <given-names>P. J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>McKay</surname>
            ,
            <given-names>N. D.</given-names>
          </string-name>
          (
          <year>1992</year>
          ).
          <article-title>A Method for Registration of 3</article-title>
          -
          <string-name>
            <given-names>D</given-names>
            <surname>Shapes</surname>
          </string-name>
          .
          <source>IEEE Trans. Pattern Anal. Mach</source>
          . Intell.,
          <volume>14</volume>
          (
          <issue>2</issue>
          ),
          <fpage>239</fpage>
          -
          <lpage>256</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Coll</surname>
            ,
            <given-names>D. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Uzzo</surname>
            ,
            <given-names>R. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Herts</surname>
            ,
            <given-names>B. R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Davros</surname>
            ,
            <given-names>W. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wirth</surname>
            ,
            <given-names>S. L.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Novick</surname>
            ,
            <given-names>A. C.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>3-dimensional volume rendered computerized tomography for preoperative evaluation and intraoperative treatment of patients undergoing nephron sparing surgery</article-title>
          .
          <source>J Urol</source>
          ,
          <volume>161</volume>
          (
          <issue>4</issue>
          ),
          <fpage>1097</fpage>
          -
          <lpage>1102</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Doignon</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nageotte</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maurin</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Krupa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Model-based 3-D pose estimation and feature tracking for robot assisted surgery with medical imaging, From Features to Actions - Unifying Perspectives in Computational and Robot Vision</article-title>
          , Workshop at the IEEE Int.
          <source>Conf. on Robotics and Automation.</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carbone</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cappelli</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boni</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cuschieri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pietrabissa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al. (
          <year>2010</year>
          ).
          <article-title>Improvements of MDCT images segmentation for surgical planning in general surgery - practical examples</article-title>
          .
          <source>Proceedings of the International Congress and Exhibition. IJCARS</source>
          Volume
          <volume>5</volume>
          , Supplement 1 / June.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Megali</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pietrabissa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Mosca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Laparoscope 3D auto-localization</article-title>
          .
          <source>Proceedings of the International Congress and Exhibition. IJCARS</source>
          Volume
          <volume>4</volume>
          , Supplement 1 / June.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Megali</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Troia</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pietrabissa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Mosca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>A 3-D mixed-reality system for stereoscopic visualization of medical dataset</article-title>
          .
          <source>IEEE Trans Biomed Eng</source>
          ,
          <volume>56</volume>
          (
          <issue>11</issue>
          ),
          <fpage>2627</fpage>
          -
          <lpage>2633</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Freschi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Porcelli</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pugliese</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morelli</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          , et al. (
          <year>2010</year>
          ).
          <article-title>An Augmented Reality Navigation Guidance for High Intensity Focused Ultrasound Treatment</article-title>
          . Paper presented at the Conf Proc ICABB, International Conference on Applied Bionics and Biomechanics 2010, Venice, Italy.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Freschi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Troia</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Megali</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pietrabissa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Mosca</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Ultrasound guided robotic biopsy using augmented reality and human-robot cooperative control</article-title>
          .
          <source>Conf Proc IEEE Eng Med Biol Soc</source>
          ,
          <year>2009</year>
          ,
          <fpage>5110</fpage>
          -
          <lpage>5113</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Gao</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ahuja</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Single camera stereo using planar parallel plate</article-title>
          ,
          <source>17th International Conference on Pattern Recognition (ICPR'04)</source>
          Volume
          <volume>4</volume>
          (pp.
          <fpage>108</fpage>
          -
          <lpage>111</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Genc</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sauer</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wenzel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuceryan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Navab</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Optical see-through HMD calibration: A stereo method validated with a video see-through system</article-title>
          ,
          <source>International Symposium on Augmented Reality.</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Hartley</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Zisserman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <source>Multiple View Geometry in Computer Vision</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Hawkes</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Penney</surname>
            ,
            <given-names>G. P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Atkinson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barratt</surname>
            ,
            <given-names>D. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Blackall</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Carter</surname>
            ,
            <given-names>T. J.</given-names>
          </string-name>
          , et al. (
          <year>2007</year>
          ).
          <article-title>Motion and Biomechanical Models for Image-Guided Interventions</article-title>
          ,
          <source>ISBI</source>
          (pp.
          <fpage>992</fpage>
          -
          <lpage>995</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Hinson</surname>
            ,
            <given-names>W. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kearns</surname>
            ,
            <given-names>W. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ellis</surname>
            ,
            <given-names>T. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sprinkle</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cullen</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Smith</surname>
            ,
            <given-names>P. G.</given-names>
          </string-name>
          , et al. (
          <year>2007</year>
          ).
          <article-title>Reducing set-up uncertainty in the elekta stereotactic body frame using stealthstation software</article-title>
          .
          <source>Technology in Cancer Research &amp; Treatment</source>
          ,
          <volume>6</volume>
          (
          <issue>3</issue>
          ),
          <fpage>181</fpage>
          -
          <lpage>186</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Hua</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krishnaswamy</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Rolland</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          <article-title>Video-based eyetracking methods and algorithms in head-mounted displays</article-title>
          .
          <source>Optics Express</source>
          ,
          <volume>14</volume>
          ,
          <fpage>4328</fpage>
          -
          <lpage>4350</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Philip</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lewis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hawkes</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Depth perception of stereo overlays in image-guided surgery</article-title>
          ,
          <source>Medical Imaging, Proceedings of the SPIE</source>
          , Volume
          <volume>5372</volume>
          , pp.
          <fpage>263</fpage>
          -
          <lpage>272</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>E. C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Park</surname>
            ,
            <given-names>K. R.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>A robust eye gaze tracking method based on a virtual eyeball model</article-title>
          .
          <source>Machine Vision and Applications</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Megali</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Freschi</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morabito</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cavallo</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turini</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , et al. (
          <year>2008</year>
          ).
          <article-title>EndoCAS navigator platform: a common platform for computer and robotic assistance in minimally invasive surgery</article-title>
          .
          <source>The International Journal of Medical Robotics and Computer Assisted Surgery</source>
          ,
          <volume>4</volume>
          ,
          <fpage>242</fpage>
          -
          <lpage>251</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Milgram</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kishino</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>1994</year>
          ).
          <article-title>A Taxonomy of Mixed Reality Visual Displays</article-title>
          .
          <source>IEICE transactions on information and systems</source>
          ,
          <volume>77</volume>
          (
          <issue>12</issue>
          ),
          <fpage>1321</fpage>
          -
          <lpage>1329</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Nageotte</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zanne</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Doignon</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>De Mathelin</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Visual Servoing-Based Endoscopic Path Following for Robot-Assisted Laparoscopic Surgery</article-title>
          ,
          <source>International Conference on Intelligent Robots and Systems (IROS).</source>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Olbrich</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Traub</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wiesner</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wichert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feussner</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Navab</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          (
          <year>2005</year>
          ).
          <article-title>Respiratory Motion Analysis: Towards Gated Augmentation of the Liver</article-title>
          ,
          <source>CARS 2005 Computer Assisted Radiology and Surgery, 19th International Congress and Exhibition.</source>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Peters</surname>
            ,
            <given-names>T. M.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Image-guidance for surgical procedures</article-title>
          .
          <source>Physics in Medicine and Biology</source>
          ,
          <volume>51</volume>
          (
          <issue>14</issue>
          ),
          <fpage>R505</fpage>
          -
          <lpage>R540</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Peters</surname>
            ,
            <given-names>T. M.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Image-guided surgery: from X-rays to virtual reality</article-title>
          .
          <source>Computer Methods in Biomechanics and Biomedical Engineering</source>
          ,
          <volume>4</volume>
          (
          <issue>1</issue>
          ),
          <fpage>27</fpage>
          -
          <lpage>57</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Pietrabissa</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morelli</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peri</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferrari</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moglia</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , et al. (
          <year>2010</year>
          ).
          <article-title>Mixed reality for robotic treatment of a splenic artery aneurysm</article-title>
          .
          <source>Surg Endosc</source>
          ,
          <volume>24</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1204</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Shuhaiber</surname>
            ,
            <given-names>J. H.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>Augmented Reality in Surgery</article-title>
          .
          <source>Archives of Surgery</source>
          ,
          <volume>139</volume>
          ,
          <fpage>170</fpage>
          -
          <lpage>174</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Tonet</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ramesh</surname>
            ,
            <given-names>T. U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Megali</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Dario</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Tracking endoscopic instruments without localizer: a shape analysis-based approach</article-title>
          .
          <source>Computer Aided Surgery</source>
          ,
          <volume>12</volume>
          (
          <issue>1</issue>
          ),
          <fpage>35</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Voros</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Long</surname>
            ,
            <given-names>J.-A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cinquin</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Automatic Localization of Laparoscopic Instruments for the Visual Servoing of an Endoscopic Camera Holder</article-title>
          ,
          <source>MICCAI (1)</source>
          (pp.
          <fpage>535</fpage>
          -
          <lpage>542</lpage>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Wengert</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bossard</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Baur</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szekely</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cattin</surname>
            ,
            <given-names>P. C.</given-names>
          </string-name>
          (
          <year>2008</year>
          ).
          <article-title>Endoscopic navigation for minimally invasive suturing</article-title>
          .
          <source>Comput Aided Surg</source>
          ,
          <volume>13</volume>
          (
          <issue>5</issue>
          ),
          <fpage>299</fpage>
          -
          <lpage>310</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>A Flexible New Technique for Camera Calibration</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>22</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1330</fpage>
          -
          <lpage>1334</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>