A surgical assistance system for transcatheter aortic valve implantation based on a magic lens concept

S. Franke¹, D. Schulz¹, J. Seeburger², B. Preim³, T. Neumuth¹

¹ Innovation Center Computer Assisted Surgery, Universität Leipzig, Leipzig, Germany
² Herzzentrum Leipzig, Universität Leipzig, Leipzig, Germany
³ Institut für Simulation und Graphik, Otto-von-Guericke-Universität Magdeburg, Magdeburg, Germany

Contact: stefan.franke@iccas.de

Abstract:

In general, minimally invasive procedures are less stressful for the patient. However, the surgeon does not have a clear view of the surgical field. Technical systems could assist the surgeon in orientation based on preoperative images. We developed a surgical assistance system for transcatheter aortic valve implantation to address these issues. The system is based on a Magic Lens concept. It combines tracking technology and visualization on a mobile display in real time. A prototype was implemented to demonstrate the context- and focus-dependent presentation of patient information. The system allowed intuitive interaction with preoperatively acquired patient data during the intervention. A preliminary user study with seventeen cardiac surgeons was conducted to evaluate the interaction concept and the potential acceptance of such a system. The study results indicated the strong potential of the proposed concept and provided important hints for the further development of the technique.

Keywords: Surgical assistance system, aortic valve replacement, Augmented Reality, Magic Lens

1 Introduction

The number of minimally invasive procedures has increased significantly in recent years in several clinical disciplines. In cardiac surgery, more than sixteen thousand aortic valve replacements are performed in Germany per year. Approximately thirty percent of these interventions were performed as Transcatheter Aortic Valve Implantation (TAVI) [1, 2]. In general, minimally invasive procedures are less stressful for the patient. However, the surgeon does not have a clear view of the surgical field. Technical systems could assist the surgeon in orientation based on preoperative images. Techniques of Augmented Reality might be used in surgical assistance systems to allow easy interaction with preoperative data in direct relation to the surgical area. We propose a novel surgical assistance system to address these issues, with a focus on access planning. The clinical use case of TAVI was used to demonstrate the feasibility of the concept.

2 Material and Methods

The developed assistance system was based on the concept of a Magic Lens. The Magic Lens paradigm was introduced by Bier et al. [3] in 1993 for graphical user interface elements. Later, physical lenses [4] and extensions to the Magic Lens concept [5] were proposed. A Magic Lens provides additional information in relation to its position and the object of interest. The visualization of this information essentially depends on context and focus. This is common to many Augmented Reality applications using head-mounted displays [6] and video overlays [7]; however, these do not provide tactile interaction. The surgical situation, i.e. the patient and his or her anatomy, formed the context of the assistance system. Hence, the visualization focused on blood vessels, the heart and other relevant anatomical structures. These structures were not directly visible to the surgeon but constituted the additional information to be displayed by the Magic Lens. The visualization was designed to be focus dependent.
In this way, the lens was able to display information in relation to the patient and with respect to the view of the surgeon. This allowed a direct mental integration of the virtual and the real patient anatomy.

Several requirements that have to be fulfilled for context and focus dependence were identified. The system needed to combine tracking and visualization in real time. The focus aspect required stable, accurate tracking of the lens and marker-free, robust tracking of the surgeon. Additionally, the system needed to cope with multiple persons and partial occlusion in the field of view. The visualization on the lens display finally combined the tracking information with preoperative patient data, i.e. CT data, to provide a real-time rendering that respects the field of view of the surgeon. The context aspect required a clear visualization of the patient anatomy. In particular, occlusion of anatomical structures had to be resolved by adapted visualization techniques. Furthermore, an easy adaptation of the visualization to different surgical tasks was required. The Magic Lens system was designed for transfemoral as well as transapical approaches. Hence, the visualization had to address various surgical issues: provide an overview, present the structure and course of vessels in relation to bone structures, and show calcifications.

Preoperatively, a segmentation of the patient image data had to be generated, including all relevant anatomical structures. At least skin, bones, heart and the main vessels with calcifications were required. Additional segmentations could be displayed if present. However, the preoperative pipeline is currently not in the focus of the project. A freely available dataset of the Institute of Research against Digestive Cancer at the University of Strasbourg was used for development and evaluation.

The intraoperative setup consisted of three major components: a tracking system, a workstation and an active display used as a lens. Accurate tracking of the lens and robust tracking of the surgeon were required. Markers were not applicable for person tracking in the OR. Hence, a standard marker-based tracking system (NDI Polaris) was combined with a consumer depth camera (Microsoft Kinect). Both devices were attached and registered to each other. This allowed a marker-based lens tracking and a robust surgeon tracking in a shared coordinate system. The acquired tracking data were streamed to the workstation using a TiCoLi-based OR bus implementation [8]. The workstation calculated the view to be rendered on the lens display. To this end, the tracking data were combined with a patient registration and the display size to generate a sheared camera in the virtual patient anatomy scene. The rendered image was streamed to the lens over a wireless network and displayed there.
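One common way to realise such a sheared camera is a generalized off-axis projection computed per frame from the tracked head position and the lens pose. The following Python sketch illustrates this under the assumption that the three display corners of the lens and the surgeon's head position have already been transformed into the shared (patient-registered) coordinate system; the function and variable names are illustrative and not taken from the original implementation.

import numpy as np

def off_axis_frustum(pa, pb, pc, pe, near, far):
    """Off-axis ("sheared") projection for a tracked display.

    pa, pb, pc -- lower-left, lower-right and upper-left corners of the
                  lens display in the shared coordinate system (metres)
    pe         -- tracked eye/head position of the surgeon
    Returns the 4x4 projection matrix and the rotation whose rows are the
    display-aligned camera axes.
    """
    # Orthonormal basis spanned by the display plane
    vr = pb - pa; vr /= np.linalg.norm(vr)           # right
    vu = pc - pa; vu /= np.linalg.norm(vu)           # up
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # normal towards viewer

    # Vectors from the eye to the display corners
    va, vb, vc = pa - pe, pb - pe, pc - pe
    d = -np.dot(va, vn)                               # eye-to-plane distance

    # Frustum extents on the near plane -- this asymmetry is the "shear"
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d

    proj = np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])
    rot = np.array([vr, vu, vn])   # complete the view matrix with -pe translation
    return proj, rot

Per frame, the display corners would follow from the Polaris pose of the lens and its known physical size, while the eye position would be taken from the Kinect head joint transformed into the shared coordinate system.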
The visualization pipeline was implemented based on the Visualization Toolkit (VTK). The reconstructed 3D models of the anatomical structures were used to render the view of the Magic Lens. Four presets were defined to adapt the visualization to the surgical situation. The default preset showed all available models of anatomical structures to provide an overview. None of the structures except the skin were displayed opaquely, so that their positions relative to one another were clearly presented. The transapical preset focused on the apex of the heart. The bones and the apex were displayed opaquely, whereas all other structures were displayed semi-transparently. The visualization was combined with silhouette rendering to support the differentiation of structures and the depth impression. A detail of the patient model displayed semi-transparently with silhouettes is depicted in figure 1.

The transfemoral preset was designed to present the vascular system. The vessels and the bones were rendered opaquely. The remaining structures were again displayed semi-transparently with silhouettes. This allowed a clear view of the vascular structures and their course. The preset was useful if the vessel planned for access could not be palpated manually.

Figure 1: Semi-transparent rendering with silhouettes.

Finally, a calcification preset was defined. This preset emphasized the calcifications with semi-transparent vessel structures. Heavy calcification at the incision site is a risk factor for transcatheter procedures. Hence, the calcification preset supported finding a suitable incision point.

The selection of anatomical structures to display, as well as all their visualization parameters, including color, opacity and silhouettes, could be changed intraoperatively at the workstation. However, sterile interaction was required for the surgeon. The surgeon could directly control the focus by lens and head movement. The person with a hand closest to the lens was assumed to be the one interacting with the lens. The visualization was automatically adapted to the corresponding viewpoint. This increased the flexibility because it was possible to hand over the lens.

Additionally, the lens itself had a touch display to capture user input. The interaction needed to be simple and intuitive. Hence, the display was split into three areas. The presets could be changed at the left and right borders, switching forward and backward through the four presets. At the center of the display, the surgeon could toggle another mode. This mode allowed setting a clipping plane by moving the lens. The dataset, with the exception of the vessels and the heart, was clipped transversally. This provided an additional possibility to avoid disturbing occlusions.
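As a concrete illustration of the presets described above, the following Python sketch shows how one anatomical structure could be rendered with VTK as a semi-transparent surface plus a silhouette actor. Structure names, colors and opacities are assumptions for illustration only and do not reproduce the exact parameters of the prototype.

import vtk

def make_actor(poly_data, color, opacity, renderer, silhouette=True):
    """Add a surface actor and, optionally, its silhouette to the renderer."""
    mapper = vtk.vtkPolyDataMapper()
    mapper.SetInputData(poly_data)
    actor = vtk.vtkActor()
    actor.SetMapper(mapper)
    actor.GetProperty().SetColor(*color)
    actor.GetProperty().SetOpacity(opacity)
    renderer.AddActor(actor)

    if silhouette:
        # Outline rendering keeps semi-transparent structures distinguishable
        # and supports the depth impression.
        sil = vtk.vtkPolyDataSilhouette()
        sil.SetInputData(poly_data)
        sil.SetCamera(renderer.GetActiveCamera())
        sil_mapper = vtk.vtkPolyDataMapper()
        sil_mapper.SetInputConnection(sil.GetOutputPort())
        sil_actor = vtk.vtkActor()
        sil_actor.SetMapper(sil_mapper)
        sil_actor.GetProperty().SetColor(*color)
        sil_actor.GetProperty().SetLineWidth(2)
        renderer.AddActor(sil_actor)
    return actor

renderer = vtk.vtkRenderer()
# Example (transapical preset): bones and apex opaque, heart semi-transparent.
# The poly data would come from the preoperative segmentation pipeline.
# make_actor(bones_polydata, (0.9, 0.9, 0.8), 1.0, renderer, silhouette=False)
# make_actor(heart_polydata, (0.8, 0.3, 0.3), 0.3, renderer)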
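The sterile interaction scheme can likewise be summarised by two small routines: selecting the interacting person as the one whose hand is closest to the lens, and mapping the three touch areas to preset switching and the clipping mode. The following Python sketch is a minimal illustration; the joint names, the normalised touch coordinate and the equal thirds of the display are assumptions, not details from the prototype.

import numpy as np

def select_interacting_user(skeletons, lens_position):
    """skeletons: list of dicts with 'left_hand' / 'right_hand' positions
    already transformed into the shared coordinate system."""
    def hand_distance(s):
        return min(np.linalg.norm(s['left_hand'] - lens_position),
                   np.linalg.norm(s['right_hand'] - lens_position))
    return min(skeletons, key=hand_distance) if skeletons else None

def handle_touch(x_norm, state):
    """x_norm: horizontal touch position normalised to [0, 1]."""
    if x_norm < 1 / 3:              # left border: previous preset
        state['preset'] = (state['preset'] - 1) % 4
    elif x_norm > 2 / 3:            # right border: next preset
        state['preset'] = (state['preset'] + 1) % 4
    else:                           # centre: toggle the clipping-plane mode
        state['clipping_enabled'] = not state['clipping_enabled']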
3 Results

A prototype of the Magic Lens for cardiac surgery was implemented. The setup in the demonstration OR is shown in figure 2 on the left. The infrared tracking camera with the attached Microsoft Kinect (A) is shown in the background. Next to it, the graphical user interface of the workstation (B) is visible. The lens (C) is implemented using a tablet located at the patient dummy (D).

Figure 2: The technical setup of the Magic Lens (left) and an image with person tracking (right).

The right-hand side of figure 2 shows an image of the Microsoft Kinect camera with the tracked skeleton of the user. The lens view adapted to the user's head and lens movements in real time on standard hardware. The view was also sheared according to the viewing angle of the user. Hence, the Magic Lens system provided the field of view the user would have through the lens. The person tracking was very robust against partial occlusion caused by the OR table and the patient. Additionally, the marker-based tracking of the lens was very stable. The mobile view showed practically no disturbing jitter. A photo of the Magic Lens in use and the corresponding visualization with the default preset are depicted in figure 3.

Figure 3: Photo of the Magic Lens in use (left) and the corresponding visualization (right).

A preliminary user study was conducted to evaluate the potential of the proposed intraoperative assistance. Seventeen cardiac surgeons from the Herzzentrum Leipzig tested the prototype under laboratory conditions. A subsequent questionnaire focused on visualization quality, user interaction and general acceptance. Visualization and interaction were rated well on average. All surgeons indicated that they would use such an assistance system at least in complicated cases. However, the results also pointed to additional features that might be useful, such as see-through functionalities and overlays with preoperative planning data [9] and intraoperative imaging modalities.

4 Discussion

We proposed a novel surgical assistance system for minimally invasive cardiac surgery based on the concept of a Magic Lens. The current prototype provided context- and focus-dependent anatomical information to the surgeon. Thus, it fulfilled the two basic requirements of the Magic Lens concept. The prototype demonstrated a way of interactively integrating preoperative patient data into the surgical area. In contrast to common stationary systems, the interaction is intuitive and reduces the surgeon's workload for mentally integrating the information. However, there are still some limitations that need to be overcome. The preoperative segmentation workload needs to be reduced for clinical routine. Additionally, a surface-based registration technique is required because of the lack of stable anatomical landmarks. Nonetheless, the preliminary user study indicated the potential usefulness of the proposed concept. The intuitive way of interaction by movements and by switching through the presets contributed to the acceptance of the designed system. The next important steps towards use in operating rooms will be an enhanced patient registration and accuracy measurements.

5 Conclusion

The implemented assistance system demonstrates a promising approach to interacting with preoperative patient datasets during the intervention. The surgeon directly interacts with the data in an intuitive way and is relieved of the task of mentally integrating the data with the surgical area. Thus, the system might contribute to patient safety if the additional workload can be minimized to allow use in clinical routine. Although the prototype was designed for minimally invasive cardiac surgery, the basic concept can be applied to different use cases in several clinical disciplines.

6 Acknowledgements

ICCAS is funded by the German Federal Ministry of Education and Research (BMBF) and the Saxon Ministry of Science and Fine Arts (SMWK) within the scope of the Unternehmen Region initiative with grant number 03Z1LN12, and by the European Regional Development Fund (ERDF) and the state of Saxony within the frame of measures to support the technology sector.
7 References

[1] A.-K. Funkat, A. Beckmann, J. Lewandowski, M. Frie, W. Schiller, M. Ernst, K. Hekmat, Cardiac Surgery in Germany during 2011: A Report on Behalf of the German Society for Thoracic and Cardiovascular Surgery, The Thoracic and Cardiovascular Surgeon, 60(06), 371-382 (2012)
[2] F. W. Mohr, J. Garbade, Alternative Zugangswege und minimalinvasive Herzchirurgie, in: Herzchirurgie, 665-690, ISBN 978-3540797128, Springer, Berlin Heidelberg (2010)
[3] E. A. Bier, M. C. Stone, K. Pier, W. Buxton, T. D. DeRose, Toolglass and magic lenses: The see-through interface, SIGGRAPH '93 Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, 73-80 (1993)
[4] G. W. Fitzmaurice, Situated information spaces and spatially aware palmtop computers, Communications of the ACM, 36(7), 39-49 (1993)
[5] R. Gasteiger, M. Neugebauer, O. Beuing, B. Preim, The FLOWLENS: A Focus-and-Context Visualization Approach for Exploration of Blood Flow in Cerebral Aneurysms, IEEE Transactions on Visualization and Computer Graphics, 17(12), 2183-2192 (2011)
[6] M. Bajura, H. Fuchs, R. Ohbuchi, Merging virtual objects with the real world: Seeing ultrasound imagery within the patient, SIGGRAPH '92 Proceedings of the 19th Annual Conference on Computer Graphics and Interactive Techniques, 203-210 (1992)
[7] M. Scheuering, A. Schenk, A. Schneider, B. Preim, G. Greiner, Intraoperative augmented reality for minimally invasive liver interventions, Medical Imaging 2003: Visualization, Image-Guided Procedures, and Display, 407-417 (2003)
[8] M. Gessat, S. Bohn, A. Vorunganti, S. Franke, O. Burgert, TiCoLi: An open software infrastructure for device integration in the digital OR, International Journal of Computer Assisted Radiology and Surgery, 6(1), 284 (2011)
[9] M. Gessat, D. Merk, V. Falk, T. Walther, S. Jacobs, A. Nöttling, O. Burgert, A planning system for transapical aortic valve implantation, Medical Imaging 2009: Visualization, Image-Guided Procedures, and Modeling, SPIE, 12 (2009)