Manipulating and Augmenting Digital Content Through Physical Counterpart

Filip Sprostan1, Matjaž Kljun1,3,∗ and Klen Čopič Pucihar1,2,3

1 University of Primorska, Faculty of Mathematics, Natural Sciences and Information Technologies, Koper, Slovenia
2 Faculty of Information Studies, Novo Mesto, Slovenia
3 Stellenbosch University, Department of Information Science, Stellenbosch, South Africa

89191030@student.upr.si (F. Sprostan); matjaz.kljun@upr.si (M. Kljun); klen.copic@famnit.upr.si (K. Čopič Pucihar)
ORCID: 0000-0002-6988-3046 (M. Kljun); 0000-0002-7784-1356 (K. Čopič Pucihar)

HCI SI 2023: Human-Computer Interaction Slovenia 2023, January 26, 2024, Maribor, Slovenia
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract
Traditional photography often fails to capture the complete appearance and depth qualities of objects such as monuments. Additionally, historical artefacts and museum pieces may be off-limits for physical engagement due to their age and preservation needs. Augmented reality (AR) and tangible user interfaces (TUIs) offer viable solutions to these challenges by allowing users to interact with replicated physical artefacts, which control a digital replica with an accurate appearance and additional information. In this paper we explore the feasibility of creating realistic-looking digital replicas of physical objects representing historical and cultural heritage artefacts, as well as creating and interacting with physical replicas to control the digital ones. To this end, we demonstrate the design and fabrication process of our solution. We first used photogrammetry to create 3D models of real-world objects from multiple photographs. These 3D models were then 3D printed as physical objects to be interacted with. Unity and Vuforia (an augmented reality software development kit) were used to develop object detection and tracking. Specifically, through the Model Target feature of Vuforia, the trained 3D models were seamlessly incorporated into Unity, enabling real-time recognition and precise tracking of the objects as well as overlaying material information and displaying additional details.

Keywords
tangible user interface, TUI, AR, augmented reality, photogrammetry

1. Introduction

Traditional photography often falls short in capturing the complete appearance, depth, and material qualities of historical artefacts, potentially distorting perception and leading to misunderstandings. It is not uncommon for such artefacts to be strictly off-limits for physical engagement due to preservation efforts. In addition, it can also be difficult to visit the places where such objects are exhibited. Nevertheless, it is widely acknowledged that physical interaction with objects facilitates deeper comprehension and familiarity. The advent of augmented reality (AR) and tangible user interfaces (TUIs) offers a viable solution to this issue by replicating different artefacts, allowing users to engage with them without compromising the integrity of the originals.
In our quest to enhance the immersive experience with AR and physical objects we explored the feasibility of (i) creating realistic-looking digital replicas of physical objects representing historical and cultural heritage artefacts, as well as (ii) creating and interacting with physical replicas to control digital ones. To this end we describe the fabrication process of digital and physical copies of historical artefacts. First, we used photogrammetry to extract 3D data from multiple photographs of a particular object, enabling the precise reconstruction of an accurate 3D model of the object in question. Subsequently, this 3D model was physically replicated at a smaller scale with a 3D printer. To add the material aspect of the original objects, we used the Unity game engine in conjunction with Vuforia, an AR software development kit (SDK) for mobile devices. This integration facilitated manipulation functionalities, including rotation and adjustment of the camera distance, all while accurately showcasing the object's material composition. As a result, we were able to provide an accurate presentation of the object's appearance by overlaying its original material in AR. Furthermore, additional information about the object can be displayed on demand. Interacting with such a printed object gives us the freedom to explore and engage with it beyond just visual observation or inspection of its photographs.

2. Related work

2.1. Tangible user interfaces

Tangible user interfaces (TUIs) draw upon our knowledge and skills of interacting with the real world by enabling interaction with, and leveraging of, digital information through physical objects [1, 2]. Physical objects serve as both input and output devices, offering users feedback through physical haptic sensations and digital visual or auditory cues. This vibrant research area has explored various directions. One is object recognition based on camera systems [3], tokens [4] or fiducial markers [5]. Another is providing tangible interaction in virtual reality (VR) by using passive haptic proxy objects in VR applications [6] as well as active haptic devices that generate "virtual" forces and support tactile and kinesthetic haptic modalities [7, 8, 9, 10, 11]. Numerous studies have explored the application of tangible interaction to improve the lives of various user groups, particularly children and individuals with medical conditions. Researchers have, for example, investigated how tangible interfaces can enhance learning compared to traditional methods [12, 13]. In the realm of medical conditions, research highlights the benefits of tangible interaction for individuals with aphasia, a language impairment resulting from brain injury [14]. Tangible systems that provide a one-to-one mapping to digital information have shown advantages such as a reduced need for visual attention [15], improved interaction efficiency [16], more nuanced control [17], and enhanced object manipulation [16] compared to standard computer interfaces such as the mouse, keyboard, or touchscreen.

2.2. Object detection and tracking

In order to understand interaction with the physical object at hand, object detection, segmentation and tracking are required. Object detection uses computer vision and image processing techniques to identify specific objects within digital images and videos [18, 19].
Object detection has widespread applicability, including recognising a variety of objects in order to annotate images, count vehicles, recognise activities, segment objects, and track objects. Object segmentation involves the precise separation of objects within images or video [20, 19], while object tracking involves the use of algorithms to identify and follow objects as trajectories in a recorded video or a live video stream, capturing their movement over time [21, 19].

In the context of this paper, we use geometric object tracking, which is usually supported by augmented reality SDKs (software development kits). Geometric object tracking automates the identification and tracing of the movement of geometric objects in sequences of images or videos. This process involves analysing spatial and temporal characteristics to estimate parameters such as position, size, shape, and motion of objects over time. Before tracking begins, objects of interest are detected within image or video frames. Tracking algorithms then estimate and update the position and orientation of the object in subsequent frames. When multiple objects are tracked simultaneously, data association techniques match objects across frames, maintaining their identities. Dealing with occlusions, scenarios where objects are partially or temporarily hidden from view, poses a substantial challenge in object tracking.

Geometric object tracking has widespread applications including video surveillance, autonomous driving, human-computer interaction, augmented reality, and robotics. The continuous advancement of algorithms and techniques by researchers and engineers strives to enhance the accuracy, robustness, and real-time capabilities of object tracking systems.

2.3. 3D scanning

In order to bring our idea to life, we had to explore ways to create digital replicas of selected artefacts that users are going to interact with. The process of 3D scanning captures the precise shape, geometry and texture of real-world objects or environments. The core principle is the acquisition of point cloud measurements, which can be used to reconstruct surfaces. This data is acquired through methods such as emitting laser beams (laser scanning), projecting patterns (structured light), or capturing multiple images from different angles (photogrammetry). The level of accuracy and detail achieved by 3D scanning varies based on the scanning technology, resolution and other parameters. While high-precision scanners excel at capturing intricate details, handheld or more affordable scanners might yield comparatively less precise outcomes.

We decided to further explore photogrammetry as an approach in our fabrication process because of its simplicity and the possibility of it being carried out by anyone with a smartphone. This technique extracts 3D information from photographs or sequences of images, which involves a detailed analysis of geometric attributes, shape, and surface characteristics of objects or scenes. The fundamental principle of photogrammetry is triangulation, which leverages multiple overlapping images to precisely determine the position and shape of the objects depicted. After extracting distinct features and establishing correlations between corresponding points, the 3D geometry can be reconstructed and texture mapping applied. Advancements in computer processing have significantly expedited the reconstruction process, making photogrammetry an increasingly efficient method for generating accurate 3D models.
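To make the triangulation principle concrete, the sketch below reconstructs a single 3D point from two rectified (parallel) camera views; photogrammetry tools such as Autodesk ReCap Photo solve a generalised multi-view version of this problem with unknown camera poses. The focal length, baseline, and pixel coordinates used here are illustrative values only, not measurements from our setup.

```csharp
using System;

// Minimal illustration of triangulation with two rectified (parallel) cameras.
// Pixel coordinates are assumed to be measured relative to the principal point.
public static class Triangulation
{
    // focalPx: focal length in pixels, baselineM: distance between cameras in metres,
    // xLeft/yLeft/xRight: coordinates of the same feature in the two images.
    public static (double X, double Y, double Z) Triangulate(
        double focalPx, double baselineM, double xLeft, double yLeft, double xRight)
    {
        double disparity = xLeft - xRight;          // shift of the feature between images
        if (disparity <= 0)
            throw new ArgumentException("Feature must have positive disparity.");

        double Z = focalPx * baselineM / disparity; // depth from similar triangles
        double X = xLeft * Z / focalPx;             // back-project to metric coordinates
        double Y = yLeft * Z / focalPx;
        return (X, Y, Z);
    }

    public static void Main()
    {
        // Illustrative numbers: 1200 px focal length, 10 cm baseline,
        // feature seen at x = 640 px (left) and x = 600 px (right).
        var p = Triangulate(1200, 0.10, 640, 360, 600);
        Console.WriteLine($"Reconstructed point: X={p.X:F2} m, Y={p.Y:F2} m, Z={p.Z:F2} m");
    }
}
```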
However, the precision and quality of photogrammetry hinge upon several factors, including the surface and texture of the object being scanned, lighting conditions, image resolution, camera calibration, and image overlap. Scenes marked by complex characteristics such as reflective or transparent surfaces may present challenges in achieving accurate and reliable reconstructions. Addressing these challenges often involves meticulous adjustment of parameters and employing advanced algorithms that can account for varied surface properties and lighting conditions. Nevertheless, photogrammetry represents a cost-effective and accessible means of transforming ordinary photographs into detailed 3D models. This is why we selected this technology to turn real-world objects into virtual representations.

3. System design and implementation

The main function of our prototype is to provide users with a proxy physical object that is detected by the camera of our hardware setup. While users interact with the physical object, its virtual twin is moved in the same way as the physical one, enabling users to explore the object from various sides and angles, "zoom in" on details, and explore information not present in the physical space. We envisioned the system with the following functionalities: (i) the digital twin is displayed on a screen when the physical object is in the camera's field of view and detected, (ii) the system tracks the physical object after it is detected and while it remains in the camera's field of view, and (iii) the system detects when the user interacts with the touch screen and displays additional information about the detected object.

3.1. Hardware used

We used a Xiaomi Redmi Note 7 (2019) mobile device equipped with a high-resolution 48 MP camera to capture images of the historic artefacts. This device allowed us to capture finely detailed photographs from various angles and sides. For photogrammetry we used Autodesk ReCap Photo. We used the same smartphone for the later object detection and for displaying AR content, mainly because of the high availability of such devices. The mobile application was created in Unity (2021.3.18f1) and integrated with Vuforia (10.14.4) together with Model Target Generator 10.14.4 – a desktop application that allows quick conversion of an existing 3D model into a Vuforia Engine dataset. The prototype should work on any Android device running version 8.0 (Oreo) or higher. Nevertheless, both platforms also allow the prototype to be ported to the iOS platform. The system should also support various mobile screen resolutions.

Figure 1: Basic system architecture of our prototype.

The basic system architecture acting as a design basis can be seen in Figure 1. The mobile device utilises its camera to capture the real world and display it as a background through a video stream. It then employs object detection algorithms to locate and identify the object present in its field of view. If the object is found within the database, it is considered detected. However, since the object exists in the physical world and can be manipulated by the user, it is crucial to track its movements and precisely determine its position and orientation. Tracking the object allows for accurate placement and alignment of virtual content with the physical object. By continuously monitoring the object's location and movement, the system ensures that virtual overlays, such as designated materials or additional visual elements, remain properly synchronised with the real-world object. The detection, tracking, and rendering cycle is repeated continuously throughout the duration of the video stream, enabling seamless and dynamic interaction between the virtual and physical worlds.
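In our prototype this cycle is handled by the Vuforia Engine itself; the sketch below only illustrates the control flow of Figure 1. The IObjectDetector and IObjectTracker interfaces are hypothetical and are not part of the Vuforia or Unity APIs.

```csharp
// Conceptual sketch of the detect-track-render cycle in Figure 1.
// In the prototype, Vuforia performs detection and tracking and Unity renders the overlay.
public interface IObjectDetector { string Detect(byte[] cameraFrame); }        // returns a database id or null
public interface IObjectTracker { Pose Track(byte[] cameraFrame, string id); } // 6-DoF pose of the tracked object
public struct Pose { public float X, Y, Z, Rx, Ry, Rz; }

public sealed class ArLoop
{
    private readonly IObjectDetector detector;
    private readonly IObjectTracker tracker;
    private string currentTarget;                        // null while nothing is detected

    public ArLoop(IObjectDetector d, IObjectTracker t) { detector = d; tracker = t; }

    // Called once per camera frame of the video stream.
    public void OnFrame(byte[] frame)
    {
        if (currentTarget == null)
        {
            currentTarget = detector.Detect(frame);      // (i) detect a known object
            return;
        }

        Pose pose = tracker.Track(frame, currentTarget); // (ii) update position and orientation
        RenderOverlay(currentTarget, pose);              // (iii) align the virtual twin and info
    }

    private void RenderOverlay(string id, Pose pose)
    {
        // Placeholder: in the prototype this corresponds to Unity updating the
        // Model Target's children (material overlay, info card) to the new pose.
    }
}
```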
3.2. Selecting appropriate objects

The selection of objects for 3D reconstruction using photogrammetry was an important decision. Several factors were taken into consideration to ensure an engaging and meaningful experience for users. We decided to select three diverse objects in the old Venetian city of Koper. The objects should have visually interesting features as well as significant historical or cultural value, so that they represent a part of the cultural heritage with which users could interact. Another factor was the weather, such as rain or strong sunlight, which can affect the appearance and visibility of object details. This needed to be taken into account in order to obtain good photographs of the objects from different angles and sides.

Figure 2: Objects selected for the fabrication process.

After careful consideration, three objects were selected for the initial prototype; all three are visible in Figure 2. The first object is a Venetian fountain situated in the courtyard of the Praetorian Palace. This fountain holds historical and architectural significance, serving as a prominent feature within the palace complex. The second object is a decorative component representing a conifer cone or pinecone of the Da Ponte fountain near the Muda Gate. This fountain holds cultural importance and represents a notable landmark in the city. The third object is the Medusa symbol from the Medusa Gallery in Koper. As a symbolic representation with mythological roots, this object offers a captivating narrative and visual appeal. We acknowledge that these objects represent only a tiny fraction of the cultural heritage of Koper, but our aim was to showcase the potential of our prototype in preserving and presenting such significant artefacts in a digital and interactive format.

3.3. Fabrication process

The 3D models generated by Autodesk ReCap Photo were further refined in the same software by cleaning them up to improve their visual appearance and overall quality. To prepare the models for 3D printing, we exported them in the OBJ file format. Next, we imported each file into the Ultimaker Cura software to prepare the 3D model for printing. We then printed several test objects to evaluate the size, material, and overall appearance of the printed models. We opted for a 0.06 mm layer height, which captured enough of the depth and authenticity of the lines and curves to closely resemble the real object. The printing duration varied depending on the complexity of the model, ranging from 7 to 23 hours. The fabrication of each object is described in more detail below.

3.3.1. Venetian fountain

The size of the actual fountain is 1.2 m × 1.4 m × 1.4 m (height, width and depth). A total of 59 photographs were used to generate the 3D model. The initial attempts with 37 and 46 photographs did not yield satisfactory results, as they did not accurately capture the complexity of the object. We therefore increased the number of photographs to 59, resulting in a more detailed and precise representation of the object in the 3D model. To enhance the quality of the model, the photographs were carefully cropped to exclude unnecessary surrounding elements. This focused approach helped to capture the object more accurately and eliminate any potential distractions from the final model.
Additionally, the crop feature available in Autodesk ReCap Photo was used during the project creation phase, further refining the model's fidelity. The resulting model is visible in Figure 3 left. The printed 3D model exhibited enhanced depth and detail, closely resembling the original object, as visible in Figure 3 right. Its size is 12 cm × 14 cm × 14 cm (height, width and depth).

Figure 3: Left: The model of the fountain obtained with photogrammetry. Right: The printed model of the fountain.

3.3.2. Pinecone decorative element

For the second object (0.6 m × 0.3 m × 0.3 m; height, width and depth), a total of 43 photographs were used to create the 3D model. Initially, a few test runs were conducted with fewer photographs, but it was determined that 43 photos provided the optimal coverage and accuracy to represent the object correctly. Similar to the previous object, the photographs were cropped to focus solely on the object itself. The resulting 3D model is visible in Figure 4 left. As visible in Figure 4 right, the resulting printed model is a faithful replication of the object's form and characteristics.

Figure 4: Left: The model of the pinecone obtained with photogrammetry. Right: The printed model of the pinecone.

3.3.3. Medusa symbol

For the third object (0.26 m × 0.26 m × 0.05 m; height, width and depth), a total of 23 photographs were used to create an accurate 3D model, as seen in Figure 5. Despite the low number of photographs, the resulting model successfully replicated the object's appearance and details with precision. However, a challenge arose during the printing process due to the object's proximity to the wall. This proximity made it difficult to slice and clean up the model after it was generated, and the initial printed results were not good enough, as shown in Figure 6 left.

Figure 5: The Medusa model obtained with photogrammetry.

To address this issue, support material was employed during the printing process to ensure the model's structural integrity and prevent any holes or gaps from forming. Several test prints were conducted to optimise the printing settings and achieve the desired outcome, as seen in Figure 6 centre. Ultimately, the printed model shown in Figure 6 right exhibited a high level of fidelity, with only a minor gap at the bottom that did not affect the object's detection and tracking in subsequent stages.

Figure 6: Left: The Medusa model printed without support material. Centre: The Medusa model with support material. Right: The Medusa model printed with support material.

3.4. Tracking and identification

In order to import the models of our three objects into Unity, we first exported them as FBX files. This file format preserves the models' materials, which is essential for achieving realistic visual rendering in Unity. The FBX files were imported into Unity with the Material Description import option. The detection of objects was achieved using Vuforia's Model Target feature, while for tracking the Model Target Generator software from Vuforia was employed. In this process, a Vuforia database was created and then used with Vuforia Engine's Unity integration. This database contained all three models, forming the foundation for their recognition and tracking within the application. Subsequently, three distinct Model Targets were created in Unity, each associated with its corresponding model within the database. This configuration established the link between the physical objects to be detected and their respective digital representations.

To enhance the visual experience and seamlessly merge virtual objects with the real world, additional steps were taken. The 3D model objects, obtained from the FBX files, were added as child objects to their designated Model Targets within Unity. Furthermore, the scale of these objects was increased by 0.02 (roughly 2%), creating a mask-like effect when they were detected. This technique allowed the virtual objects, with their associated materials, to overlay and appear as if they were placed upon the real-world objects being tracked.
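A minimal sketch of how such an overlay can be driven from a script attached to a Model Target is shown below. It assumes the imported digital twin is a child object, that the reported 0.02 scale increase corresponds to a factor of 1.02, and that ShowOverlay/HideOverlay are wired in the Inspector to the target found/lost events of Vuforia's default observer event handler (whose exact name varies between SDK versions); only standard Unity API calls are used.

```csharp
using UnityEngine;

// Attached to a Model Target. overlayTwin is the imported FBX child object that
// carries the original material. Wire ShowOverlay/HideOverlay to the target
// found/lost events of Vuforia's observer event handler in the Inspector.
public class MaterialOverlay : MonoBehaviour
{
    [SerializeField] private GameObject overlayTwin;

    // Slight enlargement so the twin "masks" the physical object;
    // 1.02 is our reading of the 0.02 scale increase reported above.
    [SerializeField] private float maskScale = 1.02f;

    private Vector3 baseScale;

    private void Awake()
    {
        baseScale = overlayTwin.transform.localScale;
        overlayTwin.SetActive(false);                // hidden until the object is detected
    }

    public void ShowOverlay()
    {
        overlayTwin.transform.localScale = baseScale * maskScale;
        overlayTwin.SetActive(true);
    }

    public void HideOverlay()
    {
        overlayTwin.SetActive(false);
    }
}
```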
In order to validate the effectiveness of the object detection and tracking capabilities within the developed AR application, a series of tests were conducted. These tests aimed to assess the accuracy and reliability of the detection process and to ensure that the desired visual effects, such as the seamless display of materials on the tracked objects, were achieved. For example, additional 3D objects were placed on top of the objects being tracked (see Figure 7 left). This allowed us to evaluate the application's ability to differentiate between the physical object and the overlaid virtual content. By carefully observing the detection performance in these scenarios, any potential issues, such as false positives or incorrect tracking, could be identified and addressed.

Figure 7: Left: Testing detection with a cube. Right: Testing detection with a different material.

Another aspect of the testing process involved calibrating the so-called "mask" effect, which aimed to display the virtual material as accurately as possible on the real-world object. This calibration involved applying random materials to the tracked objects (see Figure 7 right) and observing how the virtual material interacted and aligned with the physical surface. By adjusting parameters and fine-tuning the mask, the goal was to achieve a visually pleasing and realistic representation of the virtual material on the object.

Through these tests, the robustness and accuracy of the object detection and tracking system were evaluated, and any necessary adjustments or refinements could be made based on the observations. Ultimately, these tests served to ensure that the system delivered accurate and visually appealing virtual overlays on the real-world objects being tracked.

Upon integrating all three 3D models into a single database for object detection, a slight decrease in response time was observed compared to the previous individual testing of each object. This can be attributed to the increased complexity of the database, which now contained multiple models to be recognised. However, it is important to note that the objects can only be recognised one by one, meaning that the application can detect and track only a single object at a time within its field of view.

In terms of detection difficulty, the Medusa object proved to be the easiest to detect among the three, as seen in Figure 8 left. Its distinct features and well-defined shape allowed for reliable and swift recognition by the algorithm. The pinecone from the Da Ponte fountain (Figure 8 right) posed some challenges due to its intricate shape and difficulties in accurately determining the object's orientation or responding promptly to rotations and manipulations.

Figure 8: Left: The Medusa object after being detected. Right: The part of the Da Ponte fountain object after being detected.
Despite the Venetian fountain decoration (Figure 9) having recognisable characteristics, it turned out to be the most difficult object to detect. Furthermore, lighting conditions played a significant role in the detection performance. The best results were obtained under daylight conditions. In indoor environments, it was observed that employing at least three light sources was crucial to achieve detection results comparable to those in outdoor lighting conditions. This emphasises the importance of adequate lighting for accurate and reliable object detection.

Figure 9: The Venetian fountain object after being detected.

3.5. Additional information

The graphic design platform Canva was employed to create informative cards for each object. The information about each object (historical background, significance, and relevant contextual information) included in the system was obtained from the Koper Regional Museum [22], the Koper Tourist Information Centre [23], and the Medusa Gallery [24]. In Unity we detect the user's touch input and trigger the appropriate actions to show or hide the info card. The info card images were configured as Sprites (2D and UI) and added as child objects under their respective Model Targets. This arrangement enabled the display of a specific info card when its corresponding target was detected, providing users with relevant information related to the object of interest, as seen in Figure 10.

Figure 10: Detecting user interaction and displaying information about the object.
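A minimal sketch of this touch handling is shown below; it assumes the info card is referenced as a Sprite child object of the Model Target and uses Unity's legacy Input API. The actual script in the prototype may differ in detail.

```csharp
using UnityEngine;

// Toggles the info card of the currently detected object when the user taps
// the screen. infoCard points to the Sprite child object under the Model Target.
public class InfoCardToggle : MonoBehaviour
{
    [SerializeField] private GameObject infoCard;

    private void Start()
    {
        infoCard.SetActive(false);                        // hidden by default
    }

    private void Update()
    {
        if (Input.touchCount > 0 &&
            Input.GetTouch(0).phase == TouchPhase.Began)  // react once per tap
        {
            infoCard.SetActive(!infoCard.activeSelf);     // show or hide the card
        }
    }
}
```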
4. Conclusion, limitations and future work

In this paper we demonstrated the fabrication process of creating copies of cultural heritage objects using photogrammetry and 3D printing. Further, we developed an application in Unity utilising the Vuforia augmented reality software development kit for object detection and tracking. Specifically, through the Model Target feature of Vuforia, the trained 3D models were seamlessly incorporated into Unity, enabling real-time detection and precise tracking of the objects, overlaying material information over the objects, and displaying additional information.

One limitation of the presented work is that we did not conduct a user study. The evaluation performed as part of exploring the feasibility of the idea was done informally by the authors of the paper and several other users who tested the prototype and gave their opinion on tracking and the material overlay. A usability study is planned for the future, and we envision several applications of the prototype, particularly within the realms of museum exhibitions, tourism, and education, where users can benefit from exploring and interacting with realistic 3D models of cultural heritage as well as other objects.

Moving forward, there are several avenues for future work and improvements. Firstly, the system's performance can be further optimised by exploring advanced object recognition algorithms that can handle multiple objects simultaneously. Additionally, integrating additional sensors could enhance the tracking accuracy and robustness of the system. Also, the prototype uses a handheld device, which limits the interaction capabilities of the system since one hand is used to hold the device. Porting the prototype to a head-mounted display or a public interactive display could allow users to use both hands for interacting with the object. Moreover, expanding the scope of the database to encompass a broader range of objects and incorporating a wider variety of lighting conditions would contribute to a more comprehensive and adaptable system. Furthermore, the integration of the developed system on other platforms and devices would extend its accessibility and potential impact.

Acknowledgements

This research was funded by the Slovenian Research Agency, grant numbers P1-0383, P5-0433, IO-0035, J5-50155 and J7-50096. This work has also been supported by the research program CogniCom (0013103) at the University of Primorska.

References

[1] O. Shaer, E. Hornecker, Tangible user interfaces: Past, present, and future directions, Foundations and Trends® in Human–Computer Interaction 3 (2010) 4–137. URL: http://dx.doi.org/10.1561/1100000026. doi:10.1561/1100000026.
[2] P. D. Wellner, W. E. Mackay, R. Gold, Computer-augmented environments: Back to the real world - introduction to the special issue, Communications of the ACM 36 (1993) 24–26.
[3] D. Avrahami, J. O. Wobbrock, S. Izadi, Portico: Tangible interaction on and around a tablet (2011).
[4] B. Ullmer, H. Ishii, R. J. K. Jacob, Token+constraint systems for tangible interaction with digital information (2005).
[5] M. Kaltenbrunner, R. Bencina, reacTIVision: A computer-vision framework for table-based tangible interaction (2007).
[6] Y. Zhao, L. H. Kim, Y. Wang, M. L. Goc, S. Follmer, Robotic assembly of haptic proxy objects for tangible interaction and virtual reality (2017).
[7] V. Hayward, O. Ashley, C. Hernandez, D. Grant, G. Robles-De-La-Torre, Haptic interfaces and devices, Sensor Review 24 (2004) 16–29. doi:10.1108/02602280410515770.
[8] M. Aiple, A. Schiele, Pushing the limits of the CyberGrasp™ for haptic rendering, 2013 IEEE International Conference on Robotics and Automation (2013) 3541–3546.
[9] I. Choi, E. Hawkes, D. Christensen, C. Ploch, S. Follmer, Wolverine: A wearable haptic interface for grasping in virtual reality, 2016, pp. 986–993. doi:10.1109/IROS.2016.7759169.
[10] T. Massie, The PHANToM haptic interface: A device for probing virtual objects, 1994.
[11] C. R. Wagner, S. J. Lederman, R. D. Howe, A tactile shape display using RC servomotors, Proceedings 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, HAPTICS 2002 (2002) 354–355.
[12] M. S. Horn, R. J. Crouser, M. U. Bers, Tangible interaction and learning: the case for a hybrid approach (2011).
[13] M. S. Horn, Tangible interaction and cultural forms: Supporting learning in informal environments (2018).
[14] T. Neate, A. Roper, S. Wilson, J. Marshall, M. Cruice, CreaTable content and tangible interaction in aphasia (2020).
[15] Y. Jansen, P. Dragicevic, J.-D. Fekete, Tangible remote controllers for wall-size displays (2012). doi:10.1145/2207676.2208691.
[16] P. Tuddenham, D. Kirk, S. Izadi, Graspables revisited: Multi-touch vs. tangible input for tabletop displays in acquisition and manipulation tasks, volume 2010, 2010, pp. 2223–2232. doi:10.1145/1753326.1753662.
[17] S. Voelker, K. I. Øvergård, C. Wacharamanotham, J. Borchers, Knobology revisited: A comparison of user performance between tangible and virtual rotary knobs, in: ITS '15: Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, ACM, 2015. doi:10.1145/2817721.2817725.
[18] S. Dasiopoulou, V. Mezaris, I. Kompatsiaris, V.-K. Papastathis, M. G. Strintzis, Knowledge-assisted semantic video object detection, IEEE Transactions on Circuits and Systems for Video Technology 15 (2005) 1210–1224.
[19] A. Yilmaz, O. Javed, M. Shah, Object tracking: A survey, ACM Computing Surveys (CSUR) 38 (2006) 13–es.
[20] R. Girshick, J. Donahue, T. Darrell, J. Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 580–587. doi:10.1109/CVPR.2014.81.
[21] H. Kato, M. Billinghurst, Marker tracking and HMD calibration for a video-based augmented reality conferencing system, in: Proceedings 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR'99), 1999, pp. 85–94. doi:10.1109/IWAR.1999.803809.
[22] Koper Regional Museum, https://www.pokrajinskimuzejkoper.si.
[23] Visit Koper, https://visitkoper.si.
[24] Medusa Gallery, https://www.obalne-galerije.si/galerija-meduza.