<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">All Around Audio Symposium</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<title level="a" type="main">All Around Audio Symposium</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C6BC3D3D8D3F34B0B7E55231BD5A25D6</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:06+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract/>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Ultrasonic Communication: Risks and Chances of a Novel Technology</head><p>Matthias Zeppelzauer, St. Pölten UAS, AT</p><p>The ultrasonic frequency band represents a novel and so far hardly used channel for communication between different devices, such as mobile phones, computers, TVs, and personal assistants like Google Chromecast. Ultrasonic communication is a promising technology since it requires only a standard loudspeaker and a microphone (as built into our phones). While offering a number of opportunities for innovative services (e.g. in the domain of the Internet of Things), the technology also bears a number of risks. Companies like Silverpush employ ultrasonic data exchange to track users across devices and to collect information about their behavior without their knowledge. In my talk I will present the novel technology of ultrasonic communication, show how it works, and discuss the risks and chances linked to it. Additionally, I will present the project SoniControl, which aims at the development of an ultrasonic firewall to protect the privacy of users, as well as the project SoniTalk, which aims at developing a safe and privacy-oriented protocol for ultrasonic communication.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Modular Synthesizer Ensemble gammon, Vienna, AT</head><p>The Modular Synthesizer Ensemble performs with fixed instruments and variable orchestration. 12 modular synthesizers provide the starting point for this participatory music project, with the aim of presenting electronic music live as an ensemble. With the analog modular synthesizer, the participants are able to shape the process of electronic sound formation themselves, even with no previous knowledge. Proceeding from the resulting sound material, we will invent, try out, execute, improvise and compose. A simultaneous process of composing and performing electronic music evolves. The aim of the project is to perform the musical result live as an ensemble. The installation of the modular synthesizers in the hall will be supervised by Gammon and by Jessica and Thomas from http://schneidersladen.de. http://www.gammon.at</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>On Models and Pragmatic Features in Digital Musical Instruments</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Cornelius Pöpel, Ansbach UAS, DE</head><p>The digitalization of objects, methods and working procedures is a big topic of our times. In the field of audio, digitalization has already taken place in many areas. A core issue in digitalization is the development of models and their transformation through formalization. According to the model theorist <ref type="bibr">Stachowiak (1973, p. 131</ref>) the term "model" can be understood to include three features: a) the feature of mapping, b) the feature of reduction, c) the pragmatic feature. In order to create a model, the essentials of the object (which the model maps) have to be identified. The model includes only those essentials; things that are not essential are left out. The question of which properties of the object are essential is coupled to what Stachowiak calls the "pragmatic feature": the model was created by a specific group of persons, at a specific time and for a specific reason. Given the precondition of the pragmatic-feature setting in which the model was created and for which it was of use and, so to speak, valid, the question arises what may happen to the model if the pragmatic-feature setting changes. What does it mean for a model if persons, time and reason differ from those under which the model was created? One of the reasons musicians give for playing unplugged music is that they want to get back to the essentials when making music. Since the models implemented in synthesizers were always meant to cover the essentials of tones, sounds and playing an instrument, it is questionable to what extent the models used in the digitalization of audio really cover the essentials those musicians are talking about. The digitalization of audio has brought a huge mass of new opportunities for working with sound. 
Given this seeming loss of essentials in music, there may be a need to research qualities in sound that have been forgotten, overlooked or lost. One question may be what the essentials are that have not yet been covered. A second question might be what factors play a role when models do not cover the essentials needed by musicians. Another question is to what extent this loss matters for the younger generation of digital natives, who may be more interested in the new opportunities of digital musical instruments than in a loss which does not play a bigger role for them. The paper will cover selected parts of the author's findings from research on the development and usage of models for musical purposes, with a specific focus on the pragmatic feature. It will also include results of a study on how digital natives coupled musical ideas with the difficulty of creating a digital musical instrument.</p><p>Although 3D audio is considered a novel way of producing, the aesthetic desire for, and the capabilities of, three-dimensional positioning of sound in the 360-degree sphere can be traced back to antiquity and even to the time before. Numerous 20th-century composers tried to implement their 3D audio 'visions', but the full technological possibility of accomplishing sonic plasticity has arisen only quite recently with the availability of innovative 3D sound systems. However, applying these systems in a technically correct way does not automatically lead to convincing artistic results. So, what needs to be clarified and explored in order to create plausible artistic 3D audio productions? In this presentation we would like to give an overview of selected topics of our 3D audio artistic research at Darmstadt's SEM-Lab. It assumes that 3D audio needs distinct aesthetic concepts and criteria in order to prove its necessity, beyond just providing hyped-up versions of already familiar artistic phenomena. 
Based on the rich cultural history of 3D sound creation, this presentation will point out major categories and main criteria which reflect the specifics of 3D audio. It will point out why the approach offered by the concept of soundscape can be crucial. Trendy terms like immersion, tangibility, illusion, and virtuality are questioned and investigated with reference to overused aesthetics, naive realism and the lack of a position of critical distance. We will suggest that a huge artistic potential for specific 3D audio productions can lie in dramaturgical approaches like fragmentation and deconstruction, as well as in the careful conceptualization of auditory materials and their representational potential.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acoustic Holograms: Artistic Approach to 3D-Audio</head><p>Natascha Rehberg, Darmstadt UAS / Soundscape- &amp; Environmental Media Lab (SEM-Lab), DE</p><p>Emerging 3D-Audio technologies locate and treat sounds as three-dimensional virtual sound sources with a certain position, dimension and shape: acoustic holograms that provide an increasingly tangible experience (referring in particular to my experience of working with the SpatialSound Wave system (SSW) by Fraunhofer IDMT Ilmenau during an ongoing research project at Darmstadt UAS). The emancipation of sound from the speaker alters the role of the listener as well as the role of listening: the frontal stage disappears and auditory perception becomes an omnidirectional experience in which the listener interacts with the acoustic environment. With the objective of expanding artistic means of expression through the use of such an apparatus, this writing situates 3D-Audio within the conceptual framework of soundscape and hints at aspects of conceptualization and practical implementation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Soundscape: concept of 3D-Thinking</head><p>Understanding 3D-Audio compositions as soundscapes has many implications for conceptualizing and composing. The term and concept of soundscape refers to the appearance of all sounds in a room, place or landscape within a 360-degree sphere -an acoustic envelope, shaped by all properties of the environment <ref type="bibr" target="#b0">[1]</ref>[2] <ref type="bibr" target="#b3">[4]</ref>. Based on the premise that hearing is an environmental form of perception, it implies a non-selective, omnidirectional method of listening, which is a prerequisite for comprehensive 3D-Audio composition <ref type="bibr" target="#b2">[3]</ref>. Moreover, the associated terminology contextualizes sound in its interdependent relations, identifies functional categories for the elements of a soundscape, such as keynote sound, soundmark and signal sound, and provides design-related criteria that are helpful to (re-)evaluate proportions <ref type="bibr" target="#b2">[3]</ref>.</p><p>The artist as sound architect and choreographer: The arrangement of virtual sound sources creates figures, structures and forms -an architecture of sound in which artistic intentions are expressed through construction, deconstruction and transformation of spatial relations. Thus, perspective and proportions are crucial criteria and fundamental design issues. From a conceptual viewpoint, the perspective significantly determines whether the listener is literally immersed or in a more distant position. The implementation assumes a material concept which takes into account the object-based production principle: sound is not assigned to a certain speaker, but to an object that is positioned via a graphic interface or other, even interactive, devices. Objects consist of an audio signal and meta-data (room coordinates and other data) <ref type="bibr" target="#b4">[5]</ref>. 
Meta-data can potentially be delivered by any device or software (sensors or game engines). The container format technically allows various parameters to be defined as meta-data, such as volume or sound effects. The audio material constitutes the microstructure (inner structure) of a virtual sound source, which may contain a single note or a whole soundscape. A collapsing tree can thus be implemented as one virtual sound source (e.g. a distant event) or as a complex figure: a spatial construct of several virtual sound sources (e.g. an immersive situation). As overlapping sounds in an audio file cannot be spatially separated, a distinct spatial polyphony requires thoughtfully prepared audio material. The collapsing tree also hints at the expanded possibilities of artistic expression through the motion of sounds. Beyond illustrative or narrative aspects, sonic motion performances can generate fascinating, unheard structures and forms; advanced features such as programmable motion patterns will prospectively make it possible to animate sound like a choreographer and thereby intensify a media-specific aesthetic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>Through the holographic spatialization of sound, polyphonic textures and figures acquire a sophisticated manifestation. Moreover, object-based production is best suited for interactive settings. Consequently, 3D-Audio offers the potential to create new forms of sonic or multimedia art with more holistic notions of auditory experience <ref type="bibr" target="#b3">[4]</ref>. The soundscape approach is a valuable tool for developing a dramaturgical expressiveness of spatialization that goes far beyond reproduction or naturalistic, simulative and illusionary representation. Creating appropriate, object-oriented implementation methods is a whole field of artistic exploration. It's up to us, the forward-listeners, to take 3D-Audio as a gift to create artistic and social value.</p><p>Steps Toward an A/R/Tography of Sound Hans Ulrich Werner / HUW, Offenburg UAS, DE</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>AllKlang</head><p>Qualitative scholarship, artistic research, and research-based learning bring together insights from practice and experience. In an autoethnography of my own auditory workshops and of the cultures of other studios, I evaluate the new interdiscipline of sound (studies) and extend it with ideas for practice and theory, from the (still unknown) a/r/tography to a future a/r/tophony: artistic research in music and through sound composition, radio art and visual music.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>EinKlang</head><p>The waveform symbolizes artistic research as a complex resonance in music, sound composition, and radio art. The picture was taken by Dan Curticapean at the Technorama in Winterthur. Curticapean is a physicist with a passion for art, who does research in photonics and creates through photography. His image highlights the interweaving of the methods of perception: through sound itself, in the highly developed discipline of sound art, and as the core of our work in all media, including those as yet unknown to us.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>VielKlang</head><p>In the medium, sound unfolds as both a material and a workflow made up of matter, in time and space, from soundscape to sound design, from perception to form and effect. Early soundscape models and "acoustic communication" (Truax 2001) encounter the now global phenomenon of sound studies: from natural to technological sound, cultural to societal audio image, always characterized by mediality, mediation, mediology, mediamorphosis (Smudits 2002). As a system, such transformation is a whole, but also fractal, in the practice of many sound artists, researchers, and educators. The Canadian discipline of a/r/tography is an especially intensive exploration of the trio of a/rtist, r/esearcher, and t/eacher, with the transitions between them deliberately included (Werner 2015). It proceeds from the sound-generating person to his or her aural environment, from sonic moments to sonic spaces: from research to creation, analysis, synthesis, and experience (cf. Dewey 1934). The composer Murray Schafer -the "great ear of Canada," as Klaus Schöning called him -titled his 1977 artistic instruction manual The Tuning of the World; in Sabine Breitsameter's 2010 translation, this became Die Ordnung der Klänge (The ordering of sounds). The two complement each other, becoming a third thing. The triad of practice, forward-thinking inquiry (Krippendorff 2011), and education keeps us close to the protagonists and lends itself to autoethnography as well. There are also connections here to ideas on the activity of music (Stroh 1984), "reflection-in-action" (Schön 1983), and learning through research (Huber 1970). 
Similarly, the thematic emphasis on "Künstlerisches Forschen in der Musik" (Artistic research in music) at the University of Münster Conservatory does not come across as a mere variation on the global theme of artistic research as a new system (currently on the rise in Germany as elsewhere); the verb form (forschen) foregrounds doing. The focus is on aural activity; what matters is audio art and the auditory in a broader concept of music and sound, rather than a general account of a future system of art and research. We are exhorted to "follow the actors" (Latour 2005, 12) -a call embodied by the sociologist of music Howard S. Becker in his decades of practice as teacher, researcher, and jazz musician. This gives rise to transitions in which "the actions of the scientist begin to approach artistic action" <ref type="bibr">(Hildebrand 1994, 13)</ref>. With Germany's longest-running series of media-research publications, the University of Siegen's MuK (Medien und Kommunikation), I have been experimenting with methods of analysis and "microtheories" on ways of presenting, describing, and dealing with sound. As an active participant, I query practitioners who work as scholar/artist/educators (few of whom identify themselves as such). Basic research, too, is increasingly coming into contact with the aural: see for example Max Ackermann's 2003 dissertation on the "culture of hearing," or the symposium on "audio media cultures" held at Siegen in 2010 (Volmar and Schröter 2013). This reflection on sound as theorem has the potential to affect the creative act and the creators themselves. In the triad of content and aesthetic, communication and organization, and technology and actor, it can be put into practice in any studio and any sound. In dialogues and diagonals, the materiality of sound meets its mediality: temporality plus mediation (Debray 1996), sonic space plus culture and mediamorphosis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Situating Performance in the Performing of Situation: The Effect of Situational Context on Performer Expressivity</head><p>Hans-Peter Gasselseder, Maria Kallionpää, Aalborg University, DK</p><p>How can one articulate what is believed to be the fundamental artistic idea, and, more arguably, the representative character of a state of mind or situative quality ascribed to a musical composition? Apart from actually applying the operating instructions of a score to an instrument, several aspects of acoustic scene, ergonomics, attention focus and mood need to be taken into account when adapting to the situative affordances of a particular piece of music. But what if a performer lacked the intuition and expertise to adapt to these contextual variables? In other words, what if one lacked the ability to adapt the handling of an instrument in different contexts or under varying acoustic conditions? Interpreting the current situation and selecting an appropriate action in a real-time performance setting often proves to be a challenging task. This is even more the case when thoughts and actions require an extra step of mediation [the instrument]. In order to bypass this step towards non-mediated representations of control, extensive practising allows the building of mental models detailing interactional patterns that are implicitly activated by environmental cues. The detection of these cues may vary depending on a performer's awareness of situational context: a cognitive representation of how we relate to our surroundings and give purpose to actions. Thus, we expect situational context to affect mental models of performer-instrument interactions and expressivity. In order to test this hypothesis, we examined to what extent specific parameters of acoustic scenery alter a performer's rendition of contemporary piano works. 
Utilising a combined binaural DSP microphone/earphone setup, we were able to present subjects with life-like, immersive acoustic sceneries decoupled from their visual appearance. Data gathered from audio and MIDI recordings as well as focus interviews with seven professional pianists illustrate how alterations of spectral-dynamic features and room acoustics affect performance under varying situational demands.</p><p>When More is More: How to Supersize Musical Expression Maria Kallionpää, Hans-Peter Gasselseder, Aalborg University, DK</p><p>"Super" or "hyper" instruments are sometimes mentioned in discussions among musicians, but both terms are used relatively flexibly. Whereas some composers and performers use them with regard to certain software (for example, the hyper score software by Machover), our research regards the "super instrument" as a piece-specific concept or phenomenon. Rather than referring to any particular instrumentation or technological solution, the super instrument comes to be defined as a bundle of more than one instrumental line that achieves a coherent overall identity when generated in real time. On the basis of our own personal experience of performing the works discussed at this lecture concert, super instruments vary a great deal, but each has a transformative effect on the identity and performance practice of the pianist. An increasing number of composers, performers, and computer programmers have thus become interested in different ways of "supersizing" acoustic instruments in order to open up previously unheard instrumental sounds. This leads us to the question of what constitutes a super instrument and what challenges it poses aesthetically and technically. 
We argue that the essence of the super instrument lies in the enhancement of the technical and expressive capabilities of the performer and composer, as well as in improved interaction between the performer, instrument, and live-electronic systems in a concert situation. Our presentation explores the effects that super instruments have on the identity of a given solo instrument, on the identity of a composition, and on the experience of performing this kind of repertoire. The purpose of this lecture concert is to showcase the essence and role of the piano or toy piano in a super instrument constellation, as well as the performer's role as a "super instrumentalist". We consider these issues in relation to case studies drawn from our own compositional work and a selection of works by other contemporary composers.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Breaking The (Imaginary) Wall between Performers and their Audience in Live Music</head><p>Oliver Hödl, University of Vienna, AT</p><p>Breaking The Wall is a research project at the intersection of art and technology in live music. Its goal is to explore how technology can be used to involve the audience in live music performances or, metaphorically speaking, how to break the imaginary wall between performers and their audience in live concerts. The project is a collaboration between the Vienna University of Technology, the University of Applied Arts Vienna and the University of Music and Performing Arts. Throughout the project, the research team and the involved artists developed four performances. These were showcased at the music event Breaking The Wall in Vienna in June 2017, and two of them additionally at the Ars Electronica Festival 2017 in Linz. During their concerts, the three musicians Electric Indigo, null.head and Johannes Kretz call for participation in the interplay of artist, audience and technology. The artists played electronic, electro-acoustic and industrial music, and the audience participated through robots, smartphones and laser tracking. The fourth performance was not music-based and provoked the audience in order to make them aware of surveillance aspects of technology-mediated audience participation. This talk presents the development process of the performances and the actual technologies used for the concerts. Furthermore, it presents the results of the scientific evaluation and how this new knowledge can be used in future projects around technology-mediated audience participation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Line &amp; Hemisphere -A Hybrid Studio Setup for Immersive Experiments in Spatial Audio and Music</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Paul Modler, Hochschule für Gestaltung Karlsruhe, DE</head><p>The development of new audio reproduction systems is based on multichannel speaker setups that apply recent distribution techniques such as Higher Order Ambisonics (HOA), Vector Base Amplitude Panning (VBAP) or Wave Field Synthesis (WFS). The presented studio setup combines audio projection approaches to provide a test bed for experiments investigating new possibilities for the increased immersive perception of spatial audio and music. For this, a hemispheric speaker setup is extended with a horizontal linear speaker arrangement. The hemispheric setup is based on standard high-quality active loudspeakers, whereas the WFS system is based on multi-speaker boxes combined with 16-channel audio amplifiers developed as a low-budget feasibility study. Depending on the number of channels, the system can operate from one CPU or from two remote CPUs controlled through network sockets.</p><p>The system is implemented in a standard classroom with no or only very basic acoustic treatment, to showcase what is achievable with the restricted resources found in normal environments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>AudioAllAround: Immersive Audio -Evolution of Techniques and Tools</head><p>Martin Mayer, Diana Mayer, Mister Master, Klosterneuburg, AT</p><p>This short talk about our work over the last 20 years shows the evolution of our techniques and tools by presenting individual projects. The spectrum ranges from early experiments with analog 4-channel technology, through large-scale outdoor opera productions, to recordings and concerts in full 3D audio. Today's technologies provide a level of realism that was impossible until recently. This opens up new areas beyond the obvious applications in music, theater, cinema, TV, museums and exhibitions, which were our main fields of interest in the past and present. Immersive audio is now also increasingly attracting interest from areas such as recreation and health care, with promising new approaches to therapies against dementia, tinnitus and various phobias in artificial but completely realistic 3D audio wave fields. Our new ATMIX 3D Audio Lab opened its doors in early 2017 as a new space to experience, experiment and evaluate in a full-dome speaker setup using WFS and other immersive technologies and formats.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>MED-EL Hearing Implants and the Science Center AUDIOVERSUM in Innsbruck</head><p>Eckhard Schulz, Ewald Thurner, MED-EL GmbH, Innsbruck, AT</p><p>Besides its importance for human communication and interaction, the sense of hearing is also a major channel for expressing our emotions. The sophisticated anatomy of the ear (outer, middle and inner ear structures) plays an important role in processing sound information. Hearing loss is caused by damage to one or multiple parts of the ear. The sense of hearing is the only human sense which can be replaced and/or reproduced by means of technology. Hearing implants may provide relief for those affected by hearing loss. The Austrian company MED-EL, with its headquarters in Innsbruck, has dedicated the past 27 years of focused research to overcoming the barrier of hearing loss by developing an innovative and wide-ranging product portfolio. The commitment of its founders, Ingeborg and Erwin Hochmair, in fostering a company culture of excellence has advanced MED-EL to the industry's technology leader in implantable hearing solutions for a variety of indications. The ScienceCenter AUDIOVERSUM, which opened in 2013, was initiated by MED-EL and aims to raise awareness of hearing loss in society through a combination of medical, technical, educational and art exhibitions relating to the sense of hearing. The AUDIOVERSUM is unique in Europe and fascinates its visitors with interesting facts about hearing and the associated senses. The learning objectives of the presentation comprise the anatomy of the ear and the physiology of hearing, the different types and degrees of hearing loss and how they can be treated with MED-EL hearing implants, as well as the interactive ScienceCenter AUDIOVERSUM in Innsbruck with its various exhibitions. The presentation will be held by Dr. 
Eckhard Schulz, former Managing Director of MED-EL Germany and founder of the AUDIOVERSUM, and by DI Ewald Thurner, Area Manager of MED-EL Vienna.</p><p>Heart Sound -how sound and radio can help to improve the relationship between people with dementia and their carers Christine Schön, Berlin, DE</p><p>Imagine the sound of happily screaming children, splashing water, a light breeze and chirring crickets. Can you feel the summer? Now imagine the sound of a stiff breeze blowing through icy branches and crunching steps in deep snow. Can you feel the cold? Sound translates directly into emotion. The ear is a very sensitive organ: hearing is the first sense we develop in the womb, and from this point onwards our ears can never be closed again. Sounds are deeply rooted in us -when we listen to a familiar sound, it triggers an emotional memory. That is why sound is so suitable for people with dementia, whose reactions to emotional stimuli are much stronger than to cognitive ones. Collective listening is very familiar to today's elderly: in their youth, radio was the most common medium -families and friends got together to listen to entertainment programmes, sportscasts and concerts. Dementia -the current situation: 46.8 million people worldwide live with dementia; according to estimates, this will rise to 131.5 million by 2050 (statistic presented at the Alzheimer Europe Conference 2017). In Germany, 1.5 million people live with dementia. The German Alzheimer Foundation estimates that this number will have doubled by 2050. Every year, 300,000 people are diagnosed with dementia. This affects their friends, family and carers, too. People with dementia have the right to live a fulfilling life with their impairments and to play an active role in society. To ensure this, there need to be offers specially tailored to their needs and capabilities. 
It can be very difficult for carers and relatives to make emotional contact with people with dementia, who often live in their own world. Motivated by this challenging situation, we developed Hörzeit -Radio wie früher ("Listening Time -radio like in the old days") and Herzton ("Heart Sound").</p><p>The major concern of our sound projects is to strengthen the relationship between people with dementia and their carers.</p><p>How do "Heart Sound" and "Listening Time" work?</p><p>Hörzeit -Radio wie früher is a worldwide unique radio programme especially designed for people with dementia. It is produced in the style of 1950s radio entertainment shows. Each programme focuses on a different subject, such as children, travel or professions -timeless topics for entering into conversation with people with dementia. Christine Schön and Frank Kaspar lead the listeners through the programme; they speak about their personal experiences -about their children, their most impressive journeys or their dream jobs. They present sound collages, feature reports, famous pieces of music, proverbs and rhymes. The presentation applies the communication techniques of validation: it conveys a genuine and deep appreciation of people with dementia, takes them seriously with their feelings and emotional states, and doesn't give too much information. Helga Rohra, a person with dementia, has a regular column in every issue. The programme for people with dementia is around 50 minutes long; a following programme for relatives and caregivers is around 20 minutes long. In this second programme Schön and Kaspar review books, films and games, present institutions and interview experts (http://www.hoerzeit-radio-wie-frueher.de). The non-profit sound-based web portal Herzton ("heart sound") helps to activate people with dementia individually using all acoustic means: e.g. self-sung songs, dialects, interviews with contemporary witnesses and easily accessible soundscapes. 
They are recorded and produced by sensitive journalists, sound artists and musicians. Relatives and caregivers can select pieces individually for the people with dementia entrusted to their care: for example, people who grew up in the countryside may enjoy the sounds of a farm, while for someone from Bavaria it might be a pleasure to listen to a story told in the Bavarian dialect. Herzton will be launched in late December 2017 (http://www.herzton.org).</p><p>Philology of electronic music -New methods, strategies, falsifications and historical cleansing: Stockhausen, Xenakis, KRAFTWERK Reinhold Friedl, Goldsmiths, University of London, GB It is astonishing that classical philological methods have not been adapted for electronic music so far. This lecture will discuss how this can be done and show that astonishing results can be achieved by applying these new methods. This will be demonstrated with three prominent musical examples: Karlheinz Stockhausen's "Konkrete Etude", which is actually not his piece; Iannis Xenakis's hiding of the real sound sources in his multitrack compositions; and KRAFTWERK's historical cleansing of their body of work. This research is part of a PhD project at Goldsmiths, University of London.</p><p>From 2006 to 2016, the artist duo Eva Paulitsch and Uta Weyrich collected mobile phone videos shot in public space by teenagers and young adults; with this material, they created a video archive that is unique in the world -the Mobile Video Archive. The skewed and unexpected fragments of the world from young adults' perspective -which simultaneously open up space for associations and betray a fascination for moving images -were Paulitsch and Weyrich's motives for speaking to young people in public places about their videos of daily life. The artists asked for the videos as a gift and began creating an archive with them. Their interest was in the mobile phone videos that were "resting" on the smartphones' memory cards, and not in consciously staged videos made for YouTube, for example.
Their collection campaign saved the videos from being deleted and declared them to be basic artistic material. The artistic transformations usually took place in cooperation with experts from the fields of music, computing or theatre. In walk-through video installations, they created spaces that visitors could approach from many different perspectives. In contrast to the video installations, in #CRESCENDO it is not the moving images that are in focus, but rather the respective audio tracks of specific videos. In this work, the artists explore only the mobile phone videos' audio tracks. They had the original sound of all the videos in their collection transcribed and thus expanded their "no-story video" archive to include a "no-story audio" collection. In the transcription process, it is possible to represent the spoken language as well as the context of the speaking situation beyond the content of what is said. Abbreviations, punctuation marks and special characters frame individual words. Meaning is constructed only when reading the text: these are acts of speech, dialogues -teenage slang. The translation from sound to manuscript has its own power, which already lies in the texts' unusual codification. In the materialization in script, language itself becomes an image -the ephemeral, often incomprehensible but perceivable sounds are paused; new spaces and new meanings develop. By decoupling the soundtrack from the film level, the fragmentary dialogues become singular and achieve an autonomous reality.</p><p>DaVinci Head project: The best price/performance binaural head</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Vytenis Gadliauskas, LT</head><p>There are many binaural microphones for consumers on the market. Some of them are dedicated to professionals; others look like they were created as toys. Though they all use slightly different approaches to recording Interaural Level Differences (ILDs) and Interaural Time Differences (ITDs), the goal is the same -an immersive spatial audio experience for the end user. A binaural head is one of the most accurate, but also one of the priciest, approaches. The DaVinci Head project started as a home-built binaural head with no intention of going worldwide. It was the final prototype, the test results and the creator's motivation that later set the goal of the project -the DaVinci Head has to be the best price/performance binaural head on the market.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Stachowiak, H. (1973). Allgemeine Modelltheorie [A general theory of models]. Vienna, Austria: Springer. 3D Audio: Sculpting with Sound -Report on an Artistic Research Project Sabine Breitsameter, Darmstadt UAS/Soundscape-&amp; Environmental Media Lab (SEM-Lab), DE</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>&lt;&lt;cresc&gt;&gt; Worte werden Raum ("Words Become Space") Eva Paulitsch, Coburg University of Applied Sciences and Arts, DE "Fast könnte man sagen, dass vom Tempo, der Geduld und Ausdauer des Verweilens beim Einzelnen, Wahrheit selber abhängt" (Theodor W. Adorno) "One could almost say that truth itself is dependent on an individual's tempo, patience and endurance in lingering." (Theodor W. Adorno)</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">Sabine</forename><surname>Breitsameter</surname></persName>
		</author>
		<title level="m">Hörgestalt und Denkfigur -Zur Geschichte und Perspektive von R. Murray Schafers Die Ordnung der Klänge. An introductory essay</title>
				<editor>
			<persName><forename type="first">R</forename><forename type="middle">Murray</forename><surname>Schafer</surname></persName>
		</editor>
		<meeting><address><addrLine>Mainz -Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Schott International</publisher>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
	<note>Die Ordnung der Klänge. Eine Kulturgeschichte des Hörens (published and translated by Sabine Breitsameter)</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">Murray</forename><surname>Schafer</surname></persName>
		</author>
		<title level="m">Voices of tyranny: temples of silence</title>
				<meeting><address><addrLine>Ontario, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>Arcana Editions</publisher>
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Sculpting with Sound: 3D Audio and Its Aesthetic Specificity -Selected Problems from an Artistic Research Project</title>
		<author>
			<persName><forename type="first">Sabine</forename><surname>Breitsameter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Sweep -Symposium on Sound Research at the University of Kassel</title>
				<meeting>Sweep -Symposium on Sound Research at the University of Kassel</meeting>
		<imprint>
			<date type="published" when="2016">May 18-19, 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">Murray</forename><surname>Schafer</surname></persName>
		</author>
		<title level="m">Die Ordnung der Klänge. Eine Kulturgeschichte des Hörens</title>
				<meeting><address><addrLine>Mainz -Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Schott International</publisher>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
	<note>published and translated by Sabine Breitsameter</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">MPEG-H Audio -The New Standard for Universal Spatial/3D Audio Coding</title>
		<author>
			<persName><forename type="first">Jürgen</forename><surname>Herre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Johannes</forename><surname>Hilpert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Achim</forename><surname>Kuntz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jan</forename><surname>Plogsties</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AES 137th Convention</title>
				<imprint>
			<date type="published" when="2014-10">October 2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Understanding immersive Audio: A historical and socio-cultural exploration of auditory displays</title>
		<author>
			<persName><forename type="first">Milena</forename><surname>Droumeva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of ICAD 05-Eleventh Meeting of the International Conference on Auditory Display</title>
				<meeting>ICAD 05-Eleventh Meeting of the International Conference on Auditory Display<address><addrLine>Limerick, Ireland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">July 6-9, 2005</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
