<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Rethinking Hearing Aids as Recommender Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Pasta</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><forename type="middle">Kai</forename><surname>Petersen</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Kasper</forename><forename type="middle">Juul</forename><surname>Jensen</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Jakob</forename><forename type="middle">Eg</forename><surname>Larsen</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">Technical University of Denmark</orgName>
								<address>
									<settlement>Kongens Lyngby</settlement>
									<country key="DK">Denmark</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="institution">Eriksholm Research Centre Snekkersten</orgName>
								<address>
									<country key="DK">Denmark</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="institution">Oticon A/S Smørum</orgName>
								<address>
									<country key="DK">Denmark</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="institution">Technical University of Denmark</orgName>
								<address>
									<settlement>Kongens Lyngby</settlement>
									<country key="DK">Denmark</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Rethinking Hearing Aids as Recommender Systems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D5C24DA4FF4A5C0277BE7C17A6B418F9</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T10:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Information systems → Personalization</term>
					<term>Information systems → Recommender systems</term>
					<term>Human-centered computing → Ambient intelligence</term>
					<term>Human-centered computing → User centered design</term>
					<term>Personalization</term>
					<term>Recommender systems</term>
					<term>Hearing healthcare</term>
					<term>Hearing aids</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The introduction of internet-connected hearing aids constitutes a paradigm shift in hearing healthcare, as the device can now potentially be complemented with smartphone apps that model the surrounding environment in order to recommend the optimal settings in a given context and situation. However, rethinking hearing aids as context-aware recommender systems poses some challenges. In this paper, we address them by gathering the preferences of seven participants in real-world listening environments. Exploring an audiological design space, the participants sequentially optimize three audiological parameters which are subsequently combined into a personalized device configuration. We blindly compare this configuration against settings personalized in a standard clinical workflow based on questions and pre-recorded sound samples, and we find that six out of seven participants prefer the device settings learned in real-world listening environments.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Despite decades of research and development, hearing aids still fail to restore normal auditory perception as they mainly address the lack of amplification due to loss of hair cells in the cochlea <ref type="bibr" target="#b15">[16]</ref>, rather than compensating for the resulting distortion of neural activity patterns in the brain <ref type="bibr" target="#b21">[22]</ref>. However, the full potential of hearing aids is rarely utilized as devices are frequently dispensed with a "one size fits all" medium setting, which does not reflect the varying needs of users in real-world listening scenarios. The recent introduction of internet-connected hearing aids represents a paradigm shift in hearing healthcare, as the device might now be complemented with smartphone apps that model the surrounding environment in order to recommend the optimal settings in a given context.</p><p>Whereas a traditional recommender system is built based on data records of the form &lt; user,item,rating &gt; and may apply collaborative filtering to suggest, for instance, new items based on items previously purchased and their features, recommending the optimal hearing aid settings in a given context remains highly complex. Rethinking hearing aids as recommender systems, different device configurations could be interpreted as items to be recommended to the user based on previously expressed preferences as well as preferences expressed by similar users in similar contexts. In this framework, information about the sound environment and user intents in different soundscapes could be treated as contextual information to be incorporated in the recommendation, building a context-aware recommender system based on data records of the form &lt; user,item,context,rating &gt; <ref type="bibr" target="#b0">[1]</ref>. However, addressing some challenges related to the four aforementioned data types is essential to make it possible to build an effective context-aware recommender system in the near future. In this paper, we discuss the main challenges posed when rethinking hearing aids as recommender systems and we address them in an experiment conducted with seven hearing aid users.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1">Rating</head><p>In order to be able to precisely and accurately recommend optimal device settings in every situation, gathering relevant user preferences (expressed as ratings) is essential. However, learning user preferences poses some challenges. Firstly, the device settings reflect a highly complex audiological design space involving multiple interacting parameters, such as beamforming, noise reduction, compression and frequency shaping of gain. It is important to explore the different parameters, in order not to disregard some parameters that might have relevant implications for the user listening experience, and to identify which parameters in an audiological design space <ref type="bibr" target="#b9">[10]</ref> define user preferences in a given context. Secondly, the preferred device settings depend on the human perception of the listening experience and it is therefore difficult to represent the perceptual objective using an equation solely calculated by computers <ref type="bibr" target="#b20">[21]</ref>. Having to rely on user feedback, it is important to limit the complexity of the interface, to make the interaction as effective as possible. Thirdly, capturing user preferences in multiple real-world situations not only guarantees that the situations are relevant and representative of what the user will experience in the future, but it also allows the user to test the settings with a precise and real intent in mind. However, this increases the complexity of the task, since the real-world environment is constantly changing and a user might explore the design space while performing other actions (e.g. conversing).</p><p>A traditional approach to find the best parameter combination (i.e. the best device configuration) is parameter tweaking, which consists in acting on a set of (either continuous or discrete) parameters to optimize them. Similarly to enhancing a photograph by manipulating sliders defining brightness, saturation and contrast <ref type="bibr" target="#b20">[21]</ref>, the hearing aid user could control her listening experience by tweaking the parameters that define the design space and find the optimal settings in different listening scenarios. However, this method can be tedious when the user is moving in a complex design space defined by parameters that interact among each other <ref type="bibr" target="#b12">[13]</ref>. One frequently used method to simplify the task of gathering preferences is pairwise comparison, which consists in making users select between two contrasting examples. A limitation of this approach is efficiency, given that a single choice between two examples provides limited information and many iterations are required to obtain the preferred configuration. Based on pairwise comparisons, an active learning algorithm may apply Bayesian optimization <ref type="bibr" target="#b1">[2]</ref> to automatically reduce the number of examples needed to capture the preferences <ref type="bibr" target="#b2">[3]</ref>, assuming that the samples selected for comparison capture all parameters across the domain. Alternatively, one might decompose the entire problem into a sequence of unique one-dimensional slider manipulation tasks. As exemplified by Koyama et al. <ref type="bibr" target="#b12">[13]</ref>, the color of photographs can be enhanced by proposing users a sequence of tasks. 
At every step, the method determines the one-dimensional slider that can most efficiently lead to the best parameter set in a multi-dimensional design space defined by brightness, contrast and saturation. Compared to pairwise comparison tasks, the single-slider method makes it possible to obtain richer information at every iteration and accelerates the convergence of the optimization.</p><p>Inspired by the latter approach we likewise formulate the learning of audiological preferences in a given listening scenario as an optimization problem:</p><formula xml:id="formula_0">z = arg max f x ∈X (x)</formula><p>where x defines parameters related to beamforming, attenuation, noise reduction, compression, and frequency shaping of gain in an audiological design space X <ref type="bibr" target="#b9">[10]</ref> and the global optimum of the function f : X → ℜ returns values defining the preferred hearing aid settings in a given listening scenario.</p><p>However, while it remains sensible to assume that individual adjustments would converge when crowdsourcing (i.e. asking crowd workers to complete the tasks independently) the task of enhancing an image <ref type="bibr" target="#b12">[13]</ref>, it is less likely that hearing impaired users would have similar preferences due to individual differences in their sensorineural processing <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b21">22]</ref>. Therefore, at least in the first phase, we need to ask the same user many times about her preferences, until her optimal configuration is found. Furthermore, in order to optimize the device in different listening scenarios, we need to ask the same user to move in the same design space multiple times. Altering the one-dimensional slider at every step of the evaluation procedure might make the task difficult, since the user would not know the trajectory defined by the new slider. We believe that decoupling the parameters and allowing users to manipulate one parameter at a time, moving in a one-dimensional space that is clearly understood, would allow them to better predict the effects of their actions and hence more effectively assess their preferences.</p></div>
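<div xmlns="http://www.tei-c.org/ns/1.0"><p>A minimal sketch of this decoupled, one-parameter-at-a-time search is given below, assuming four discrete levels per parameter; rate_setting() is a placeholder for the user's in-situ feedback, and all names are illustrative rather than part of the actual experimental software.</p><code>
# Sketch: sequential one-dimensional optimization over a decoupled
# audiological design space (an assumption-laden illustration).

PARAMETERS = {
    "noise_reduction": [1, 2, 3, 4],  # four discrete levels per parameter
    "brightness": [1, 2, 3, 4],
    "soft_gain": [1, 2, 3, 4],
}

def rate_setting(setting: dict) -> float:
    """Placeholder for the user's in-situ preference feedback."""
    raise NotImplementedError

def optimize_sequentially(default_level: int = 2) -> dict:
    # Start from a neutral default along every parameter.
    setting = {name: default_level for name in PARAMETERS}
    # Optimize one parameter at a time, keeping the others fixed,
    # so the user always moves along a single, clearly understood axis.
    for name, levels in PARAMETERS.items():
        ratings = {level: rate_setting(dict(setting, **{name: level}))
                   for level in levels}
        setting[name] = max(ratings, key=ratings.get)  # keep the preferred level
    return setting
</code></div>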
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2">Item</head><p>In order to enhance the hearing aid user experience, it is important to appropriately select the parameters that define the hearing aid configurations evaluated by users. Indeed, not only should the parameters have a relevant impact on the user listening experience, but the different levels of the parameters should also be discernible by untrained users. Three parameters have been demonstrated to be particularly important for the experience of hearing impaired users:</p><p>(1) Noise reduction and directionality. Noise reduction reduces the effort associated with speech recognition, as indicated by pupil dilation measurements, an index of processing effort <ref type="bibr" target="#b22">[23]</ref>. By allowing speedier word identification, noise reduction also facilitates cognitive processing and thereby frees up working memory capacity in the brain <ref type="bibr" target="#b17">[18]</ref>. Moreover, fast-acting noise reduction proved to increase recognition performances and reduce peak pupil dilation compared to slow-acting noise reduction <ref type="bibr" target="#b22">[23]</ref>. Given that the ability of users to understand speech in noisy environments may vary by up to 15 dB <ref type="bibr" target="#b3">[4]</ref>, it is essential to be able to individualize the threshold levels for the activation of noise reduction. (2) Brightness. While a lot of research has been focused on adapting the frequency-specific amplification which compensates for a hearing loss based on optimized rationales like VAC+ <ref type="bibr" target="#b4">[5]</ref>, rationales still reflect average preferences across a population rather than individual ones. Several studies indicate that some users may benefit from increasing high-frequency gain in order to enhance speech intelligibility <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>. (3) Soft gain. The perception of soft sounds varies largely among individuals. Hearing aid users with similar hearing losses can perceive sounds close to the hearing threshold as being soft or relatively loud. Thus, proposing a medium setting for amplification of soft sounds may seem right when averaging across a population, but would not be representative of the large differences in loudness perception found among individual users <ref type="bibr" target="#b16">[17]</ref>. For this reason, modern hearing aids provide the opportunity to fine-tune the soft gain by acting on a compression threshold trimmer <ref type="bibr" target="#b13">[14]</ref>.</p><p>Taking a naive approach, treating each parameter independently, the preferences could subsequently be summed up in a general hearing aid setting, by simply applying the most frequently preferred values along each audiological parameter.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.3">User</head><p>Hearing aids are often fitted based on a pure tone audiometry, a test used to identify the hearing threshold of users. However, as mentioned above, users perceive the sounds differently and might benefit from a fully personalized hearing aid configuration. For this reason, it is essential to fully understand what drives user preferences and which is the relative importance of users' characteristics and context. It is interesting to analyse whether users exhibit similar preferences when optimizing the hearing aids in several real-world environments and whether they result into similar configurations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.4">Context</head><p>Users often prefer to switch between highly contrasting settings depending on the context <ref type="bibr" target="#b10">[11]</ref>. It has been shown that a context-aware hearing aid needs to combine different contextual parameters, such as location, motion, and soundscape information inferred by auditory measures (e.g. sound pressure level, noise floor, modulation envelope, modulation index, signal-to-noise ratio) <ref type="bibr" target="#b11">[12]</ref>. However, these contextual parameters might fail to capture the audiological intent of the user, which depends not only on the characteristics of the sound environment but also on the situation the user is in. For this reason, in addition to retrieving the characteristics of the sound environment and the preferred device settings, it is also important to capture the contextual intents of users in the varying listening scenarios. Contextual information, in this exploratory phase, can be explicitly obtained by directly asking the user to define the situation she is in. However, in the future, to enable an automatic adaptation to the needs of users in real-world environments, relevant contextual information will need to be inferred using a predictive model that classifies the surrounding environment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">METHOD 2.1 Participants</head><p>Seven participants (6 men and 1 woman), from a screened population provided by Eriksholm Research Centre, participated in the study. Their average age was 58.3 years (std. 12 years). Five of them were working, while two were retired. They were suffering from a binaural hearing loss ranging from mild to moderately severe, as classified by the American Speech-Language-Hearing Association <ref type="bibr" target="#b5">[6]</ref>. The average hearing threshold levels are shown in Figure <ref type="figure" target="#fig_0">1</ref>. They were all experienced hearing aid users, ranging from 5 to 20 years of experience with hearing aids. All test subjects received information about the study and signed an informed consent before the beginning of the experiment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Apparatus</head><p>The participants were fitted according to their individual hearing loss with a pair of Oticon Opn S 1 miniRITE <ref type="bibr" target="#b7">[8]</ref>. All had iPhones with iOS 12 installed and additionally downloaded a custom smartphone app connected to the hearing aids via Bluetooth. The app enabled collecting data about the audiological preferences and the corresponding context. e. the sound level below which a person's ear is unable to detect any sound <ref type="bibr" target="#b6">[7]</ref>) levels for the 7 participants. The participants had a hearing loss ranging from mild to moderately severe. Error bars indicate ±1 standard deviation of the hearing thresholds.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Procedure</head><p>The experiment was divided into four weeks. As shown in Table <ref type="table" target="#tab_0">1</ref>, the first three weeks were devoted to optimizing the three audiological parameters, one at a time. Each of the first three weeks, the participants were fitted with four levels of the respective parameter, while the other two parameters were kept neutral at a default level. For instance, in week 1, each participant could select between four levels of noise reduction and directionality. The participants were instructed to compare, using a smartphone app, the four levels of the parameter in different situations during their daily life and to report their preference. To ensure that the participants would evaluate the different levels in relevant listening situations and when motivated to optimize their device, they were instructed to perform the task on a voluntary basis. Moreover, every time they reported their preference, the participants were asked to specify:</p><p>• The environment they were in (e.g. office, restaurant, public space outdoor). Different environments are characterised by different soundscapes and pose disparate challenges for hearing aid users. • Their motion state (e.g. stationary, walking, driving). Motion tells more about the activity conducted by the person, but may also mark the transition to a different activity or environment <ref type="bibr" target="#b8">[9]</ref>. • Their audiological intent (e.g. conversation, work meeting, watching TV, listening to music, ignoring speech). Complementing the contextual information by gathering the intent of the participants in the specific situation might provide a • The usefulness of the parameter in the specific situation (on a scale ranging from 1 to 5). This evaluation is important not only to understand the relative importance of each preference, but also to assess the perceived benefit of the parameter in diverse situations.</p><p>The fourth week each participant compared two different device configurations in a blind test:</p><p>• An individually personalized configuration combining the most frequently selected preferences of the three audiological parameters gathered in real-world listening environments during the previous three weeks. • A configuration personalized in a standard clinical workflow based on questions and on pairwise comparisons of pre-recorded sound samples capturing different listening scenarios including, for instance, speech with varying levels of background noise.</p><p>The participants were instructed to compare the two personalized configurations in different listening situations throughout the day and report their preference, while also labeling the context. At the end of the week, the participants were asked to select the configuration they preferred.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">RESULTS</head><p>During the four weeks of test, the participants actively interacted with their devices, changing the hearing aid settings, overall, 4328 times (i.e. the level of the parameter during the first three weeks or the final configuration during the last week) and submitting 406 preferences. On average, the participants tried the different hearing aid settings 11 times before submitting a preference. Although one parameter affects the perception of the others, isolating them allows to analyse their perceived impact on the listening experience. As illustrated in Figure <ref type="figure" target="#fig_1">2</ref>, the brightness parameter was on average rated higher in perceived usefulness. This result is consistent among the seven participants. Conversely, the noise reduction and directionality parameter resulted to have the lowest perceived usefulness for five participants out of seven. The soft gain parameter resulted to have an average perceived usefulness between those of the other two parameters.</p><p>Recording, together with each preference, the perceived usefulness of the parameter in the specific situation also allows to understand how much each parameter contributes to the overall setting of the hearing aid. Figures <ref type="figure" target="#fig_3">3, 4</ref>, 5 display the preferences of test participants for different levels of noise reduction and directionality, brightness, and soft gain, respectively. Only the preferences Brightness is perceived to be the most useful parameter.</p><p>Noise reduction and directionality tends to be perceived as the least useful parameter.</p><p>recorded in situations where the usefulness of the parameter is rated higher than two out of five are considered. Firstly, the results indicate that the participants have widely different audiological preferences, rather than converging towards a shared optimal value. As the participants are ordered by age (A being the youngest), there seem, nevertheless, to be some common tendencies among younger or older participants across all parameters.</p><p>Secondly, most participants are not searching for a single optimum but select different values within each parameter. When adjusting the perceived brightness (Figure <ref type="figure" target="#fig_3">4</ref>), six participants out of seven prefer, most of the time, the two highest levels along this parameter. Thirdly, the participants frequently prefer highly contrasting values within each parameter, depending on the context. Figure <ref type="figure">3</ref>: Preferences for the 4 levels of noise reduction and directionality, which correspond (from level 1 to level 4) to increasing directionality settings, increasing levels of noise reduction in simple and complex environments and earlier activation of noise reduction <ref type="bibr" target="#b14">[15]</ref>. The participants exhibited different noise reduction and directionality preferences and five of them preferred more than one level in different situations. Figure <ref type="figure">5</ref>: Preferences for the 4 levels of soft gain, which correspond (from level 1 to level 4) to increasing amplification of soft sounds, thus increasing dynamic range compression <ref type="bibr" target="#b13">[14]</ref>. 
The participants exhibited different soft gain preferences and five of them preferred more than one level in different situations.</p><p>In order to combine the sequentially learned preferences, we summed up the most frequently chosen values along each parameter into a single hearing aid configuration. For each participant, we subsequently compared it against individually personalized settings configured in a standard clinical workflow based on questions and pre-recorded sound samples. After the fourth week, six out of seven participants responded they appreciated having more than one general hearing aid setting, as they used both configurations in different situations. They also wished to keep both personalized configurations after the end of the test. However, in a blind comparison of the two configurations, six out of seven participants preferred the hearing aid settings personalized by sequentially optimizing parameters in real-world listening scenarios.</p></div>
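<div xmlns="http://www.tei-c.org/ns/1.0"><p>The per-level counts behind Figures 3-5 can be reproduced from such log entries by discarding reports where the usefulness was rated two or below; a short sketch, continuing the illustrative record format introduced above, is given here.</p><code>
from collections import Counter

def tally_levels(entries, parameter):
    """Count chosen levels for one parameter, keeping only reports where
    the perceived usefulness was rated higher than two out of five."""
    return Counter(entry["chosen_level"]
                   for entry in entries
                   if entry["parameter"] == parameter
                   and entry["usefulness"] > 2)
</code></div>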
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">DISCUSSION</head><p>Due to the aging population, the number of people affected by hearing loss will double by 2050 <ref type="bibr" target="#b19">[20]</ref> and this will have large implications for hearing healthcare. Rethinking hearing aids as recommender systems might enable the implementation of devices that automatically learn the preferred settings by actively involving hearing impaired users in the loop. Not only would this enhance the experience of current hearing aid users, but it could also help overcome the growing lack of clinical resources. Personalizing hearing aids by integrating audiological domain-specific recommendations might even make it feasible to provide scalable solutions for the 80% of hearing impaired users who currently have no access to hearing healthcare worldwide <ref type="bibr" target="#b18">[19]</ref>. The accuracy of the recommendation primarily depends on the ability of the system to gather user preferences, while the user explores a highly complex design space. In this study, we proposed an approach to effectively optimize the device settings by decoupling three audiological parameters and allowing the participants to manipulate one parameter at a time, comparing four discrete levels. The fact that the participants preferred the hearing aid configuration personalized in real-world environments suggests that the proposed optimization approach manages to capture the main individual parameter preferences.</p><p>Looking into the individual preferences learned when sequentially adjusting the three parameters, several aspects stand out. The results suggest that the brightness parameter has the highest perceived usefulness. This could be due to the fact that enhancing the gain of high frequencies may increase the contrasts between consonants and as a result improve speech intelligibility. Likewise, it may amplify spatial cues reflected from the walls and ceiling, improving the localization of sounds and thereby facilitating the separation of voices. The participants seemed to appreciate a brighter sound when listening to speech or when paying attention to specific sources in a quiet environment. Despite the advances in technology that reduce the risk of audio feedback and allow the new instruments to be fitted to target and deliver the optimal gain <ref type="bibr" target="#b7">[8]</ref>, in some situations most of the participants seemed to benefit from even more brightness. Conversely, users might prefer a more round sound in noisy situations or when they want to detach themselves.</p><p>Adjusting the noise reduction and directionality parameter is perceived as having the lowest usefulness. Essentially, this parameter defines how ambient sounds coming from the sides and from behind are attenuated, while still amplifying signals with speech characteristics. Although the benefits of directionality and noise reduction are proven, our results indicate that users find it more difficult to differentiate the levels of this parameter if the ambient noise level is not sufficiently challenging. The four levels of the parameter mainly affect the threshold for when the device should begin to attenuate ambient sounds. However, these elements of signal processing are partly triggered automatically based on how noisy the environment is. Therefore, in some situations, changing the attenuation thresholds (i.e. the parameter levels) might not make a difference. Thus, users may feel less empowered to adjust this parameter. 
On the other hand, the data also shows that participants actively select the lowest level of the parameter (level 1), which provides an immersive omnidirectional experience without attenuation of ambient sounds in simple listening scenarios. This suggests that, in some contexts, users express a need for personalizing the directionality settings and the activation thresholds of noise reduction. Furthermore, previous studies have shown that the perception of soft sounds varies largely among individuals. Our results not only confirm that users have widely different audiological preferences, but also suggest they would benefit from a personalized dynamic adaptation of soft gain dependent on the context.</p><p>Focusing on the optimization problem in the audiological design space, some indications can be inferred. The large differences among the participants suggest that, in a first phase, users' interaction is essential to gather individual preferences and thereby reach the optimum configuration for each single user. Simplifying the optimization task and offering a clear explanation of the one-dimensional slider made the process more transparent and increased users' empowerment. Once a recommender system is in place, this component might also prove useful in enhancing users' trust in the recommendations provided. Moreover, performing the optimization task in real-world environments ensured an accurate assessment and communication of users' preferences. In the short term, user preferences collected with this approach could flow into the standard clinical workflow and help hearing care professionals to fine-tune the hearing aids. However, a single static configuration, although personalized, might not fully satisfy the user. Our results indicate that such recommender systems should not simply model users as a sole set of optimized audiological parameters, because the preferred configuration varies depending on the context. It is therefore essential for these models to likewise classify the sound environment and motion state in order to infer the intents of the user. Being fully aware of the intent, by automatically labeling it, would add further value to the collected preferences and would allow to ask for user feedback in specific situations. That would make it feasible to verify hypotheses based on previous data, and progressively optimize several device configurations for different real-world listening scenarios. Once some configurations are learned, the hearing aids could automatically recommend them in specific situations and, by monitoring users' behavior, continuously calibrate to the preference of the user.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">CONCLUSION</head><p>Internet-connected hearing aids open the opportunity for truly personalized hearing aids, which adapt to the needs of users in realworld listening scenarios. This study addressed the main challenges posed when rethinking hearing aids as recommender systems. It investigated how to effectively optimize the device settings by gathering user preferences in real-world environments. A complex audiological space was simplified by decoupling three audiological parameters and allowing the participants to manipulate one parameter at a time, comparing four discrete levels. The participants sequentially optimized the three audiological parameters, which were subsequently combined into a personalized device configuration. This configuration was blindly compared against a configuration personalized in a standard clinical workflow based on questions and pre-recorded sound samples, and six out of seven participants preferred the device settings learned in real-world listening environments. Thus, the approach seemed to effectively gather the main individual audiological preferences. The parameters resulted to have a different perceived usefulness, differently contributing to the listening experience of hearing aid users. The seven participants exhibited widely different audiological preferences. Furthermore, our results indicate that hearing aid users do not simply explore the audiological design space in search of a global optimum. Instead, most of them select multiple highly contrasting values along each parameter, depending on the context.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure1: Average hearing threshold (i.e. the sound level below which a person's ear is unable to detect any sound<ref type="bibr" target="#b6">[7]</ref>) levels for the 7 participants. The participants had a hearing loss ranging from mild to moderately severe. Error bars indicate ±1 standard deviation of the hearing thresholds.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Average perceived usefulness of three parameters (noise reduction and directionality, brightness, soft sounds).Brightness is perceived to be the most useful parameter. Noise reduction and directionality tends to be perceived as the least useful parameter.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure4: Preferences for the 4 levels of brightness, which correspond (from level 1 to level 4) to increasing amplification of high-frequency sounds. The participants exhibited different brightness preferences and six of them preferred more than one level in different situations.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Study timeline</figDesc><table><row><cell>Week Activity</cell></row><row><cell>W. 1 Optimization of noise reduction and directionality</cell></row><row><cell>W. 2 Optimization of brightness (amplification of high-frequency</cell></row><row><cell>sounds)</cell></row><row><cell>W. 3 Optimization of soft gain (amplification of soft sounds)</cell></row><row><cell>W. 4 Final test of preference</cell></row><row><cell>deeper insight into how the different audiological parameters</cell></row><row><cell>help them in coping with different sounds.</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>We would like to thank Oticon A/S, Eriksholm Research Centre, and Research Clinician Rikke Rossing for providing hardware, access to test subjects, clinical approval and clinical resources.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Context-Aware Recommender Systems</title>
		<author>
			<persName><forename type="first">Gediminas</forename><surname>Adomavicius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Tuzhilin</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-0-387-85820-3_7</idno>
		<ptr target="https://link.springer.com/chapter/10.1007/978-0-387-85820-3_7" />
	</analytic>
	<monogr>
		<title level="m">Recommender Systems Handbook</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Ricci</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Rokach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Shapira</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Kantor</surname></persName>
		</editor>
		<meeting><address><addrLine>Boston, MA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="217" to="253" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning</title>
		<author>
			<persName><forename type="first">Eric</forename><surname>Brochu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vlad</forename><forename type="middle">M</forename><surname>Cora</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nando</forename><surname>De Freitas</surname></persName>
		</author>
		<idno>CoRR abs/1012.2599</idno>
		<imprint>
			<date type="published" when="2010">2010. 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Active Preference Learning with Discrete Choice Data</title>
		<author>
			<persName><forename type="first">Eric</forename><surname>Brochu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nando</forename><surname>De Freitas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Abhijeet</forename><surname>Ghosh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 20th International Conference on Neural Information Processing Systems (NIPS &apos;07)</title>
				<meeting>the 20th International Conference on Neural Information Processing Systems (NIPS &apos;07)</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page" from="409" to="416" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">New Thinking on Hearing in Noise: A Generalized Articulation Index</title>
		<author>
			<persName><forename type="first">Mead</forename><forename type="middle">C</forename><surname>Killion</surname></persName>
		</author>
		<idno type="DOI">10.1055/s-2002-24976</idno>
		<ptr target="https://doi.org/10.1055/s-2002-24976" />
	</analytic>
	<monogr>
		<title level="j">Seminars in Hearing</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="57" to="076" />
			<date type="published" when="2002-01">2002. January 2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Client Target and Real-ear Measurements</title>
		<author>
			<persName><forename type="first">Susanna</forename><forename type="middle">L</forename><surname>Callaway</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andreea</forename><surname>Micula</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<pubPlace>Denmark</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Oticon A/S, Smørum</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Uses and Abuses of Hearing Loss Classification</title>
		<author>
			<persName><forename type="first">John</forename><surname>Clark</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ASHA: a journal of the American Speech-Language-Hearing Association</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="493" to="500" />
			<date type="published" when="1981-08">1981. August 1981</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Glossary: Hearing threshold</title>
		<ptr target="https://ec.europa.eu/health/scientific_committees/opinions_layman/en/hearing-loss-personal-music-player-mp3/glossary/ghi/hearing-threshold.htm" />
		<imprint>
			<date type="published" when="2019-08-16">2019. August 16, 2019</date>
			<publisher>European Commission</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Oticon Opn S Clinical Evidence</title>
		<author>
			<persName><forename type="first">Josefine</forename><forename type="middle">J</forename><surname>Jensen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
			<pubPlace>Denmark</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Oticon A/S, Smørum</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Inferring User Intents from Motion in Hearing Healthcare</title>
		<author>
			<persName><forename type="first">Benjamin</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maciej</forename><forename type="middle">J</forename><surname>Korzepa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">K</forename><surname>Petersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Niels</forename><forename type="middle">H</forename><surname>Pontoppidan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jakob</forename><forename type="middle">E</forename><surname>Larsen</surname></persName>
		</author>
		<idno type="DOI">10.1145/3267305.3267683</idno>
		<ptr target="https://doi.org/10.1145/3267305.3267683" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium</title>
				<meeting>the 2018 ACM International Joint Conference and 2018 International Symposium</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Modelling User Utterances as Intents in an Audiological Design Space</title>
		<author>
			<persName><forename type="first">Benjamin</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">K</forename><surname>Petersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Niels</forename><forename type="middle">H</forename><surname>Pontoppidan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jakob</forename><forename type="middle">E</forename><surname>Larsen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshop on Computational Modeling in Human-Computer Interaction</title>
				<meeting><address><addrLine>CHI</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Personalizing the Fitting of Hearing Aids by Learning Contextual Preferences From Internet of Things Data</title>
		<author>
			<persName><forename type="first">Benjamin</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">K</forename><surname>Petersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maciej</forename><forename type="middle">J</forename><surname>Korzepa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jan</forename><surname>Larsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Niels</forename><forename type="middle">H</forename><surname>Pontoppidan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jakob</forename><forename type="middle">E</forename><surname>Larsen</surname></persName>
		</author>
		<idno type="DOI">10.3390/computers7010001</idno>
		<ptr target="https://doi.org/10.3390/computers7010001" />
	</analytic>
	<monogr>
		<title level="j">Computers</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page">1</biblScope>
			<date type="published" when="2018">2018. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Learning Preferences and Soundscapes for Augmented Hearing</title>
		<author>
			<persName><forename type="first">Maciej</forename><forename type="middle">J</forename><surname>Korzepa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Benjamin</forename><surname>Johansen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">K</forename><surname>Petersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jan</forename><surname>Larsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Niels</forename><forename type="middle">H</forename><surname>Pontoppidan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jakob</forename><forename type="middle">E</forename><surname>Larsen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IUI Workshops</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Sequential Line Search for Efficient Visual Design Optimization by Crowds</title>
		<author>
			<persName><forename type="first">Yuki</forename><surname>Koyama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Issei</forename><surname>Sato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daisuke</forename><surname>Sakamoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Takeo</forename><surname>Igarashi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3072959.3073598</idno>
		<ptr target="https://doi.org/10.1145/3072959.3073598" />
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Graph</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">11</biblScope>
			<date type="published" when="2017-07">2017. July 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Amplifying Soft Sounds -a Personal Matter</title>
		<author>
			<persName><forename type="first">Nicolas</forename><surname>Le Goff</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<pubPlace>Denmark</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Oticon A/S, Smørum</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">An Introduction to OpenSound Navigator TM</title>
		<author>
			<persName><forename type="first">Nicolas</forename><forename type="middle">Le</forename><surname>Goff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jesper</forename><surname>Jensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">S</forename><surname>Pedersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Susanna</forename><forename type="middle">L</forename><surname>Callaway</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<pubPlace>Denmark</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Oticon A/S, Smørum</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Why Do Hearing Aids Fail to Restore Normal Auditory Perception</title>
		<author>
			<persName><forename type="first">Nicholas</forename><forename type="middle">A</forename><surname>Lesica</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.tins.2018.01.008</idno>
		<ptr target="https://doi.org/10.1016/j.tins.2018.01.008" />
	</analytic>
	<monogr>
		<title level="j">Trends in Neurosciences</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="174" to="185" />
			<date type="published" when="2018-04">2018. April 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Loudness Growth in Individual Listeners with Hearing Losses: A Review</title>
		<author>
			<persName><forename type="first">Jeremy</forename><surname>Marozeau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mary</forename><surname>Florentine</surname></persName>
		</author>
		<idno type="DOI">10.1121/1.2761924</idno>
		<ptr target="https://doi.org/10.1121/1.2761924" />
	</analytic>
	<monogr>
		<title level="j">The Journal of the Acoustical Society of America</title>
		<imprint>
			<biblScope unit="volume">122</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="L81" to="L87" />
			<date type="published" when="2007">2007. 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Effects of Noise and Working Memory Capacity on Memory Processing of Speech for Hearing-aid Users</title>
		<author>
			<persName><forename type="first">Elaine</forename><forename type="middle">H N</forename><surname>Ng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mary</forename><surname>Rudner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Lunner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Syskind Pedersen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jerker</forename><surname>Rönnberg</surname></persName>
		</author>
		<idno type="DOI">10.3109/14992027.2013.776181</idno>
		<idno>.776181</idno>
		<ptr target="https://doi.org/10.3109/14992027.2013" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Audiology</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="433" to="441" />
			<date type="published" when="2013">2013. 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m">Multi-Country Assessment of National Capacity to Provide Hearing Care</title>
				<meeting><address><addrLine>Geneva, Switzerland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
		<respStmt>
			<orgName>World Health Organization</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Deafness and Hearing Loss</title>
		<ptr target="https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss" />
		<imprint>
			<date type="published" when="2019-06-30">2019. June 30, 2019</date>
		</imprint>
		<respStmt>
			<orgName>World Health Organization</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">Antti</forename><surname>Oulasvirta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Per</forename><forename type="middle">O</forename><surname>Kristensson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaojun</forename><surname>Bi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrew</forename><surname>Howes</surname></persName>
		</author>
		<title level="m">Computational Interaction</title>
				<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">The Neural Consequences of Age Related Hearing Loss</title>
		<author>
			<persName><forename type="first">Jonathan</forename><forename type="middle">E</forename><surname>Peele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Arthur</forename><surname>Wingfield</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.tins.2016.05.001</idno>
		<ptr target="https://doi.org/10.1016/j.tins.2016.05.001" />
	</analytic>
	<monogr>
		<title level="j">Trends in Neurosciences</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="486" to="497" />
			<date type="published" when="2016">2016. 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Impact of Noise and Noise Reduction on Processing Effort</title>
		<author>
			<persName><forename type="first">Dorothea</forename><surname>Wendt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Renskje</forename><forename type="middle">K</forename><surname>Hietkamp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Lunner</surname></persName>
		</author>
		<idno type="DOI">10.1097/aud.0000000000000454</idno>
		<ptr target="https://doi.org/10.1097/aud.0000000000000454" />
	</analytic>
	<monogr>
		<title level="j">Ear and Hearing</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="690" to="700" />
			<date type="published" when="2017">2017. 2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
