<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Copenhagen, Denmark, September</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Rethinking Hearing Aids as Recommender Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Kasper Juul Jensen Oticon A/S Smørum</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Denmark kjen@oticon.com</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Alessandro Pasta Technical University of Denmark Kongens</institution>
          <addr-line>Lyngby</addr-line>
          ,
          <country country="DK">Denmark</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Jakob Eg Larsen Technical University of Denmark Kongens</institution>
          <addr-line>Lyngby</addr-line>
          ,
          <country country="DK">Denmark</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Michael Kai Petersen Eriksholm Research Centre Snekkersten</institution>
          ,
          <country country="DK">Denmark</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>20</volume>
      <issue>2019</issue>
      <fpage>11</fpage>
      <lpage>17</lpage>
      <abstract>
        <p>The introduction of internet-connected hearing aids constitutes a paradigm shift in hearing healthcare, as the device can now potentially be complemented with smartphone apps that model the surrounding environment in order to recommend the optimal settings in a given context and situation. However, rethinking hearing aids as context-aware recommender systems poses some challenges. In this paper, we address them by gathering the preferences of seven participants in real-world listening environments. Exploring an audiological design space, the participants sequentially optimize three audiological parameters which are subsequently combined into a personalized device configuration. We blindly compare this configuration against settings personalized in a standard clinical workflow based on questions and pre-recorded sound samples, and we find that six out of seven participants prefer the device settings learned in real-world listening environments.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>CCS CONCEPTS</title>
      <p>• Information systems → Personalization; Recommender
systems; • Human-centered computing → Ambient intelligence;
User centered design.
Personalization, recommender systems, hearing healthcare, hearing
aids</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        Despite decades of research and development, hearing aids still fail
to restore normal auditory perception as they mainly address the
lack of amplification due to loss of hair cells in the cochlea [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ],
rather than compensating for the resulting distortion of neural
activity patterns in the brain [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. However, the full potential of
hearing aids is rarely utilized as devices are frequently dispensed
with a “one size fits all” medium setting, which does not reflect
the varying needs of users in real-world listening scenarios. The
recent introduction of internet-connected hearing aids represents
a paradigm shift in hearing healthcare, as the device might now be
complemented with smartphone apps that model the surrounding
environment in order to recommend the optimal settings in a given
context.
      </p>
      <p>
        Whereas a traditional recommender system is built based on data
records of the form &lt; user,item,rating &gt; and may apply collaborative
ifltering to suggest, for instance, new items based on items
previously purchased and their features, recommending the optimal
hearing aid settings in a given context remains highly complex.
Rethinking hearing aids as recommender systems, diferent device
configurations could be interpreted as items to be recommended
to the user based on previously expressed preferences as well as
preferences expressed by similar users in similar contexts. In this
framework, information about the sound environment and user
intents in diferent soundscapes could be treated as contextual
information to be incorporated in the recommendation, building a
context-aware recommender system based on data records of the
form &lt; user,item,context,rating &gt; [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, addressing some
challenges related to the four aforementioned data types is
essential to make it possible to build an efective context-aware
recommender system in the near future. In this paper, we discuss the main
challenges posed when rethinking hearing aids as recommender
systems and we address them in an experiment conducted with
seven hearing aid users.
1.1
      </p>
    </sec>
    <sec id="sec-3">
      <title>Rating</title>
      <p>
        In order to be able to precisely and accurately recommend optimal
device settings in every situation, gathering relevant user
preferences (expressed as ratings) is essential. However, learning user
preferences poses some challenges. Firstly, the device settings
relfect a highly complex audiological design space involving multiple
interacting parameters, such as beamforming, noise reduction,
compression and frequency shaping of gain. It is important to explore
the diferent parameters, in order not to disregard some parameters
that might have relevant implications for the user listening
experience, and to identify which parameters in an audiological design
space [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] define user preferences in a given context. Secondly, the
preferred device settings depend on the human perception of the
listening experience and it is therefore dificult to represent the
perceptual objective using an equation solely calculated by computers
[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]. Having to rely on user feedback, it is important to limit the
complexity of the interface, to make the interaction as efective as
possible. Thirdly, capturing user preferences in multiple real-world
situations not only guarantees that the situations are relevant and
representative of what the user will experience in the future, but
it also allows the user to test the settings with a precise and real
intent in mind. However, this increases the complexity of the task,
since the real-world environment is constantly changing and a user
might explore the design space while performing other actions (e.g.
conversing).
      </p>
      <p>
        A traditional approach to find the best parameter combination
(i.e. the best device configuration) is parameter tweaking, which
consists in acting on a set of (either continuous or discrete)
parameters to optimize them. Similarly to enhancing a photograph by
manipulating sliders defining brightness, saturation and contrast
[
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], the hearing aid user could control her listening experience
by tweaking the parameters that define the design space and find
the optimal settings in diferent listening scenarios. However, this
method can be tedious when the user is moving in a complex
design space defined by parameters that interact among each other
[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. One frequently used method to simplify the task of
gathering preferences is pairwise comparison, which consists in making
users select between two contrasting examples. A limitation of
this approach is eficiency, given that a single choice between two
examples provides limited information and many iterations are
required to obtain the preferred configuration. Based on pairwise
comparisons, an active learning algorithm may apply Bayesian
optimization [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] to automatically reduce the number of examples
needed to capture the preferences [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], assuming that the samples
selected for comparison capture all parameters across the domain.
Alternatively, one might decompose the entire problem into a
sequence of unique one-dimensional slider manipulation tasks. As
exemplified by Koyama et al. [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], the color of photographs can be
enhanced by proposing users a sequence of tasks. At every step, the
method determines the one-dimensional slider that can most
eficiently lead to the best parameter set in a multi-dimensional design
space defined by brightness, contrast and saturation. Compared to
pairwise comparison tasks, the single-slider method makes it
possible to obtain richer information at every iteration and accelerates
the convergence of the optimization.
      </p>
      <p>Inspired by the latter approach we likewise formulate the
learning of audiological preferences in a given listening scenario as an
optimization problem:
z = arg max f (x )</p>
      <p>
        x ∈X
where x defines parameters related to beamforming, attenuation,
noise reduction, compression, and frequency shaping of gain in an
audiological design space X [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and the global optimum of the
function f : X → ℜ returns values defining the preferred hearing
aid settings in a given listening scenario.
      </p>
      <p>
        However, while it remains sensible to assume that individual
adjustments would converge when crowdsourcing (i.e. asking crowd
workers to complete the tasks independently) the task of enhancing
an image [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], it is less likely that hearing impaired users would
have similar preferences due to individual diferences in their
sensorineural processing [
        <xref ref-type="bibr" rid="ref16 ref22">16, 22</xref>
        ]. Therefore, at least in the first phase,
we need to ask the same user many times about her preferences,
until her optimal configuration is found. Furthermore, in order to
optimize the device in diferent listening scenarios, we need to ask
the same user to move in the same design space multiple times.
Altering the one-dimensional slider at every step of the evaluation
procedure might make the task dificult, since the user would not
know the trajectory defined by the new slider. We believe that
decoupling the parameters and allowing users to manipulate one
parameter at a time, moving in a one-dimensional space that is
clearly understood, would allow them to better predict the efects
of their actions and hence more efectively assess their preferences.
1.2
      </p>
    </sec>
    <sec id="sec-4">
      <title>Item</title>
      <p>
        In order to enhance the hearing aid user experience, it is important
to appropriately select the parameters that define the hearing aid
configurations evaluated by users. Indeed, not only should the
parameters have a relevant impact on the user listening experience,
but the diferent levels of the parameters should also be discernible
by untrained users. Three parameters have been demonstrated to
be particularly important for the experience of hearing impaired
users:
(1) Noise reduction and directionality. Noise reduction reduces
the efort associated with speech recognition, as indicated by
pupil dilation measurements, an index of processing efort
[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. By allowing speedier word identification, noise
reduction also facilitates cognitive processing and thereby frees
up working memory capacity in the brain [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Moreover,
fast-acting noise reduction proved to increase recognition
performances and reduce peak pupil dilation compared to
slow-acting noise reduction [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. Given that the ability of
users to understand speech in noisy environments may vary
by up to 15 dB [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], it is essential to be able to individualize
the threshold levels for the activation of noise reduction.
(2) Brightness. While a lot of research has been focused on
adapting the frequency-specific amplification which compensates
for a hearing loss based on optimized rationales like VAC+
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], rationales still reflect average preferences across a
population rather than individual ones. Several studies indicate
that some users may benefit from increasing high-frequency
gain in order to enhance speech intelligibility [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ].
(3) Soft gain. The perception of soft sounds varies largely among
individuals. Hearing aid users with similar hearing losses
can perceive sounds close to the hearing threshold as being
soft or relatively loud. Thus, proposing a medium setting for
amplification of soft sounds may seem right when
averaging across a population, but would not be representative of
the large diferences in loudness perception found among
individual users [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. For this reason, modern hearing aids
provide the opportunity to fine-tune the soft gain by acting
on a compression threshold trimmer [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>Taking a naive approach, treating each parameter independently,
the preferences could subsequently be summed up in a general
hearing aid setting, by simply applying the most frequently preferred
values along each audiological parameter.
1.3</p>
    </sec>
    <sec id="sec-5">
      <title>User</title>
      <p>Hearing aids are often fitted based on a pure tone audiometry, a
test used to identify the hearing threshold of users. However, as
mentioned above, users perceive the sounds diferently and might
benefit from a fully personalized hearing aid configuration. For this
reason, it is essential to fully understand what drives user
preferences and which is the relative importance of users’ characteristics
and context. It is interesting to analyse whether users exhibit similar
preferences when optimizing the hearing aids in several real-world
environments and whether they result into similar configurations.
1.4</p>
    </sec>
    <sec id="sec-6">
      <title>Context</title>
      <p>
        Users often prefer to switch between highly contrasting settings
depending on the context [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. It has been shown that a context-aware
hearing aid needs to combine diferent contextual parameters, such
as location, motion, and soundscape information inferred by
auditory measures (e.g. sound pressure level, noise floor, modulation
envelope, modulation index, signal-to-noise ratio) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. However,
these contextual parameters might fail to capture the audiological
intent of the user, which depends not only on the characteristics of
the sound environment but also on the situation the user is in. For
this reason, in addition to retrieving the characteristics of the sound
environment and the preferred device settings, it is also important
to capture the contextual intents of users in the varying listening
scenarios. Contextual information, in this exploratory phase, can be
explicitly obtained by directly asking the user to define the situation
she is in. However, in the future, to enable an automatic adaptation
to the needs of users in real-world environments, relevant
contextual information will need to be inferred using a predictive model
that classifies the surrounding environment.
2
2.1
      </p>
    </sec>
    <sec id="sec-7">
      <title>METHOD</title>
    </sec>
    <sec id="sec-8">
      <title>Participants</title>
      <p>
        Seven participants (6 men and 1 woman), from a screened
population provided by Eriksholm Research Centre, participated in the
study. Their average age was 58.3 years (std. 12 years). Five of them
were working, while two were retired. They were sufering from a
binaural hearing loss ranging from mild to moderately severe, as
classified by the American Speech-Language-Hearing Association
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The average hearing threshold levels are shown in Figure 1.
They were all experienced hearing aid users, ranging from 5 to 20
years of experience with hearing aids. All test subjects received
information about the study and signed an informed consent before
the beginning of the experiment.
2.2
      </p>
    </sec>
    <sec id="sec-9">
      <title>Apparatus</title>
      <p>
        The participants were fitted according to their individual hearing
loss with a pair of Oticon Opn S 1 miniRITE [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. All had iPhones
with iOS 12 installed and additionally downloaded a custom
smartphone app connected to the hearing aids via Bluetooth. The app
enabled collecting data about the audiological preferences and the
corresponding context.
The experiment was divided into four weeks. As shown in Table 1,
the first three weeks were devoted to optimizing the three
audiological parameters, one at a time. Each of the first three weeks, the
participants were fitted with four levels of the respective parameter,
while the other two parameters were kept neutral at a default level.
For instance, in week 1, each participant could select between four
levels of noise reduction and directionality. The participants were
instructed to compare, using a smartphone app, the four levels of
the parameter in diferent situations during their daily life and to
report their preference. To ensure that the participants would
evaluate the diferent levels in relevant listening situations and when
motivated to optimize their device, they were instructed to perform
the task on a voluntary basis. Moreover, every time they reported
their preference, the participants were asked to specify:
• The environment they were in (e.g. ofice, restaurant, public
space outdoor). Diferent environments are characterised
by diferent soundscapes and pose disparate challenges for
hearing aid users.
• Their motion state (e.g. stationary, walking, driving).
Motion tells more about the activity conducted by the person,
but may also mark the transition to a diferent activity or
environment [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
• Their audiological intent (e.g. conversation, work meeting,
watching TV, listening to music, ignoring speech).
Complementing the contextual information by gathering the intent
of the participants in the specific situation might provide a
deeper insight into how the diferent audiological parameters
help them in coping with diferent sounds.
• The usefulness of the parameter in the specific situation (on a
scale ranging from 1 to 5). This evaluation is important not
only to understand the relative importance of each
preference, but also to assess the perceived benefit of the parameter
in diverse situations.
      </p>
      <p>The fourth week each participant compared two diferent device
configurations in a blind test:
• An individually personalized configuration combining the
most frequently selected preferences of the three
audiological parameters gathered in real-world listening environments
during the previous three weeks.
• A configuration personalized in a standard clinical
worklfow based on questions and on pairwise comparisons of
pre-recorded sound samples capturing diferent listening
scenarios including, for instance, speech with varying levels
of background noise.</p>
      <p>The participants were instructed to compare the two personalized
configurations in diferent listening situations throughout the day
and report their preference, while also labeling the context. At the
end of the week, the participants were asked to select the
configuration they preferred.
3</p>
    </sec>
    <sec id="sec-10">
      <title>RESULTS</title>
      <p>During the four weeks of test, the participants actively interacted
with their devices, changing the hearing aid settings, overall, 4328
times (i.e. the level of the parameter during the first three weeks
or the final configuration during the last week) and submitting 406
preferences. On average, the participants tried the diferent hearing
aid settings 11 times before submitting a preference. Although one
parameter afects the perception of the others, isolating them
allows to analyse their perceived impact on the listening experience.
As illustrated in Figure 2, the brightness parameter was on
average rated higher in perceived usefulness. This result is consistent
among the seven participants. Conversely, the noise reduction and
directionality parameter resulted to have the lowest perceived
usefulness for five participants out of seven. The soft gain parameter
resulted to have an average perceived usefulness between those of
the other two parameters.</p>
      <p>Recording, together with each preference, the perceived
usefulness of the parameter in the specific situation also allows to
understand how much each parameter contributes to the overall
setting of the hearing aid. Figures 3, 4, 5 display the preferences of
test participants for diferent levels of noise reduction and
directionality, brightness, and soft gain, respectively. Only the preferences
5
s 4
s
e
l 3
n
u
f
e
s
U2
1
100%
80%
recorded in situations where the usefulness of the parameter is
rated higher than two out of five are considered.</p>
      <p>Firstly, the results indicate that the participants have widely
different audiological preferences, rather than converging towards
a shared optimal value. As the participants are ordered by age (A
being the youngest), there seem, nevertheless, to be some
common tendencies among younger or older participants across all
parameters.</p>
      <p>Secondly, most participants are not searching for a single
optimum but select diferent values within each parameter. When
adjusting the perceived brightness (Figure 4), six participants out
of seven prefer, most of the time, the two highest levels along this
parameter. Thirdly, the participants frequently prefer highly
contrasting values within each parameter, depending on the context.</p>
      <sec id="sec-10-1">
        <title>Noise Reduction and Directionality</title>
        <p>Level 4
Level 3
Level 2
Level 1</p>
        <p>A B C D E F G
(n=5) (n=9) (n=0) (n=4) (n=13) (n=1) (n=20)
Participant</p>
      </sec>
      <sec id="sec-10-2">
        <title>Brightness</title>
      </sec>
      <sec id="sec-10-3">
        <title>Soft Gain</title>
        <p>A B C D E F G
(n=5) (n=10) (n=7) (n=8) (n=7) (n=12) (n=42)</p>
        <p>Participant</p>
        <p>In order to combine the sequentially learned preferences, we
summed up the most frequently chosen values along each
parameter into a single hearing aid configuration. For each participant,
we subsequently compared it against individually personalized
settings configured in a standard clinical workflow based on questions
and pre-recorded sound samples. After the fourth week, six out of
seven participants responded they appreciated having more than
one general hearing aid setting, as they used both configurations in
diferent situations. They also wished to keep both personalized
conifgurations after the end of the test. However, in a blind comparison
of the two configurations, six out of seven participants preferred
the hearing aid settings personalized by sequentially optimizing
parameters in real-world listening scenarios.
Level 4</p>
      </sec>
    </sec>
    <sec id="sec-11">
      <title>DISCUSSION</title>
      <p>
        Due to the aging population, the number of people afected by
hearing loss will double by 2050 [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and this will have large implications
for hearing healthcare. Rethinking hearing aids as recommender
systems might enable the implementation of devices that
automatically learn the preferred settings by actively involving hearing
impaired users in the loop. Not only would this enhance the
experience of current hearing aid users, but it could also help overcome
the growing lack of clinical resources. Personalizing hearing aids by
integrating audiological domain-specific recommendations might
even make it feasible to provide scalable solutions for the 80% of
hearing impaired users who currently have no access to hearing
healthcare worldwide [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. The accuracy of the recommendation
primarily depends on the ability of the system to gather user
preferences, while the user explores a highly complex design space. In
this study, we proposed an approach to efectively optimize the
device settings by decoupling three audiological parameters and
allowing the participants to manipulate one parameter at a time,
comparing four discrete levels. The fact that the participants
preferred the hearing aid configuration personalized in real-world
environments suggests that the proposed optimization approach
manages to capture the main individual parameter preferences.
      </p>
      <p>
        Looking into the individual preferences learned when
sequentially adjusting the three parameters, several aspects stand out. The
results suggest that the brightness parameter has the highest
perceived usefulness. This could be due to the fact that enhancing the
gain of high frequencies may increase the contrasts between
consonants and as a result improve speech intelligibility. Likewise, it may
amplify spatial cues reflected from the walls and ceiling, improving
the localization of sounds and thereby facilitating the separation of
voices. The participants seemed to appreciate a brighter sound when
listening to speech or when paying attention to specific sources
in a quiet environment. Despite the advances in technology that
reduce the risk of audio feedback and allow the new instruments
to be fitted to target and deliver the optimal gain [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], in some
situations most of the participants seemed to benefit from even more
brightness. Conversely, users might prefer a more round sound in
noisy situations or when they want to detach themselves.
      </p>
      <p>Adjusting the noise reduction and directionality parameter is
perceived as having the lowest usefulness. Essentially, this parameter
defines how ambient sounds coming from the sides and from behind
are attenuated, while still amplifying signals with speech
characteristics. Although the benefits of directionality and noise reduction
are proven, our results indicate that users find it more dificult to
diferentiate the levels of this parameter if the ambient noise level is
not suficiently challenging. The four levels of the parameter mainly
afect the threshold for when the device should begin to attenuate
ambient sounds. However, these elements of signal processing are
partly triggered automatically based on how noisy the environment
is. Therefore, in some situations, changing the attenuation
thresholds (i.e. the parameter levels) might not make a diference. Thus,
users may feel less empowered to adjust this parameter. On the
other hand, the data also shows that participants actively select the
lowest level of the parameter (level 1), which provides an immersive
omnidirectional experience without attenuation of ambient sounds
in simple listening scenarios. This suggests that, in some contexts,
users express a need for personalizing the directionality settings
and the activation thresholds of noise reduction. Furthermore,
previous studies have shown that the perception of soft sounds varies
largely among individuals. Our results not only confirm that users
have widely diferent audiological preferences, but also suggest
they would benefit from a personalized dynamic adaptation of soft
gain dependent on the context.</p>
      <p>Focusing on the optimization problem in the audiological
design space, some indications can be inferred. The large diferences
among the participants suggest that, in a first phase, users’
interaction is essential to gather individual preferences and thereby
reach the optimum configuration for each single user.
Simplifying the optimization task and ofering a clear explanation of the
one-dimensional slider made the process more transparent and
increased users’ empowerment. Once a recommender system is in
place, this component might also prove useful in enhancing users’
trust in the recommendations provided. Moreover, performing the
optimization task in real-world environments ensured an accurate
assessment and communication of users’ preferences. In the short
term, user preferences collected with this approach could flow into
the standard clinical workflow and help hearing care professionals
to fine-tune the hearing aids. However, a single static
configuration, although personalized, might not fully satisfy the user. Our
results indicate that such recommender systems should not simply
model users as a sole set of optimized audiological parameters,
because the preferred configuration varies depending on the context.
It is therefore essential for these models to likewise classify the
sound environment and motion state in order to infer the intents
of the user. Being fully aware of the intent, by automatically
labeling it, would add further value to the collected preferences and
would allow to ask for user feedback in specific situations. That
would make it feasible to verify hypotheses based on previous data,
and progressively optimize several device configurations for
diferent real-world listening scenarios. Once some configurations are
learned, the hearing aids could automatically recommend them in
specific situations and, by monitoring users’ behavior, continuously
calibrate to the preference of the user.
5</p>
    </sec>
    <sec id="sec-12">
      <title>CONCLUSION</title>
      <p>Internet-connected hearing aids open the opportunity for truly
personalized hearing aids, which adapt to the needs of users in
real-world listening scenarios. This study addressed the main challenges
posed when rethinking hearing aids as recommender systems. It
investigated how to effectively optimize the device settings by
gathering user preferences in real-world environments. A complex
audiological space was simplified by decoupling three audiological
parameters and allowing the participants to manipulate one
parameter at a time, comparing four discrete levels. The participants
sequentially optimized the three audiological parameters, which were
subsequently combined into a personalized device configuration.
This configuration was blindly compared against a configuration
personalized in a standard clinical workflow based on questions
and pre-recorded sound samples, and six out of seven participants
preferred the device settings learned in real-world listening
environments. Thus, the approach seemed to effectively capture the
main individual audiological preferences. The parameters turned out
to have different perceived usefulness, contributing differently to
the listening experience of hearing aid users. The seven participants
exhibited widely different audiological preferences. Furthermore,
our results indicate that hearing aid users do not simply explore the
audiological design space in search of a global optimum. Instead,
most of them select multiple highly contrasting values along each
parameter, depending on the context.</p>
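The decoupled, one-parameter-at-a-time procedure summarized above can be sketched as follows. The parameter names, the four discrete levels, and the preference oracle are stand-ins for the real-world user comparisons, not the exact settings used in the study.

```python
def optimize_sequentially(parameters, prefers):
    """Sketch of decoupled sequential optimization: for each audiological
    parameter, keep the level (of four) that the user prefers in pairwise
    comparisons, then combine the winners into one personalized
    configuration.

    `parameters` maps each parameter name to its discrete levels;
    `prefers(param, a, b)` returns True if the user prefers level a
    over level b (a placeholder for real-world user feedback)."""
    config = {}
    for param, levels in parameters.items():
        best = levels[0]
        for candidate in levels[1:]:
            if prefers(param, candidate, best):
                best = candidate  # the preferred level survives
        config[param] = best
    return config
```

Optimizing one parameter at a time reduces the number of comparisons from one per point of the full three-dimensional grid to one per level of each parameter, at the cost of ignoring interactions between parameters.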
    </sec>
    <sec id="sec-13">
      <title>ACKNOWLEDGMENTS</title>
      <p>We would like to thank Oticon A/S, Eriksholm Research Centre, and
Research Clinician Rikke Rossing for providing hardware, access
to test subjects, clinical approval and clinical resources.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Gediminas</given-names>
            <surname>Adomavicius</surname>
          </string-name>
          and
          <string-name>
            <given-names>Alexander</given-names>
            <surname>Tuzhilin</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <article-title>Context-Aware Recommender Systems</article-title>
          . In Recommender Systems Handbook,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ricci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rokach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Shapira</surname>
          </string-name>
          , and P. Kantor (Eds.). Springer, Boston, MA, USA,
          <fpage>217</fpage>
          -
          <lpage>253</lpage>
          . https://link.springer.com/chapter/10.1007/978-0-387-85820-3_7
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Eric</given-names>
            <surname>Brochu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Vlad M.</given-names>
            <surname>Cora</surname>
          </string-name>
          , and Nando de Freitas.
          <year>2010</year>
          .
          <article-title>A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning</article-title>
          .
          <source>CoRR abs/1012.2599</source>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Eric</given-names>
            <surname>Brochu</surname>
          </string-name>
          , Nando de Freitas, and
          <string-name>
            <given-names>Abhijeet</given-names>
            <surname>Ghosh</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Active Preference Learning with Discrete Choice Data</article-title>
          .
          <source>In Proceedings of the 20th International Conference on Neural Information Processing Systems (NIPS '07)</source>
          .
          <fpage>409</fpage>
          -
          <lpage>416</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Mead C.</given-names>
            <surname>Killion</surname>
          </string-name>
          .
          <year>2002</year>
          .
          <article-title>New Thinking on Hearing in Noise: A Generalized Articulation Index</article-title>
          .
          <source>Seminars in Hearing 23</source>
          (January
          <year>2002</year>
          ),
          <fpage>057</fpage>
          -
          <lpage>076</lpage>
          . https://doi.org/10.1055/s-2002-24976
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Susanna L.</given-names>
            <surname>Callaway</surname>
          </string-name>
          and
          <string-name>
            <given-names>Andreea</given-names>
            <surname>Micula</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Client Target and Real-ear Measurements</article-title>
          .
          <source>Technical Report</source>
          . Oticon A/S, Smørum, Denmark.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>John</given-names>
            <surname>Clark</surname>
          </string-name>
          .
          <year>1981</year>
          .
          <article-title>Uses and Abuses of Hearing Loss Classification</article-title>
          .
          <source>ASHA: A Journal of the American Speech-Language-Hearing Association 23</source>
          (
          <year>August 1981</year>
          ),
          <fpage>493</fpage>
          -
          <lpage>500</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] European Commission
          .
          <year>2019</year>
          .
          <article-title>Glossary: Hearing threshold</article-title>
          .
          <source>Retrieved August 16</source>
          ,
          <year>2019</year>
          from https://ec.europa.eu/health/scientific_committees/opinions_layman/en/hearing-loss-personal-music-player-mp3/glossary/ghi/hearingthreshold.htm
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Josefine J.</given-names>
            <surname>Jensen</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Oticon Opn S Clinical Evidence</article-title>
          .
          <source>Technical Report</source>
          . Oticon A/S, Smørum, Denmark.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Benjamin</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Maciej J.</given-names>
            <surname>Korzepa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael K.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Niels H.</given-names>
            <surname>Pontoppidan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jakob E.</given-names>
            <surname>Larsen</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Inferring User Intents from Motion in Hearing Healthcare</article-title>
          .
          <source>In Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers (UbiComp '18)</source>
          . https://doi.org/10.1145/3267305.3267683
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Benjamin</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael K.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Niels H.</given-names>
            <surname>Pontoppidan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jakob E.</given-names>
            <surname>Larsen</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Modelling User Utterances as Intents in an Audiological Design Space</article-title>
          . In Workshop on Computational Modeling in Human-Computer Interaction (CHI
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Benjamin</given-names>
            <surname>Johansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Michael K.</given-names>
            <surname>Petersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Maciej J.</given-names>
            <surname>Korzepa</surname>
          </string-name>
          , Jan Larsen,
          <string-name>
            <given-names>Niels H.</given-names>
            <surname>Pontoppidan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jakob E.</given-names>
            <surname>Larsen</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Personalizing the Fitting of Hearing Aids by Learning Contextual Preferences From Internet of Things Data</article-title>
          .
          <source>Computers 7</source>
          ,
          <issue>1</issue>
          (
          <year>2018</year>
          ). https://doi.org/10.3390/computers7010001
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Maciej J.</given-names>
            <surname>Korzepa</surname>
          </string-name>
          , Benjamin Johansen,
          <string-name>
            <given-names>Michael K.</given-names>
            <surname>Petersen</surname>
          </string-name>
          , Jan Larsen,
          <string-name>
            <given-names>Niels H.</given-names>
            <surname>Pontoppidan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jakob E.</given-names>
            <surname>Larsen</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Learning Preferences and Soundscapes for Augmented Hearing</article-title>
          .
          <source>In IUI Workshops.</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Yuki</given-names>
            <surname>Koyama</surname>
          </string-name>
          , Issei Sato,
          <string-name>
            <given-names>Daisuke</given-names>
            <surname>Sakamoto</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Takeo</given-names>
            <surname>Igarashi</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Sequential Line Search for Efficient Visual Design Optimization by Crowds</article-title>
          .
          <source>ACM Trans. Graph</source>
          .
          <volume>36</volume>
          ,
          <issue>4</issue>
          ,
          Article 48
          (
          <year>July 2017</year>
          ),
          <volume>11</volume>
          pages. https://doi.org/10.1145/3072959.3073598
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Nicolas</given-names>
            <surname>Le Goff</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Amplifying Soft Sounds - a Personal Matter</article-title>
          .
          <source>Technical Report</source>
          . Oticon A/S, Smørum, Denmark.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Nicolas</given-names>
            <surname>Le Goff</surname>
          </string-name>
          , Jesper Jensen, Michael S. Pedersen, and
          <string-name>
            <given-names>Susanna L.</given-names>
            <surname>Callaway</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>An Introduction to OpenSound Navigator™</article-title>
          .
          <source>Technical Report</source>
          . Oticon A/S, Smørum, Denmark.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Nicholas A.</given-names>
            <surname>Lesica</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Why Do Hearing Aids Fail to Restore Normal Auditory Perception?</article-title>
          .
          <source>Trends in Neurosciences 41, 4</source>
          (April
          <year>2018</year>
          ),
          <fpage>174</fpage>
          -
          <lpage>185</lpage>
          . https://doi.org/10.1016/j.tins.2018.01.008
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Jeremy</given-names>
            <surname>Marozeau</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mary</given-names>
            <surname>Florentine</surname>
          </string-name>
          .
          <year>2007</year>
          .
          <article-title>Loudness Growth in Individual Listeners with Hearing Losses: A Review</article-title>
          .
          <source>The Journal of the Acoustical Society of America 122</source>
          ,
          <issue>3</issue>
          (
          <year>2007</year>
          ),
          <fpage>EL81</fpage>
          -
          <lpage>EL87</lpage>
          . https://doi.org/10.1121/1.2761924
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Elaine H. N.</given-names>
            <surname>Ng</surname>
          </string-name>
          , Mary Rudner, Thomas Lunner, Michael Syskind Pedersen, and
          <string-name>
            <given-names>Jerker</given-names>
            <surname>Rönnberg</surname>
          </string-name>
          .
          <year>2013</year>
          .
          <article-title>Effects of Noise and Working Memory Capacity on Memory Processing of Speech for Hearing-aid Users</article-title>
          .
          <source>International Journal of Audiology 52</source>
          ,
          <issue>7</issue>
          (
          <year>2013</year>
          ),
          <fpage>433</fpage>
          -
          <lpage>441</lpage>
          . https://doi.org/10.3109/14992027.2013.776181
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19] World Health Organization.
          <year>2013</year>
          .
          <article-title>Multi-Country Assessment of National Capacity to Provide Hearing Care</article-title>
          . Geneva, Switzerland.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20] World Health Organization.
          <year>2019</year>
          .
          <article-title>Deafness and Hearing Loss</article-title>
          .
          <source>Retrieved June 30</source>
          ,
          <year>2019</year>
          from https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Antti</given-names>
            <surname>Oulasvirta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Per O.</given-names>
            <surname>Kristensson</surname>
          </string-name>
          , Xiaojun Bi, and
          <string-name>
            <given-names>Andrew</given-names>
            <surname>Howes</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <source>Computational Interaction</source>
          . Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Jonathan E.</given-names>
            <surname>Peelle</surname>
          </string-name>
          and
          <string-name>
            <given-names>Arthur</given-names>
            <surname>Wingfield</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>The Neural Consequences of Age-Related Hearing Loss</article-title>
          .
          <source>Trends in Neurosciences 39</source>
          ,
          <issue>7</issue>
          (
          <year>2016</year>
          ),
          <fpage>486</fpage>
          -
          <lpage>497</lpage>
          . https://doi.org/10.1016/j.tins.2016.05.001
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Dorothea</given-names>
            <surname>Wendt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Renskje K.</given-names>
            <surname>Hietkamp</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Lunner</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Impact of Noise and Noise Reduction on Processing Effort</article-title>
          .
          <source>Ear and Hearing</source>
          <volume>38</volume>
          ,
          <issue>6</issue>
          (
          <year>2017</year>
          ),
          <fpage>690</fpage>
          -
          <lpage>700</lpage>
          . https://doi.org/10.1097/aud.0000000000000454
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>