<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Learning preferences and soundscapes for augmented hearing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maciej Jan Korzepa</string-name>
          <email>mjko@dtu.dk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Benjamin Johansen</string-name>
          <email>benjoh@dtu.dk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Kai Petersen</string-name>
          <email>mkpe@eriksholm.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jan Larsen</string-name>
          <email>janla@dtu.dk</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jakob Eg Larsen</string-name>
          <email>jaeg@dtu.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Niels Henrik Pontoppidan</string-name>
          <email>npon@eriksholm.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Eriksholm Research Center</institution>
          ,
          <addr-line>Snekkersten</addr-line>
          ,
          <country country="DK">Denmark</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Technical University of Denmark</institution>
          ,
          <addr-line>Lyngby</addr-line>
          ,
          <country country="DK">Denmark</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Despite the technological advancement of modern hearing aids (HA), many users abandon their devices due to a lack of personalization. This is caused by limited hearing health care resources, resulting in users getting only a default 'one size fits all' setting. However, the emergence of smartphone-connected HA enables the devices to learn behavioral patterns inferred from user interactions and the corresponding soundscapes. Such data could enable adaptation of settings to individual user needs dependent on the acoustic environment. In our pilot study, we look into how two test subjects adjust their HA settings, and identify main behavioral patterns that help to explain their needs and preferences in different auditory conditions. Subsequently, we sketch out possibilities and challenges of learning the contextual preferences of HA users. Finally, we consider how to encompass these aspects in the design of intelligent interfaces that enable smartphone-connected HA to continuously adapt their settings to context-dependent user needs.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Author Keywords</title>
      <p>personalization; augmented hearing; intelligent interfaces</p>
    </sec>
    <sec id="sec-2">
      <title>INTRODUCTION</title>
      <p>
        Even though hearing loss is one of the leading lifestyle causes
of dementia [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], up to one quarter of users fitted with hearing
aids (HA) have been reported not to use them [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>©2018. Copyright for the individual papers remains with the authors.
Copying permitted for private and academic purposes.
HUMANIZE ’18, March 11, 2018, Tokyo, Japan.</p>
      <p>
        One of the reasons behind the prevalence of non-use of fitted HA is
identified by McCormack et al. [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] as users feeling that they
do not get sufficient benefit from HA. However, in light of the
technological advancement of HA as well as the abundance of
research indicating clear benefits of HA usage, we rather seek
the source of the problem in the lack of personalization in the
current clinical approach. The increasing number of
hearing-impaired people [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and lack of hearing health care resources
often results in users getting a ’one size fits all’ setting and
thus not exploiting the full potential of modern HA.
Furthermore, the current clinical approach to measure hearing
loss is based on pure tone audiogram (PTA). PTA captures the
audible hearing thresholds in frequency bands usually from
250 Hz to 10 kHz. However, PTA does not fully explain a
hearing loss. Killion et al. showed that the ability to understand
speech in noise may vary by up to 15 dB in
signal-to-noise ratio (SNR) between users with a similar hearing loss [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
Likewise, users differ in terms of how they perceive loudness.
Le Goff showed that speech at 50 dB can be perceived either
as moderately soft or slightly loud [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. This means that some
users may perceive soft sounds as noise which they would
rather attenuate than amplify. These aspects are rarely taken
into account in current clinical workflows.
      </p>
      <p>
        Earlier research by Dillon et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] indicated potential benefits
of customization both within and outside the clinic including
fewer visits to clinics, a greater choice of acoustic features
for fitting and end users’ feeling of ownership. Previous
studies that focused on customizing the settings of devices based
on perceptual user feedback [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] or using interactive
tabletops in the fitting session [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] indicate that users prefer such
customization. Aldaz et al. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] used reinforcement learning
to personalize HA settings based on auditory and geospatial
context by prompting users to perform momentary A/B
listening tests. However, only with the recent introduction of
smartphone-connected HA like the Oticon Opn [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] has it
become possible to go beyond ecological momentary assessment
by continuously tracking the users’ interactions with the HA
and thereby learn individual coping strategies from data [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
Such inferred behavioral patterns may provide a foundation for
correlating user preferences with the corresponding auditory
environment and potentially enable continuous adaptation of
HA settings to the context.
      </p>
      <p>
        When interpreting user preferences, one needs to consider how
the brain interprets speech. Auditory streams are bottom-up
processes fused into auditory objects, based on spatial cues
related to binaural intensity and time difference [
        <xref ref-type="bibr" rid="ref10 ref14 ref16 ref4">4, 10, 14, 16</xref>
        ].
However, separating competing voices is a top-down process,
applying selective attention to amplify one talker and
attenuate others. HA may mimic this top-down process by either
1) increasing brightness to enhance the spatial cues that
facilitate focusing on specific sounds, or 2) improving the
signal-to-noise ratio by attenuating ambient sounds to enable better
separation of voices. Incorporating these aspects into our
experimental design, we hypothesize that we could learn top-down
preferences for brightness or noise reduction based on HA
program and volume adjustments combined with bottom-up
sampling of how HA perceive the auditory environment in
terms of sound pressure level, modulation, and signal-to-noise
ratio. This allows us to assess in which listening scenarios the
user relies on enhanced spatial cues provided by
omnidirectionality with more high frequency gain to separate sounds and
in which environments the user instead reduces background
noise to selectively allocate attention to specific sounds.
In our pilot study, we give two subjects HA programmed with
four contrasting programs in terms of brightness and noise
reduction, and register how they interact with programs and
volume over a period of 6-7 weeks. The purpose of this work
is to:
        <list list-type="bullet">
          <list-item><p>show how the subjects interact with HA settings in real environments without any intervention,</p></list-item>
          <list-item><p>discover basic contextual preferences for the subjects,</p></list-item>
          <list-item><p>identify possibilities and challenges of learning contextual preferences of HA users,</p></list-item>
          <list-item><p>suggest applications of intelligent user interfaces that would continuously support users in optimizing their HA, not only by learning and adjusting to individual preferences but also by exploiting crowd-sourced patterns.</p></list-item>
        </list>
      </p>
    </sec>
    <sec id="sec-3">
      <title>METHOD</title>
    </sec>
    <sec id="sec-4">
      <title>Participants</title>
      <p>
        Two male participants (from a screened population provided
by Eriksholm Research Centre) volunteered for the study
(Table 1). The participants suffer from a symmetrical hearing
loss, ranging from moderate to moderately severe as described
by the WHO [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. All test subjects signed an informed consent form
before the beginning of the experiment.
      </p>
      <table-wrap id="tbl1">
        <label>Table 1</label>
        <caption><p>Test subjects and their age groups.</p></caption>
        <table>
          <thead><tr><th>Subject</th><th>Age group</th></tr></thead>
          <tbody>
            <tr><td>1</td><td>65</td></tr>
            <tr><td>2</td><td>76</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-5">
      <title>Apparatus</title>
      <p>The subjects were fitted with a pair of research prototype
EVOTION HA extending the Oticon Opn. The subjects used Android
6.0 or iOS 10, connected via Bluetooth. Data was logged
using the nRF Connect app and shared via Google Drive.</p>
      <table-wrap id="tbl2">
        <label>Table 2</label>
        <caption><p>HA programs fitted for each subject.</p></caption>
        <table>
          <thead><tr><th>Subject</th><th>Program</th><th>Setting</th></tr></thead>
          <tbody>
            <tr><td rowspan="4">1</td><td>P1</td><td>omnidirectional</td></tr>
            <tr><td>P2</td><td>omnidirectional</td></tr>
            <tr><td>P3</td><td>low noise reduction</td></tr>
            <tr><td>P4</td><td>high noise reduction</td></tr>
            <tr><td rowspan="4">2</td><td>P1</td><td>omnidirectional</td></tr>
            <tr><td>P2</td><td>low noise reduction</td></tr>
            <tr><td>P3</td><td>medium noise reduction</td></tr>
            <tr><td>P4</td><td>high noise reduction</td></tr>
          </tbody>
        </table>
      </table-wrap>
    </sec>
    <sec id="sec-6">
      <title>Procedure</title>
      <p>Based on the individual hearing loss, the subjects were fitted
with 4 programs as shown in Table 2. For all programs, HA
volume could be adjusted to one of the levels from −8 to +4,
where 0 is the default volume. The subjects were instructed
to explore different settings using HA buttons over a period
of 6-7 weeks. In the experimental setup, the HA always start
up in the default program and volume. The default program
for subject 1 was P2 in the first five weeks which was then
switched to P1 for the last two weeks at the subject’s request.
Subject 2 used P2 as the default program.</p>
    </sec>
    <sec id="sec-7">
      <title>Soundscape data</title>
      <p>To create an interpretable representation of the auditory
features defining the context, we applied k-means clustering to the
acoustic context data collected from HA. The values comprise
auditory features defining how the HA perceive the acoustic
environment:
        <list list-type="bullet">
          <list-item><p>sound pressure level: a measure of estimated loudness,</p></list-item>
          <list-item><p>noise floor: tracking the lower bound of the signal,</p></list-item>
          <list-item><p>modulation envelope: tracking the peaks in the signal,</p></list-item>
          <list-item><p>modulation index: estimated as the difference between modulation envelope and noise floor,</p></list-item>
          <list-item><p>signal-to-noise ratio: estimated as the difference between sound pressure level and noise floor.</p></list-item>
        </list>
      </p>
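      <p>The clustering step can be sketched as follows; a minimal plain-Python k-means, assuming each one-minute snapshot has already been reduced to a numeric feature vector (the real HA features span multiple frequency bands, and production code would use an optimized library implementation):</p>

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster acoustic snapshots (e.g. [SPL, noise floor, modulation
    envelope, modulation index, SNR]) into k soundscape clusters.
    Returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each snapshot joins its nearest centroid.
        labels = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centroids[c] = [sum(d) / len(members) for d in zip(*members)]
    return centroids, labels
```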
      <p>The above parameters are captured as a snapshot across
multiple frequency bands once per minute. Additionally, the HA
perform a rough classification of the auditory environment and
represent it as a categorical variable with one of the
following values: ’quiet’, ’noise’, ’speech in quiet’, and ’speech in
noise’. These labels are used as ground truth for evaluating the
performance of the clustering by means of normalized mutual
information (NMI) score. The optimal number of clusters K
was estimated to be 4, with NMI = 0.35.</p>
      <p>The resulting four soundscape clusters were labeled
according to the proportion of samples with different ground-truth
labels within each cluster (Figure 1), while ambiguities were
resolved by examination of the cluster centroids. The first
cluster mainly captured the ’quiet’ class which is also validated by
the cluster centroid having very low values of sound pressure
level and noise floor. Thus, the environments assigned to this
cluster will be represented as ’quiet’. The second cluster
captured both ’speech in noise’ and ’noise’ classes which suggests
that the numerical representations of these environments are
similar. For simplicity, we label them as ’speech in noise’. The
third and fourth cluster both captured mainly ’speech in quiet’
with a small addition of other classes. As the third cluster
captured samples with much higher sound pressure level and
signal to noise ratio, it will be labeled as ’clear speech’, while
the fourth cluster with attributes of the samples closer to mean
will be represented as ’normal speech’.</p>
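      <p>The labeling of clusters by the proportions of ground-truth classes amounts to a majority vote; a sketch with illustrative argument names (`cluster_ids` holds the per-snapshot cluster assignments, `ground_truth` the HA environment labels):</p>

```python
from collections import Counter

def label_clusters(cluster_ids, ground_truth):
    """Name each cluster after the most frequent HA environment label
    among its samples; ambiguous clusters would still need manual
    inspection of the centroids, as done in the study."""
    votes = {}
    for cid, label in zip(cluster_ids, ground_truth):
        votes.setdefault(cid, Counter())[label] += 1
    return {cid: c.most_common(1)[0][0] for cid, c in votes.items()}
```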
    </sec>
    <sec id="sec-8">
      <title>RESULTS</title>
      <p>We refer to the user’s selected volume and program choice
as user preferences, and to the corresponding auditory
environment as the context. Juxtaposing user preferences and the
context allows us to learn which HA settings are selected in
specific listening scenarios. To facilitate interpretation we
assign each cluster a color from a white-to-green gradient, in
which increasing darkness corresponds to increased noise in
the context (quiet → clean speech → normal speech → speech
in noise). Likewise, we assign each program a color from a
yellow to red gradient. Lighter colors define programs with
an omnidirectional focus and added brightness. Darker colors
indicate increasing attenuation of noise. This coloring scheme
will apply throughout the paper.</p>
    </sec>
    <sec id="sec-9">
      <title>Contextual user preferences</title>
      <p>Figure 2 shows the user preference and context changes for
both subjects, plotted across the hours of the day over the
weeks constituting the full experimental period. Subject 1
most frequently selects programs which provide an
omnidirectional focus with added brightness (the default program
was changed from P2 to P1 after week 43). However, the
default program is occasionally complemented with programs
suppressing noise. This suggests that the user benefits from
changing programs dependent on the context.</p>
      <p>Subject 2 mainly selects two programs; P1 offering an
omnidirectional focus with added soft gain and brightness, and
P2 (default) providing slight attenuation of ambient sounds.
Compared to subject 1, this user spends more time in ’quiet’
context. Comparing weekdays to weekends, the latter seem to
contain a larger contribution of ’normal speech’ and ’speech
in noise’ auditory environments.</p>
      <p>Figure 3 illustrates the subjects’ average usage of their HA and
which programs are used most throughout the day. Days
without any HA usage are excluded from the average. The
HA usage for subject 1 steadily increases in the morning and
early afternoon and peaks at around 4pm. P1 and P2 are the
most used programs throughout the day. Interestingly, in the
evening, P3 is used more frequently reaching similar usage
level as P1 and P2 between 11pm and midnight. P4 is used
very rarely yet consistently throughout the day. The HA usage
of test subject 2 is shifted towards the morning with peak
activity around 2pm. The default P2 is the most commonly
used program throughout the whole day. However, during the
afternoon, P1 seems to be chosen more often.</p>
      <p>Figure 4 shows in which contexts the subjects use their HA
at different times of the day. The HA usage for subject 1 is
dominated by speech-related contexts most of the day. Only
after 5pm, the context has more ’quiet’ and ’clear speech’ and
less ’speech in noise’ contribution. From 9pm, the ’quiet’
context rapidly overtakes context containing speech. Subject 2
appears to be exposed to different contextual patterns. In the
morning, ’normal speech’ and ’speech in noise’ contexts seem to be
dominated by ’quiet’ soundscapes. Subsequently, their
contributions increase and peak around 7pm. Afterwards, the ’quiet’
context gradually increases. Both subjects seem exposed to
more ’speech in noise’ around midday which is likely due to
lunchtime activities.</p>
    </sec>
    <sec id="sec-10">
      <title>Behavioral patterns</title>
      <p>We quantify the relationship between program/volume
interaction and context by assuming that the settings are preferred
in the corresponding context only at the time when they are
being selected. Under this assumption, we count how often
programs are selected in different contexts. Table 3 shows the
counts of program changes for both subjects. The total
number of changes was 52 and 46 for subject 1 and 2 respectively.
Considering the small number of changes, we outline only the
most apparent behavioral patterns.</p>
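      <p>The counting scheme can be sketched as follows, assuming a hypothetical chronological log of (program, context) samples; under our assumption, each change is credited only to the context observed at the moment the program is selected:</p>

```python
from collections import Counter

def count_program_changes(log):
    """Count program changes per context from a chronological log of
    (program, context) samples, crediting each change only to the
    context at the time the new program is selected."""
    counts = Counter()
    prev = None
    for program, context in log:
        if prev is not None and program != prev:
            counts[(program, context)] += 1
        prev = program
    return counts
```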
      <p>Subject 1 switches to P4 mainly in ’speech in noise’ context
(twice as often as in ’normal speech’). The fact that ’speech
in noise’ is a less common environment than ’normal speech’
strengthens this behavioral pattern. This suggests that subject
1 seems to cope by suppressing noise in challenging listening
scenarios. Examples of this behavioral pattern are illustrated
in Figure 5. Likewise, a clear behavioral pattern can be seen
for subject 2. P1 is the preferred program in ’speech in noise’
environments. Considering that P1 offers maximum
brightness and omnidirectionality with reduced attenuation and noise
reduction, this behavioral pattern suggests the user
compensates by enhancing high frequency gain as a coping strategy
in complex auditory environments (examples in Figure 6).
Table 4 shows the number of volume changes for subject
2 (subject 1 rarely changes volume). All increases beyond
the default volume level (0) were made in ’speech in noise’
context. On the other hand, changes to the default volume
were evenly distributed across all contexts. This suggests that
increasing the volume is another coping strategy for subject 2
in more challenging listening scenarios.</p>
      <table-wrap id="tbl4">
        <label>Table 4</label>
        <caption><p>Counts of volume changes for subject 2 across contexts.</p></caption>
        <table>
          <thead><tr><th>Context</th><th>0</th><th>+1</th><th>+2</th></tr></thead>
          <tbody>
            <tr><td>quiet</td><td>2</td><td>0</td><td>0</td></tr>
            <tr><td>clean speech</td><td>2</td><td>0</td><td>0</td></tr>
            <tr><td>normal speech</td><td>2</td><td>0</td><td>0</td></tr>
            <tr><td>speech in noise</td><td>2</td><td>12</td><td>1</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>Figure 7 shows a behavioral pattern that might be more
difficult to interpret based on the auditory context alone.
Occasionally, subject 1 selects P3 in a ’quiet’ environment late in
the evening. The test subject subsequently reported that these
situations occur when going out for a walk and wanting to
be immersed in subtle sounds such as rustling leaves or the
surf of the ocean. The preference for P3 thus implies both
increasing the intensity of soft sounds as well as the perceived
brightness.</p>
    </sec>
    <sec id="sec-11">
      <title>DISCUSSION</title>
    </sec>
    <sec id="sec-12">
      <title>Inferring user needs from interaction data</title>
      <p>Empowering users to switch between alternative settings on
internet-connected HAs, while simultaneously capturing their
auditory context, allows us to infer how users cope in real-life
listening scenarios. To the best of our knowledge, this has not
been reported before.</p>
      <p>Learning the mapping between preferences and context is a
non-trivial task, as the chosen settings might not be the optimal
ones in the context they appear in. For example, looking into
the soundscape data, it is clear that the environment
soundscape frequently changes without the user responding with
an adjustment of the settings. Conversely, the auditory
environment may remain stable whereas the user changes settings.
We need to take into consideration not only the auditory
environment but also the user’s cognitive state due to fatigue or
intents related to a specific task. Essentially, the user cannot
be expected to exhibit clear preferences or consistent coping
strategies at all times. We hypothesize that many reasons could
explain why the user does not select an alternative program
although the context changes:
        <list list-type="bullet">
          <list-item><p>being too busy to search for the optimal settings,</p></list-item>
          <list-item><p>the effort required to change programs manually being too high,</p></list-item>
          <list-item><p>accepting the current program as sufficient for the task at hand,</p></list-item>
          <list-item><p>cognitive fatigue caused by constantly adapting to different programs.</p></list-item>
        </list>
      </p>
      <p>Similarly, we observe situations in which the user changes settings
even though the auditory environment remains stable, which
could be caused by:
        <list list-type="bullet">
          <list-item><p>the user trying out the benefits of different settings,</p></list-item>
          <list-item><p>cognitive fatigue due to prolonged exposure to challenging soundscapes,</p></list-item>
          <list-item><p>the auditory environment not being classified correctly.</p></list-item>
        </list>
In our pilot study, the context classification was limited to the
auditory features which are used for HA signal processing.
However, smartphone connectivity offers almost unlimited
possibilities of acquisition of contextual data. Applying
machine learning methods such as deep learning might facilitate
higher level classification of auditory environments. Different
types of listening scenarios might be classified as ’speech in
noise’ when limited to parameters such as signal to noise ratio
or modulation index. In fact, these could encompass very
different listening scenarios such as an office or a party where
the user’s intents would presumably not be the same. Here
the acoustic scene classification could be supported by motion
data, geotagging or activities inferred from the user’s calendar
to provide a more accurate understanding of needs and intents.
Nevertheless, in some situations as illustrated in Figure 6, the
behavioral patterns seem very consistent; the user preferences
appear to change simultaneously with the context, remain
unchanged as long as the context remains stable, and change
back when the context changes again. Identifying such
behaviors could allow user preferences to be reliably detected from a
limited amount of user interaction data. Furthermore, time as
a parameter also highlights patterns as illustrated in Figure 6
related to activities around lunch time, or late in the evening
(Figure 7), as well as the contrasting behavior on weekends
versus specific weekdays.</p>
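      <p>Detecting such consistent behaviors could start from a simple heuristic; the sketch below (illustrative only, not the analysis used in this study) flags program changes that co-occur with a context change within a small window of samples:</p>

```python
def consistent_switches(log, window=2):
    """Flag program changes that co-occur with a context change within
    `window` samples — the pattern where preferences track the context.
    `log` is a chronological list of (program, context) samples."""
    context_changes = {i for i in range(1, len(log))
                       if log[i][1] != log[i - 1][1]}
    hits = []
    for i in range(1, len(log)):
        if log[i][0] != log[i - 1][0]:  # program change at sample i
            if any(abs(i - j) <= window for j in context_changes):
                hits.append(i)
    return hits
```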
      <p>Even though our study was limited to only two users, we
identified evident differences in the HA usage patterns. Subject 1
tends to use the HA mostly in environments involving speech,
whereas subject 2 spends a substantial amount of time in quiet
non-speech environments. This might translate into
different expectations among HA users. Furthermore, our analysis
suggests that users apply unique coping strategies in different
listening scenarios, particularly for complex ’speech in noise’
environments. Subject 1 relies on suppression of background
noise to increase the signal to noise ratio in challenging
scenarios. Subject 2 responds to speech in noise in a completely
different way - he chooses maximum omnidirectionality with
added brightness and increased volume to enhance spatial cues
to separate sounds. These preferences are not limited to
challenging environments but extend to the ambience and overall
quality of sound, as subject 1 reported that he enhances
brightness and amplification of quiet sounds to feel immersed in the
subtle sounds of nature. We find this of particular importance
as it indicates that users expect their HA not only to improve
speech intelligibility, but in a broader sense to provide aspects
of augmented hearing which might even go beyond what is
experienced by normal hearing people.</p>
    </sec>
    <sec id="sec-13">
      <title>Translating user needs into augmented hearing interfaces</title>
      <p>We propose that learning and addressing user needs could be
conceptualized as an adaptive augmented hearing interface
that incorporates a simplified model reflecting the bottom-up
and top-down processes in the auditory system. We believe
that such an intelligent auditory interface should:
        <list list-type="bullet">
          <list-item><p>continuously learn and adapt to user preferences,</p></list-item>
          <list-item><p>relieve users of manually adjusting the settings by taking over control whenever possible,</p></list-item>
          <list-item><p>recommend coping strategies inferred from the preferences of other users,</p></list-item>
          <list-item><p>actively assist users in finding the optimal settings based on crowdsourced data,</p></list-item>
          <list-item><p>engage the user as an active part in their hearing care.</p></list-item>
        </list>
Such an interface would infer top-down preferences based on
the bottom-up defined context and continuously adapt the HA
settings accordingly. This would offer immense value to users
by providing the optimal settings at the right time, dependent
on the dynamically changing context. However, the system
should not be limited to passively inferring intents, but rather
incorporate a feedback loop providing user input. We see a
tremendous potential in conversational audio interfaces as HAs
resemble miniature wearable smartspeakers which would
allow the user to directly interact with the device, e.g. by means
of a chatbot or voice AI. First of all, such an interface might
resolve ambiguities in order to interpret behavioral patterns.
In a situation when the user manually changes the settings in a
way that is not recognized by the learned model, the system
could ask for a reason in order to update its beliefs. Ideally,
questions would be formulated in a way allowing the system
to directly learn and update the underlying parameters. This
could be accomplished by validating specific hypotheses that
refer to the momentary context as well as the characteristics
captured in the HA user model, incorporating needs, behavior
and intents, e.g. ’Did you choose this program because the
environment got noisy / you are tired / you are on a train?’
Secondly, a voice interface could recommend new settings
based on collaborative filtering methods. Users typically stick
to their preferences and may be reluctant to explore available
alternatives although they might provide additional value.
Similarly, in the case of HA users, preferred settings might not
necessarily be the optimal ones. Applying clustering analysis
based on behavioral patterns, we could encourage users to
explore the available settings space by proposing preferences
inferred on the basis of ’users like me, in soundscapes like
this’. For instance, the interface could say: ’Many users who
share your preferences seem to benefit from these settings in a
similar context - would you like to try them out?’ This would
encourage users to continuously exploit the potential of their
HA to the fullest. Additionally, behavioral patterns shared
among users, related to demographics (e.g. age, gender) and
audiology (e.g. audiogram) data, could alleviate the cold start
problem in this recommender system, thus enabling
personalisation to kick in earlier, even when little or no HA usage
data is available.</p>
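      <p>A minimal form of the ’users like me, in soundscapes like this’ idea can be sketched with cosine similarity over per-context program-usage counts (the profile format and function names are illustrative; a real system would use a proper collaborative filtering model and richer features):</p>

```python
import math

def cosine(u, v):
    """Cosine similarity between two usage-count vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def recommend(target, others, context):
    """Pick the program the most similar user prefers in the given
    context. Profiles map context -> usage counts for P1..P4."""
    best = max(others, key=lambda u: cosine(target[context], u[context]))
    usage = best[context]
    return usage.index(max(usage))  # index of their most-used program
```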
      <p>Lastly, users should be able to communicate their intents,
as the preferences inferred by the system might differ from
the actual ones. In such scenarios, users could express their
intents along certain rules easily interpreted by the system
(e.g. ’I need more brightness.’) or indicate the problem in the
given situation (e.g. ’The wind noise bothers me.’). Naturally,
translating the user’s descriptive feedback into new settings
is more challenging, but could potentially offer huge value
by relieving users of the need to understand how multiple
underlying audiological parameters influence the perceived
outcome.</p>
      <p>Combining learned preferences and soundscapes into
intelligent augmented hearing interfaces would be a radical
paradigm shift in hearing health care. Instead of a single
default setting, users may navigate a multidimensional
continuum of settings. The system could be optimized in real-time
by combining learned preferences with crowdsourced
behavioral patterns. With growing numbers of people suffering from
hearing loss, we need to make users an active part of
hearing health care. Conversational augmented hearing interfaces
may not only provide a scalable sustainable solution but also
actively engage users and thereby improve their quality of life.</p>
    </sec>
    <sec id="sec-14">
      <title>ACKNOWLEDGEMENTS</title>
      <p>This work is supported by the Technical University of
Denmark and the Oticon Foundation. Oticon EVOTION HAs are
partly funded by European Union’s Horizon 2020 research and
innovation programme under Grant Agreement 727521
EVOTION. We would like to thank Eriksholm Research Centre
and Oticon A/S for providing hardware, access to test subjects,
clinical approval and clinical resources.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Gabriel</given-names>
            <surname>Aldaz</surname>
          </string-name>
          , Sunil Puria, and
          <string-name>
            <given-names>Larry J.</given-names>
            <surname>Leifer</surname>
          </string-name>
          .
          <year>2016</year>
          .
          <article-title>Smartphone-Based System for Learning and Inferring Hearing Aid Settings</article-title>
          .
          <source>Journal of the American Academy of Audiology 27</source>
          ,
          <issue>9</issue>
          (
          <year>2016</year>
          ),
          <fpage>732</fpage>
          -
          <lpage>749</lpage>
          . DOI: http://dx.doi.org/10.3766/jaaa.15099
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>Yngve</given-names>
            <surname>Dahl</surname>
          </string-name>
          and Geir Kjetil Hanssen.
          <year>2016</year>
          .
          <article-title>Breaking the Sound Barrier: Designing for Patient Participation in Audiological Consultations</article-title>
          .
          <source>In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16)</source>
          . ACM, New York, NY, USA,
          <fpage>3079</fpage>
          -
          <lpage>3090</lpage>
          . DOI: http://dx.doi.org/10.1145/2858036.2858126
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>Harvey</given-names>
            <surname>Dillon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Justin A.</given-names>
            <surname>Zakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Hugh</given-names>
            <surname>McDermott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Gitte</given-names>
            <surname>Keidser</surname>
          </string-name>
          , Wouter Dreschler, and
          <string-name>
            <given-names>Elizabeth</given-names>
            <surname>Convery</surname>
          </string-name>
          .
          <year>2006</year>
          .
          <article-title>The trainable hearing aid: What will it do for clients and clinicians?</article-title>
          <volume>59</volume>
          (04
          <year>2006</year>
          ),
          <fpage>30</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>Mounya</given-names>
            <surname>Elhilali</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Modeling the Cocktail Party Problem</article-title>
          .
          <source>Auditory System at the Cocktail Party</source>
          <volume>60</volume>
          (
          <year>2017</year>
          ),
          <fpage>111</fpage>
          -
          <lpage>135</lpage>
          . DOI: http://dx.doi.org/10.1007/978-3-319-51662-2_5
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>David</given-names>
            <surname>Hartley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Elena</given-names>
            <surname>Rochtchina</surname>
          </string-name>
          ,
          <string-name><given-names>Philip</given-names> <surname>Newall</surname></string-name>,
          <string-name><given-names>Maryanne</given-names> <surname>Golding</surname></string-name>, and
          <string-name><given-names>Paul</given-names> <surname>Mitchell</surname></string-name>.
          <year>2010</year>
          .
          <article-title>Use of Hearing Aids and Assistive Listening Devices in an Older Australian Population</article-title>
          .
          <source>Journal of the American Academy of Audiology</source>
          <volume>21</volume>
          ,
          <issue>10</issue>
          (
          <year>2010</year>
          ),
          <fpage>642</fpage>
          -
          <lpage>653</lpage>
          . DOI: http://dx.doi.org/10.3766/jaaa.21.10.4
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          Hearing Review.
          <year>2011</year>
          .
          <article-title>35 million Americans suffering from hearing loss</article-title>
          . (
          <year>2011</year>
          ). https://www.hear-it.org/35-million-Americans-suffering-from-hearing-loss [Online; accessed 2017-01-29].
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>Benjamin</given-names>
            <surname>Johansen</surname>
          </string-name>
          , Yannis Paul Raymond Flet-Berliac, Maciej Jan Korzepa, Per Sandholm, Niels Henrik Pontoppidan, Michael Kai Petersen, and Jakob Eg Larsen.
          <year>2017</year>
          .
          <article-title>Hearables in Hearing Care: Discovering Usage Patterns Through IoT Devices</article-title>
          . Springer International Publishing, Cham,
          <fpage>39</fpage>
          -
          <lpage>49</lpage>
          . DOI: http://dx.doi.org/10.1007/978-3-319-58700-4_4
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name><given-names>Mead C.</given-names> <surname>Killion</surname></string-name>
          .
          <year>2002</year>
          .
          <article-title>New thinking on hearing in noise: a generalized articulation index</article-title>
          . (
          <year>2002</year>
          ). DOI: http://dx.doi.org/10.1055/s-2002-24976
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name><given-names>Ruth Y.</given-names> <surname>Litovsky</surname></string-name>,
          <string-name><given-names>Matthew J.</given-names> <surname>Goupell</surname></string-name>,
          <string-name><given-names>Sara M.</given-names> <surname>Misurelli</surname></string-name>, and
          <string-name><given-names>Alan</given-names> <surname>Kan</surname></string-name>
          .
          <year>2017</year>
          .
          <article-title>Hearing with Cochlear Implants and Hearing Aids in Complex Auditory Scenes</article-title>
          .
          <source>Auditory System at the Cocktail Party</source>
          <volume>60</volume>
          (
          <year>2017</year>
          ),
          <fpage>261</fpage>
          -
          <lpage>291</lpage>
          . DOI: http://dx.doi.org/10.1007/978-3-319-51662-2_10
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name><given-names>Gill</given-names> <surname>Livingston</surname></string-name>
          , Andrew Sommerlad, Vasiliki Orgeta, Sergi G Costafreda, Jonathan Huntley, David Ames,
          <string-name><given-names>Clive</given-names> <surname>Ballard</surname></string-name>
          , Sube Banerjee, Alistair Burns, Jiska Cohen-Mansfield, Claudia Cooper, Nick Fox, Laura N Gitlin, Robert Howard, Helen C Kales, Eric B Larson, Karen Ritchie,
          <string-name><given-names>Kenneth</given-names> <surname>Rockwood</surname></string-name>
          , Elizabeth L Sampson, Quincy Samus, Lon S Schneider, Geir Selbaek, Linda Teri, and
          <string-name><given-names>Naaheed</given-names> <surname>Mukadam</surname></string-name>
          .
          <year>2017</year>
          .
          <article-title>Dementia prevention, intervention, and care</article-title>
          .
          <source>The Lancet</source>
          (
          <year>2017</year>
          ). DOI: http://dx.doi.org/10.1016/S0140-6736(17)31363-6
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name><given-names>Abby</given-names> <surname>McCormack</surname></string-name>
          and
          <string-name><given-names>Heather</given-names> <surname>Fortnum</surname></string-name>
          .
          <year>2013</year>
          .
          <article-title>Why do people fitted with hearing aids not wear them?</article-title>
          <source>International Journal of Audiology</source>
          <volume>52</volume>
          ,
          <issue>5</issue>
          (
          <year>2013</year>
          ),
          <fpage>360</fpage>
          -
          <lpage>368</lpage>
          . DOI: http://dx.doi.org/10.3109/14992027.2013.769066
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name><given-names>J. B. B.</given-names> <surname>Nielsen</surname></string-name>
          ,
          <string-name><given-names>J.</given-names> <surname>Nielsen</surname></string-name>
          , and
          <string-name><given-names>J.</given-names> <surname>Larsen</surname></string-name>
          .
          <year>2015</year>
          .
          <article-title>Perception-Based Personalization of Hearing Aids Using Gaussian Processes and Active Learning</article-title>
          .
          <source>IEEE/ACM Transactions on Audio, Speech, and Language Processing</source>
          <volume>23</volume>
          ,
          <issue>1</issue>
          (Jan
          <year>2015</year>
          ),
          <fpage>162</fpage>
          -
          <lpage>173</lpage>
          . DOI: http://dx.doi.org/10.1109/TASLP.2014.2377581
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name><given-names>D.</given-names> <surname>Oertel</surname></string-name>
          and
          <string-name><given-names>E. D.</given-names> <surname>Young</surname></string-name>
          .
          <year>2004</year>
          .
          <article-title>What's a cerebellar circuit doing in the auditory system?</article-title>
          <source>Trends in Neurosciences</source>
          <volume>27</volume>
          ,
          <issue>2</issue>
          (
          <year>2004</year>
          ),
          <fpage>104</fpage>
          -
          <lpage>110</lpage>
          . DOI: http://dx.doi.org/10.1016/j.tins.2003.12.001
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          Oticon.
          <year>2017</year>
          .
          <article-title>Oticon Opn product guide</article-title>
          . (
          <year>2017</year>
          ). https://www.oticon.co.za/-/media/oticon/main/pdf/master/opn/pbr/177406uk_pbr_opn_product_guide_17_1.pdf [Online; accessed 2017-12-17].
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name><given-names>David K.</given-names> <surname>Ryugo</surname></string-name>
          .
          <year>2011</year>
          .
          <article-title>Introduction to Efferent Systems</article-title>
          .
          <source>Springer Handbook of Auditory Research</source>
          <volume>38</volume>
          (
          <year>2011</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          . DOI: http://dx.doi.org/10.1007/978-1-4419-7070-1_1
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17. World Health Organization.
          <year>2011</year>
          .
          <article-title>Grades of hearing impairment</article-title>
          . (
          <year>2011</year>
          ). http://www.who.int/pbd/deafness/hearing [Online; accessed 2017-12-17].
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>