Our ears guide us in our everyday environment, warn us of dangers, let us communicate with others, and allow us the pleasure of music. Like blindness, deafness and hearing impairment are debilitating. The degree of impairment varies between individuals, from a moderate loss of sensitivity in certain frequency bands (as in presbycusis), to cochlear dead regions, to complete loss of hearing. However, many people complain that they cannot understand conversations, despite a normal or almost normal audiogram.
According to the World Health Organization, 5.3% of the population worldwide has debilitating hearing impairment, about 4.7% in Europe according to the WHO definition. The prevalence is greater in poor countries. It also increases rapidly with age: according to the NIDCD, the prevalence of debilitating hearing loss in the US jumps from 2% of adults aged 45-54, to 8.5% for ages 55-64, 25% for ages 65-74, and 50% for adults over 75. See also this longitudinal study. Available figures for Europe are similar, despite differences in criteria. Given the current trend towards aging of the population (life expectancy is currently 80 years or more in Europe, and rising), this dismal fate awaits most of us… Hearing impairment in turn may be a factor in other forms of health decline, such as dementia or depression.
A common complaint after middle age is that we “cannot understand a conversation” in a noisy or reverberant environment (such as a pub, a restaurant, or a cocktail party). A visit to a specialist may reveal no objective loss of sensitivity: the issue is not that we cannot hear a soft voice, but rather that we cannot hear a voice, however loud, if its Signal-to-Noise Ratio (SNR) is insufficient, in other words, if the target is weak relative to the competing clatter.
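SNR is conventionally expressed in decibels: ten times the base-10 logarithm of the ratio of target power to noise power. As a rough illustration (the signals and parameters below are arbitrary, chosen only for the example), a target and a masker of equal power sit at 0 dB SNR, which is roughly the regime of a lively pub:

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * np.log10(np.mean(np.square(signal)) / np.mean(np.square(noise)))

fs = 8000                                         # sample rate (Hz), arbitrary
t = np.arange(fs) / fs                            # one second of signal
voice = np.sin(2 * np.pi * 200 * t)               # stand-in "voice", power 0.5
rng = np.random.default_rng(1)
babble = rng.standard_normal(fs) * np.sqrt(0.5)   # noise with matching power

print(snr_db(voice, babble))                      # close to 0 dB: equally strong
```

At 0 dB the target carries no more power than the competition; a healthy ear copes, an impaired one does not.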
While the “normal” person can function in a noisy environment (near 0 dB SNR), the “impaired” person cannot (from Killion 2002).
Our hearing is no longer resilient to noise, despite nominally normal thresholds. Such “hidden hearing loss” has important perceptual consequences, and it can accelerate a decline of cognitive function. Loss of communication impacts social skills and promotes loss of confidence and isolation, particularly among the elderly. The population with normal thresholds but hidden hearing loss adds to the population with objective hearing loss (presumably also suffering): hearing impairment is a major health problem.
Hearing aids are an amazing technological advance. A hearing aid amplifies sound so as to compensate for reduced sensitivity; the gain can be frequency-dependent (e.g. more at high than at low frequencies) so as to compensate for a frequency-dependent hearing loss (e.g. reduced sensitivity at high frequencies, typical of presbycusis). Modern digital hearing aids include sophisticated circuitry to avoid acoustic feedback (which occurs when the amplified sound is picked up by the microphone), provide compression (to reduce the gain at high amplitude, to compensate for the reduced dynamic range of an impaired ear), and so on. Modern hearing aids are highly miniaturized, can talk to smart phones, and include noise-reduction algorithms.
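The compression stage can be sketched as a static gain curve: below a threshold the signal passes unchanged, while above it each extra input decibel yields only 1/ratio decibels of extra output, squeezing loud sounds into the ear's reduced dynamic range. The threshold and ratio below are arbitrary illustrative values, not the parameters of any particular device:

```python
def compression_gain_db(level_db, threshold_db=-40.0, ratio=3.0):
    """Gain (in dB, <= 0) applied by a simple static compressor.

    Below threshold_db the signal is untouched; above it, each extra
    input dB produces only 1/ratio dB of extra output level.
    Illustrative parameters only.
    """
    over = max(level_db - threshold_db, 0.0)   # dB above the threshold
    reduction = over * (1.0 - 1.0 / ratio)     # attenuation to apply
    return 0.0 - reduction

# A soft sound (-60 dB) passes unchanged; a loud one (-10 dB) is
# attenuated by 20 dB, compressing a 50 dB input span into 30 dB.
print(compression_gain_db(-60.0))  # 0.0
print(compression_gain_db(-10.0))  # -20.0
```

Real hearing aids apply such curves per frequency band, with attack and release time constants, but the static curve captures the core idea.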
A hearing aid can really improve your life… (Sivantos-Press Image)
Our partner Oticon is deeply involved in the development of hearing aid technology.
Paradoxically, only a small proportion of people who need a hearing aid use one: a common observation of health providers and caretakers is that the device remains in a drawer. Multiple factors conspire against acceptance, such as cost, stigma, vanity, denial, or difficulty adjusting to the unfamiliar enhanced perception. One major factor stands out, however: amplification does not restore the healthy ear's ability to hear out a target within noise. In a noisy environment, the noise is amplified together with the target: before amplification the sound scene is confused; after amplification it sounds just as confused, only louder. Signal-to-noise ratio, rather than mere amplification, is the goal to aim for.
Modern hearing aids attempt to address the problem of insufficient SNR by means of noise-reduction algorithms and acoustic signal processing. Noise reduction is reported to reduce listening effort, but proof of objective benefits for intelligibility has been elusive. A difficulty is that target and noise are intimately mixed, so noise-reduction algorithms have only weak cues to work with.
The microphone array can be tuned to enhance one source while attenuating others (from The Lab Book pages).
Acoustic signal processing can be more effective, for example using directional microphones, or microphone arrays to create a directional pattern of sensitivity that can be tuned to enhance a particular speaker, or to suppress a particular noise source.
The principle of a microphone array is very simple: the signal of one microphone is added to that of the others, with slight delays to compensate for the difference in time of arrival from the desired source. This increases the amplitude of the desired source relative to noise sources in other directions. Alternatively, the signals can be subtracted to zero out an unwanted noise. Combining microphones in this way is called beamforming.
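The delay-and-sum idea can be sketched in a few lines. The geometry here is hypothetical (a fixed inter-microphone delay of 8 samples for the target, and the opposite delay for the noise, with a circular shift standing in for a true acoustic delay): after the target components are time-aligned and averaged, they add coherently while the noise components do not, yielding roughly 3 dB of SNR improvement for two microphones:

```python
import numpy as np

fs, delay = 16000, 8              # sample rate (Hz); inter-mic delay (samples)
t = np.arange(fs) / fs            # one second of signal
target = np.sin(2 * np.pi * 440 * t)
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)

# The target reaches mic 2 `delay` samples later than mic 1; the noise
# arrives from the opposite side, so its inter-mic delay has the
# opposite sign. (np.roll is a crude circular delay, for illustration.)
mic1 = target + noise
mic2 = np.roll(target, delay) + np.roll(noise, -delay)

# Delay-and-sum beamforming: advance mic 2 to re-align the target,
# then average. The target adds coherently; the noise does not.
beam = 0.5 * (mic1 + np.roll(mic2, -delay))

def snr_db(sig, noi):
    return 10 * np.log10(np.mean(sig**2) / np.mean(noi**2))

# Residual noise in the beam output: two decorrelated copies, averaged.
beam_noise = 0.5 * (noise + np.roll(noise, -2 * delay))
gain = snr_db(target, beam_noise) - snr_db(target, noise)
print(gain)                       # roughly 3 dB of SNR improvement
```

Doubling the number of microphones buys about another 3 dB against uncorrelated noise, which is why large arrays can be so effective.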
Acoustic signal processing can be extremely effective, especially if some of the microphones are close to the target or the main interferers. The more microphones available, the more noise can be canceled while preserving the target, and large arrays of microphones have been built to test these ideas.
Recent developments in microphone technology (for example MEMS microphone arrays), wireless transmission technology (for example Bluetooth and specialized low-latency protocols), processing power, and processing algorithms, together with the rapid spread of intelligent devices (for example smart phones or tablets), are leading to a new technological landscape, in which the idea of harnessing acoustic signal processing to deliver extremely high-quality processed sound to hearing aid users may become practical. Today, constraints of cost, power consumption, geometry, usability, user acceptability, and marketing make such a solution seem utopian, at least for a marketable device. Tomorrow, the landscape may change, as has happened so many times in the history of technology.
One crucial obstacle remains. Acoustic signal processing can enhance any source relative to its competitors, but how does the device know which source to enhance? Enhancing the noise instead of the target would obviously be disastrous. The healthy auditory system is adept at making the right choice, thanks to the fact that all stages of processing from ear to brain are under top-down control of the subject’s attention (via what is known as the efferent system, a network of massive neural connections that feed down from the brain to the ear). A device external to the brain lacks this control.
There exist technical solutions to this problem. Directional microphones on hearing aids usually point forwards, so the user can focus on a target by turning the head. The directional pattern can be switched between directional and omnidirectional, either manually (a switch on the hearing aid, or a hand-held device that controls it) or automatically based on how some algorithm interprets the ongoing sound scene. None of these is completely satisfactory: a user might want to attend to sources to the side or behind, or might not want to have to fumble with a hand-held device, or might become annoyed that the device makes its own decisions, changing the “rules of the game” on its own initiative.
This is where the COCOHA project comes in. We will investigate means by which a user's brain can directly control a device, effectively extending the user's top-down efferent system beyond the ear to the device outside, thus adding to the wide range of coping strategies that the hearing impaired deploy to function and be happy in their daily lives.