The COCOHA Project

The COCOHA project addresses a need: to allow a hearing aid user to control the sophisticated processing offered by state-of-the-art and future technology. In particular, we investigate how this control might be implemented on the basis of signals measured from the brain.

A useful working assumption is that the problem of acoustic scene analysis has been solved. This is not quite yet the case: intense efforts are still underway to address the hardware, algorithmic, industrial, and marketability issues involved. Part of the COCOHA project is devoted to dealing with these constraints and to interfacing with research and technology efforts elsewhere. Solving them is not, however, the main thrust of our project.

Assuming perfect acoustic scene analysis, for N acoustic sources the acoustic front end will produce N clean signals, one for each source. If acoustic scene analysis is not perfect, the hypothesis remains useful as an approximation. Another simplifying hypothesis is that the acoustic front end operates autonomously, without requiring top-down aid. This hypothesis, which may require revision, allows us to formulate the problem in a clear bottom-up way. The problem to solve now is: how do we allow the brain to choose among N clean acoustic streams?

Figure: COCOHA’s working hypothesis. The acoustic front end takes care of cleaning up the sound scene, providing as many clean streams as there are sources in the environment. The cognitively controlled back end takes care of selecting which source the user wishes to attend to, based on the user’s brain signals.
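How such a choice might be made is the topic of the project’s signal-analysis work, but the overall shape of the back end can be illustrated. Below is a minimal sketch, assuming an envelope-correlation approach of the kind used in the EEG attention-decoding literature: an envelope reconstructed from the user’s EEG is compared with the envelope of each clean stream, and the best-matching stream is selected. All names, signals, and parameters here are illustrative, not a description of COCOHA’s actual implementation.

```python
import numpy as np

def select_attended_stream(stream_envelopes, eeg_envelope):
    """Pick the stream whose envelope correlates best with the envelope
    reconstructed from EEG. Inputs are illustrative placeholders."""
    scores = [np.corrcoef(env, eeg_envelope)[0, 1] for env in stream_envelopes]
    return int(np.argmax(scores)), scores

# Toy demonstration with synthetic data: two "clean" source envelopes and a
# fake EEG-derived envelope that is a noisy copy of the attended source.
rng = np.random.default_rng(0)
n_samples = 1000
envelopes = [np.abs(rng.standard_normal(n_samples)) for _ in range(2)]
attended = 1
eeg_envelope = envelopes[attended] + 0.5 * rng.standard_normal(n_samples)

choice, scores = select_attended_stream(envelopes, eeg_envelope)
print(f"selected stream: {choice}, correlation scores: {scores}")
```

In a real system the EEG-derived envelope would come from a decoder trained on the user’s brain responses, and the selection would drive which stream the hearing aid amplifies.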

In accordance with standard project management practice, work within the COCOHA project is broken into “Work Packages”, each devoted to one aspect of the problem and spearheaded by one of our partners. These are:

  1. Definition of acoustic scenarios (for the collection of audio and EEG data). This is led by our partner DTU.
  2. Signal analysis methods to decode sound streams and EEG (electroencephalography) signals, to allow the system to make its choice of which stream to attend. This is led by our partner ENS.
  3. Experimental investigation of auditory attentional processes in normal-hearing listeners, and recording of EEG data to support the other work packages. This is led by our partner UCL.
  4. Auditory attention and EEG recording in hearing impaired listeners. This is led by our partner DTU.
  5. Advanced hardware and signal processing for acoustic scene analysis and decoding.  This is led by our partner UZH.
  6. Integration into a real, usable, and marketable hearing aid.  This is led by our partner Oticon.

This project is scientifically and technologically ambitious, in the spirit of the European H2020 initiative. Our ambition is to do good science, addressing the underlying scientific issues rigorously, and good engineering, overcoming the technological obstacles to deliver a workable solution for the growing population of hearing-impaired people.

To succeed, we must address a number of scientific and technological challenges.

→ Challenges
