Paris – Ecole Normale Supérieure, Laboratoire des Systèmes Perceptifs, Audition team
|Alain de Cheveigné – Senior Scientist, project coordinator
My research interests are in Hearing (pitch perception, auditory scene analysis), Audio Processing (fundamental frequency estimation, voice separation) and Multichannel Signal Processing for electrophysiology. I see the COCOHA project as an exciting opportunity to develop our expertise and understanding of brain processes, and apply this knowledge to address one of the most widespread hurdles to a satisfying life: hearing impairment.
|Daniel Pressnitzer – Senior Scientist
My research is focused on hearing, with a special interest in time: the temporal structure of sound and sound scenes, and the neural bases of their perception. Current projects include investigations of auditory memory, rapid adaptive plasticity, perceptual bistability in the auditory and visual modalities, music perception, recognition of natural sound sources, computational models of hearing based on spike timing information, and comparisons of auditory change detection and visual change blindness.
|Shihab Shamma – Professor
Shihab Shamma received his B.S. degree in 1976 from Imperial College, London, U.K. He received his M.S. and Ph.D. degrees in Electrical Engineering from Stanford University in 1977 and 1980, respectively, along with an M.A. in Slavic Languages and Literature in 1980 from the same institution. Dr. Shamma has been a member of the University of Maryland faculty since 1984, when he started as an Assistant Professor in the Electrical Engineering Department. He has been associated with the Institute for Systems Research since its inception in 1985 and received a joint appointment there in 1990. He is a fellow of the Acoustical Society of America. He is interested in the representation of the acoustic signal at various levels of the mammalian auditory system. His work ranges from theoretical models of auditory processing in early and central auditory stages, to neurophysiological investigations of the auditory cortex, to psychoacoustical experiments on human perception of acoustic spectral profiles.
|Daniel D.E. Wong – Postdoctoral Researcher
I am a postdoctoral researcher working on signal processing and machine learning techniques to decode the target of auditory attention from EEG data in real time. I am interested in understanding the neurophysiological mechanisms of sound processing, and in translating this knowledge into hearing aid technology that provides better functional integration with the wearer.
I received my PhD in Biomedical Engineering from the University of Toronto in 2012, where I developed EEG signal processing methods for neuroimaging of cochlear implant patients. This allowed us to study the effects of cochlear implantation on the auditory pathways. I continued this line of research in my previous postdoctoral position at the University of Konstanz in addition to improving neural source localization methods. Some of this work is available as part of the Nutmeg and FieldTrip toolboxes.
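A widely used approach to decoding the target of auditory attention is stimulus reconstruction: a linear decoder is trained to reconstruct the speech envelope from time-lagged EEG, and the talker whose envelope correlates best with the reconstruction is taken to be the attended one. Here is a minimal sketch on simulated data (the envelopes, the single "EEG" channel, and all parameters below are invented for illustration, not taken from the project's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def lagged(x, n_lags):
    """Design matrix of time-lagged copies of x, shape (len(x), n_lags)."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[: len(x) - k]
    return X

def smooth_env(n):
    """A smooth random signal standing in for a speech envelope."""
    return np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")

n = 2000
env_a, env_b = smooth_env(n), smooth_env(n)   # two competing talkers

# Simulated single-channel 'EEG' that tracks the attended talker (A)
# with a small neural lag, plus sensor noise.
eeg = np.roll(env_a, 5) + 0.1 * rng.standard_normal(n)

# Train a ridge-regression decoder on the first half, where the
# attended envelope is known.
X, half, lam = lagged(eeg, 16), n // 2, 1.0
w = np.linalg.solve(X[:half].T @ X[:half] + lam * np.eye(16),
                    X[:half].T @ env_a[:half])

# On held-out data, compare the reconstruction with each candidate envelope.
recon = X[half:] @ w
corr_a = np.corrcoef(recon, env_a[half:])[0, 1]
corr_b = np.corrcoef(recon, env_b[half:])[0, 1]
attended = "A" if corr_a > corr_b else "B"
```

In practice the decoder is trained across many EEG channels and subjects, and the length of the decision window trades off decoding accuracy against latency — the key constraint for real-time use.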
|Hadrien Jean – Doctoral Student
I am a doctoral student and my focus is on perceptual auditory training. My aim is to design and test new tools to train and evaluate auditory perception. These tools are based on earlier findings showing that frequency discrimination can be enhanced with perceptual training, and are inspired by music practice and the acquisition of video-gaming skills.
|Dorothée Arzounian – Doctoral Student
My initial training is in physics and I am now a PhD candidate in cognitive neuroscience. My thesis focuses on the spontaneous fluctuations of brain states and the way they impact our perception, with a particular interest in auditory perception. The tools I use include signal processing for the characterization of brain states from EEG signals, but also psychoacoustics and modelling for the study of context effects and of fluctuations of auditory performance over time.
|Giovanni Di Liberto – Postdoctoral Researcher
I received the M.E. degree in Computer Engineering from the University of Padova, Italy in 2013. After a period working on my M.E. thesis at University College Cork (UCC, Ireland), I joined Edmund Lalor’s research lab in Dublin, where I pursued a PhD in auditory neuroscience. I am currently a postdoctoral researcher in the ENS team. My research is focused on how humans process natural continuous stimuli; in particular, I am interested in speech and language perception.
London – University College London, Ear Institute, Chait Lab
|Maria Chait – Reader, project PI
I am a Reader in auditory cognitive neuroscience at the Ear Institute, UCL. I moved to UCL in 2007 as a Marie Curie research fellow, following a short post-doc with the Equipe Audition at the Ecole normale supérieure, Paris. My PhD research (2006) was conducted in the Neuroscience and Cognitive Science program at the University of Maryland, College Park, under the supervision of Jonathan Simon and David Poeppel. My bachelor’s background is in Computer Science, Economics, and East Asian Studies. Among my UCL responsibilities is the co-direction of the Sensory Systems, Technologies and Therapies (SenSyT) PhD program.
|Ulrich Pomper – Postdoctoral Researcher [alumnus]
|Daniel Bates – Doctoral Student
Lyngby – Technical University of Denmark, Hearing Systems
|Torsten Dau – Professor, project PI
Torsten Dau conducts research in hearing technology, working on different technical solutions in this area, such as hearing aids. He is specifically concerned with human sound perception in complex acoustic environments and with quantitative modeling of auditory signal processing in hearing.
|Marton Marschall – Postdoctoral Researcher [alumnus]
|Jens Hjortkjær – Senior Researcher
I am a senior researcher in the DTU Hearing Systems group and at the Danish Research Centre for Magnetic Resonance (DRCMR) at Copenhagen University Hospital Hvidovre. I use functional MRI and EEG to understand how auditory attention modulates the processing of sounds in human cortex. In COCOHA, I work on understanding how well we can decode attention from EEG signals in everyday acoustic situations where many sound sources are competing for attention. I am also focusing on understanding how hearing impairment influences the ability of the brain to track an attended speech signal.
|Søren Fuglsang – Doctoral Student
I am a doctoral student working in the Hearing Systems lab at the Technical University of Denmark. In my work, I use linear and nonlinear signal processing techniques to study how information about auditory stimuli and listeners’ cognitive state is represented in human cortical EEG activity. My research interests include speech perception, cognitive neuroscience, auditory attention and machine hearing.
|Jonatan Märcher-Rørsted – Research Assistant
|Søren Vørnle Nielsen – MSc student
Zurich – University of Zurich, Institute of Neuroinformatics, Sensors
|Shih-Chii Liu – Group Leader, project PI
PD Dr. Shih-Chii Liu runs the Sensors group together with Tobias Delbruck at the Institute of Neuroinformatics, University of Zurich and ETH Zurich. The group develops neuromorphic sensor circuits, including the Dynamic Vision Sensor and the Dynamic Audio Sensor, event-driven deep networks and algorithms, and sensory-motor systems.

Our vision is to develop a portable, low-power, low-latency person-held device that pre-computes acoustic information about incoming sounds on demand. Information such as the location of sound sources will be transmitted wirelessly to a central computer, and will be computed only when there are sounds in the surroundings or on command from the computer. The information from the device will be incorporated into the algorithms for computing the subject’s attended source.
|Tobi Delbruck – Professor
I studied physics and applied math as an undergraduate in San Diego and then Computation in Neural Systems as a graduate student at Caltech. After projects with Christof Koch and David Van Essen, I settled on building neuromorphic vision sensors with Carver Mead for my PhD work. After my graduate work, I went to Silicon Valley to work as a consultant on electronic imaging, first with Synaptics, and then with Arithmos, National Semiconductor and Foveon. I joined INI in 1998, together with Dr. Shih-Chii Liu. Aside from a major detour to work on the luminous tactile floor of INI’s “Ada” contribution to the 2003 Swiss national exhibition, I have been working on developing neuromorphic vision sensors for real-world applications in machine vision. My doctoral student Patrick Lichtsteiner and I achieved a breakthrough with the first highly functional Dynamic Vision Sensor event-based silicon retina in 2006. Since then, the Sensors group has been evolving this technology and exploring its manifold applications in event-driven vision. Our most recent breakthrough is our flagship sensor, DAVIS. In my spare time, I enjoy reading and like to play tennis and basketball. I ride my bike to work, sometimes practice card magic, and recently rejoined the Magic Castle in Hollywood. For holidays we often travel back to our home in Pasadena, where I grew up as a faculty child.
|Enea Ceolini – Doctoral Student
The brain is extremely efficient at dealing with real-time input streams of information, processing them in a way that is both robust and efficient. In my work I focus on understanding the principles of this computation and on translating some of these principles into algorithms that can process the dynamical information from event-driven sensors such as the Dynamic Audio Sensor and Dynamic Vision Sensor. This form of processing, together with the sensors, brings advantages such as speed, low power consumption and compact information representation. Specifically, I am interested in developing a robust and fast algorithm that can extract single sources from an audio mixture collected by sensors in an auditory scene. I believe that event-driven computation can not only help us reach goals that traditional methods cannot achieve, but also bring insights into how the brain might be solving these difficult tasks.
|Adrian Huber – Doctoral Student
My interests lie primarily in signal processing and the mathematical theory thereof. Within the scope of the COCOHA project I am particularly interested in the question of how signal mixtures can be effectively separated into their constituent components. Geometrical questions interest me as well: research questions intricately tied to geometry include finding optimal placements of sensors in spatial configurations for localizing sound sources and for distributed beamforming. My aim is to develop a system for online source separation/streaming of data captured with multiple distributed microphones. In this context it is necessary to study not just the correct behaviour of algorithms but also the computational burden they impose on hardware.
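The simplest beamformer illustrating how multiple microphones can favor one direction is delay-and-sum: the signals are time-aligned on the target direction so the target adds coherently while uncorrelated noise averages out. A toy sketch (the array geometry, source, and noise levels below are invented for illustration):

```python
import numpy as np

fs, c = 16000, 343.0                      # sample rate (Hz), speed of sound (m/s)
mic_x = np.array([0.0, 0.1, 0.2, 0.3])    # four mics on a line (m)
angle = np.deg2rad(30)                    # assumed target direction

# Far-field delay of each mic relative to the first, rounded to whole samples.
delays = np.round(mic_x * np.sin(angle) / c * fs).astype(int)

t = np.arange(fs) / fs
source = np.sin(2 * np.pi * 440 * t)      # target signal

# Simulate the mic signals: delayed target plus independent noise per mic.
rng = np.random.default_rng(1)
mics = [np.roll(source, d) + 0.3 * rng.standard_normal(len(t)) for d in delays]

# Delay-and-sum: undo the delays and average. The target stacks coherently,
# while uncorrelated noise power drops by the number of microphones.
out = np.mean([np.roll(m, -d) for m, d in zip(mics, delays)], axis=0)
```

With four microphones the residual noise standard deviation drops by about a factor of two relative to a single microphone; real systems use fractional delays (or frequency-domain phase shifts) and adaptive weights rather than integer sample shifts.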
|Jithendar Anumula – Doctoral Student
I am interested in understanding the main working principles of the
Snekkersten (Denmark) – Oticon, Eriksholm Research Centre, Cognition and Hearing
|Thomas Lunner – Senior Scientist & Professor
I was part of developing the world’s first digital hearing aid. Now my focus is on hearing aids adapted to a person’s cognitive abilities. I have also contributed to the understanding of the interplay between the signal processing in the hearing aid and the user’s cognitive ability, through modeling work, experiments carried out in the laboratory and in the field, and brain imaging. I am one of the inventors of the EarEEG concept, which is to pick up EEG signals from the surface of an ear-mould of the kind typically used in hearing instruments and other listening devices.

Whilst working at Eriksholm Research Centre, I am also employed part-time at Linköping University. I see myself as a bridge-builder, and I find it very advantageous to be in two worlds. My own research has increasingly focused on precisely the connection between hearing aids and cognitive functions. The aim is to show under what circumstances hearing becomes strenuous, how the hearing aid can relieve this pressure, and how this can be boosted by disruptive innovations.

My vision for COCOHA is to show that we can determine the focus of attention of hearing-impaired subjects in a variety of listening scenarios with one target source and a number of competing sources. The vision is that we can determine this focus of attention quickly enough to steer hearing aid beamformers and hearing aid accessories, such as remote microphones, towards the attended source. We will then have shown that it is possible to help the hearing-impaired person by directing amplification only to the sources they want to listen to, rather than amplifying all sounds as is done today, which does not help them in difficult listening situations.
You could say that the vision is ‘artificial attention’ help for the hearing impaired. My role is to develop hearing aid systems, and systems that simulate hearing aids, including EarEEG systems that can be incorporated in future hearing aids, drawing on the insights we gain from the collaboration between the COCOHA partners.
|Carina Graversen – Doctoral Student
I obtained a bachelor’s degree in electronic engineering in January 2000, with several courses in software development and signal processing. After the bachelor’s, I worked for 4.5 years as a software developer in a private company before returning to university for a master’s degree in biomedical engineering. I did my master’s thesis at Aalborg University Hospital in Denmark on EEG analysis and machine learning for pain and pharmacology research. Afterwards, I was invited to stay and do a PhD on the same topic. After the PhD, I continued working at the hospital for several years as a member of a local steering group and as supervisor of several master’s and PhD students on a large project on the prediction of optimal pain treatment for individual patients based on machine learning. In November 2014 I joined Eriksholm Research Centre, and I have been involved in COCOHA since January 1st, 2015. In COCOHA, I mainly work on the technical development of the EarEEG concept: how to improve the electrodes and integrate the sensors into a future hearing aid. I will also be highly involved in the final demonstrator.
|Emina Alickovic – Postdoctoral Researcher
|Antoine Faivre – Doctoral student
My PhD project focuses on using in-ear electrodes (EarEEG) to measure eye gaze in order to steer a hearing aid. The front of the eye is charged positively and the back is charged negatively, so when the eyes look to the left, the left side of the head becomes more positive and the right side more negative. Measuring this potential difference is called electrooculography (EOG). The EOG signal is usually measured with electrodes placed around the eyes, but it can also be recovered with EarEEG devices. The goal is to combine this signal with a head-tracking system to determine the absolute gaze of a user and apply this information to a hearing aid.
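The basic recovery step can be sketched with a simple calibration: over a moderate range, the horizontal EOG amplitude is roughly linear in gaze angle, so having the subject fixate a few known targets lets us fit a line and invert it. All readings below are made up for illustration, not real EarEEG data:

```python
import numpy as np

# Hypothetical calibration: subject fixates known angles while we record
# the left-minus-right potential (microvolts).
angles_deg = np.array([-30, -15, 0, 15, 30])
eog_uv = np.array([-120.0, -62.0, 2.0, 58.0, 121.0])  # invented readings

# Least-squares line: eog ≈ gain * angle + offset.
gain, offset = np.polyfit(angles_deg, eog_uv, 1)

def gaze_from_eog(sample_uv):
    """Invert the calibration to estimate horizontal gaze angle (degrees)."""
    return (sample_uv - offset) / gain

est = gaze_from_eog(80.0)  # roughly 20 degrees with these readings
```

In practice the signal must also be cleaned of blinks and drift, and combined with head orientation to obtain absolute gaze direction.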
|Alejandro Lopez Valdes – Postdoctoral Researcher
Alejandro Lopez Valdes is a member of the Cognitive Hearing Science group at Eriksholm Research Centre in Denmark. He holds an MSc in Biomedical Engineering from the University of Groningen in the Netherlands and a PhD in Neural Engineering from Trinity College Dublin in Ireland. His previous research has focused on cochlear implant signal processing and on developing objective measures of sound discrimination in cochlear implant users. LinkedIn, ResearchGate.
|Tanveer Bhuiyan – Research Engineer
I am working as a research engineer in the Cognitive Hearing Science (CHS) group headed by Prof. Thomas Lunner at Eriksholm Research Centre, Denmark. I completed my Bachelor’s and Master’s degrees in Electrical and Electronic Engineering at Bangladesh University of Engineering and Technology (BUET), and I hold a PhD in Biomedical Engineering from Aalborg University (AAU), Denmark, where my research focused on developing ECG parameters to identify drug-induced cardiac repolarization changes. Currently, as part of the COCOHA project, I am working on an Extended Kalman Filter (EKF)-based sensor fusion algorithm. The aim is to use accelerometer, gyroscope and magnetometer signals from 9-DOF sensors to track the orientation of the head, which is required to determine the absolute gaze of a person and use this information to steer a hearing aid.
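A full EKF is too long for a short sketch, but the underlying fusion idea — let the gyroscope track fast head movements while the accelerometer’s gravity reading corrects the slow drift — can be illustrated with a complementary filter, a deliberate simplification of the EKF approach. All signals and parameters here are simulated, not from the actual sensor hardware:

```python
import numpy as np

def complementary_pitch(gyro_y, acc_x, acc_z, dt=0.01, alpha=0.98):
    """Fuse gyro rate (rad/s) with accelerometer tilt to track pitch (rad):
    high-pass the integrated gyro, low-pass the accelerometer estimate."""
    pitch, out = 0.0, []
    for g, ax, az in zip(gyro_y, acc_x, acc_z):
        acc_pitch = np.arctan2(-ax, az)                     # tilt from gravity
        pitch = alpha * (pitch + g * dt) + (1 - alpha) * acc_pitch
        out.append(pitch)
    return np.array(out)

# Simulated 10 s at 100 Hz: head held at a constant 0.2 rad pitch,
# gyro with bias and noise, accelerometer with noise.
rng = np.random.default_rng(2)
n, true_pitch = 1000, 0.2
gyro = 0.02 + 0.01 * rng.standard_normal(n)            # biased, noisy rate
acc_x = -9.81 * np.sin(true_pitch) + 0.2 * rng.standard_normal(n)
acc_z = 9.81 * np.cos(true_pitch) + 0.2 * rng.standard_normal(n)

est = complementary_pitch(gyro, acc_x, acc_z)          # converges near 0.2 rad
```

An EKF generalizes this: it estimates the full orientation (and gyro biases) as a state vector, uses the magnetometer for heading, and weights each sensor by its modeled noise covariance rather than by a fixed blending constant.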
|Sergi Rotger Griful – eHealth Researcher
Sergi Rotger Griful is a member of the eHealth group at Eriksholm Research Centre in Denmark. He received his Industrial Engineering degree, specialized in Automation and Control, from the Technical University of Catalonia, Barcelona, Spain, in 2012, and completed his PhD in computer engineering at the Department of Engineering at Aarhus University, Denmark, in 2016. During his PhD, he worked on the system design and evaluation of demand response provision in residential homes to support the integration of renewable energies. His research interests include smart grids, demand response, electricity markets, eHealth and eAudiology. For more information, see his personal website or his LinkedIn profile.