The human ear, together with the brain's processing of auditory information, is capable of wonders: not only hearing, but also filtering and selecting important sounds from unimportant ones (noise), and spatially locating where sounds come from. A social robot like ARI-SPRING, which must operate in a real-world environment alongside humans, needs similar hearing capabilities, especially for conversation. Speech enhancement aims to give the robot noise-reduced, spatially resolved hearing using an array of several microphones. Learn how in SPRING Technical Seminar #2, "Multi-Microphone Speech Enhancement", by Prof. Sharon Gannot from Bar-Ilan University, on 8 June 2020 (part 1/2) and 18 June 2020 (part 2/2).
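To give a flavour of what "using an array of several microphones" can buy you, here is a minimal sketch of the classic delay-and-sum beamformer, one of the simplest multi-microphone enhancement techniques (not necessarily the methods covered in the seminar). It assumes the per-microphone propagation delays are already known and are whole numbers of samples; the simulated scene, signal, and delay values are illustrative only.

```python
import math
import random

def delay_and_sum(mic_signals, delays):
    """Align each microphone signal by its known delay (in samples)
    and average them: signals add coherently, independent noise does not."""
    num_mics = len(mic_signals)
    n = len(mic_signals[0])
    out = [0.0] * n
    for sig, d in zip(mic_signals, delays):
        for t in range(n):
            if 0 <= t + d < n:
                out[t] += sig[t + d]
    return [v / num_mics for v in out]

# Illustrative scene: a 100 Hz tone (standing in for speech) reaches
# 4 microphones with different integer-sample delays, and each mic
# adds its own independent noise.
random.seed(0)
fs, n = 8000, 800
source = [math.sin(2 * math.pi * 100 * t / fs) for t in range(n)]
delays = [0, 3, 5, 8]  # assumed known, e.g. from source localisation
mics = []
for d in delays:
    sig = [(source[t - d] if t >= d else 0.0) + random.gauss(0.0, 0.5)
           for t in range(n)]
    mics.append(sig)

enhanced = delay_and_sum(mics, delays)

def snr_db(ref, est):
    """Signal-to-error ratio of an estimate against the clean reference."""
    err = [a - b for a, b in zip(ref, est)]
    p_sig = sum(a * a for a in ref)
    p_err = sum(e * e for e in err)
    return 10 * math.log10(p_sig / p_err)

# Averaging M mics ideally improves SNR by 10*log10(M) ≈ 6 dB for M = 4.
single_mic_snr = snr_db(source, mics[0])
enhanced_snr = snr_db(source, enhanced)
```

In practice, real arrays must first estimate the delays (or more general acoustic transfer functions) from the observed signals, and they use fractional-delay filtering rather than integer sample shifts; that estimation problem is a central theme of multi-microphone speech enhancement.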
