Overview

  • Project funded by ASEM DUO.
  • Project duration: Jan 2020 to Dec 2021

Description

Originating Institute (Country): IIT Kanpur (India)                

Destination Institute (Country): University of Surrey (UK) 

Exchange Period: 22 September 2022 to 16 October 2022                    

Major: Machine Learning for Audio Signals             

Purpose of Exchange: Research collaboration with Prof. Wang, CVSSP            

The goal of the program was to explore deep embeddings for auditory scene analysis. Simultaneously occurring sounds add together in the auditory scene while each retains its individual identity. Because this information survives in superposed form, human ears can usually identify and separate the individual sounds in a mixture, although this remains difficult for machines. This contrasts with the visual domain, where objects typically occlude one another, so the information about the hidden object is lost. A minimal sketch of this additive-superposition premise is given below.
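The following sketch (not part of the project's code; signal parameters such as the sample rate and tone frequencies are illustrative assumptions) shows two sources mixing additively and a toy frequency-domain binary mask recovering one of them, which is the intuition behind embedding- and mask-based separation.

    # Illustrative sketch: additive superposition of sounds and toy mask-based separation
    import numpy as np

    sr = 16000                       # sample rate in Hz (assumed for illustration)
    t = np.arange(sr) / sr           # one second of time samples

    # Two "sources": pure tones at 440 Hz and 880 Hz
    s1 = 0.5 * np.sin(2 * np.pi * 440 * t)
    s2 = 0.5 * np.sin(2 * np.pi * 880 * t)

    # In the auditory domain the sources simply add; each retains its identity in the sum
    mixture = s1 + s2

    # A toy ideal binary mask in the frequency domain recovers s1,
    # because here the two sources occupy different spectral bins
    M = np.fft.rfft(mixture)
    S1 = np.fft.rfft(s1)
    S2 = np.fft.rfft(s2)
    mask = (np.abs(S1) > np.abs(S2)).astype(float)
    s1_hat = np.fft.irfft(mask * M, n=len(mixture))

    print("max reconstruction error for s1:", np.max(np.abs(s1_hat - s1)))

Real auditory scenes overlap in time and frequency, which is why learned deep embeddings, rather than an oracle mask as above, are needed to group time-frequency regions by source.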

Talks

  • Talk on “Model Adaptation for Learning from Small Data”
    Video: https://www.youtube.com/watch?v=-_AS8_NNtWw
    Oct 2022, Queen Mary University of London

  • Talk on “Learning with Little Data by Model Adaptation: Applications in Music, Sensors and Generative ML”
    Oct 2022, CVSSP, University of Surrey