Computational Analysis of Sound Scenes and Events

Hardcover(1st ed. 2018)

Product Details

ISBN-13: 9783319634494
Publisher: Springer International Publishing
Publication date: 09/22/2017
Edition description: 1st ed. 2018
Pages: 422
Product dimensions: 6.10(w) x 9.25(h)

About the Author

Tuomas Virtanen is Professor at the Laboratory of Signal Processing, Tampere University of Technology (TUT), Finland, where he leads the Audio Research Group. He received the M.Sc. and Doctor of Science degrees in information technology from TUT in 2001 and 2006, respectively. He has also worked as a research associate at the Cambridge University Engineering Department, UK. He is known for his pioneering work on single-channel sound source separation using non-negative matrix factorization-based techniques, and their application to noise-robust speech recognition, music content analysis, and audio event detection. In addition to the above topics, his research interests include content analysis of audio signals in general and machine learning. He has authored more than 100 scientific publications on these topics, which have been cited more than 5000 times. He received the IEEE Signal Processing Society 2012 Best Paper Award for his article "Monaural Sound Source Separation by Nonnegative Matrix Factorization with Temporal Continuity and Sparseness Criteria", as well as three other best paper awards. He is an IEEE Senior Member, a member of the Audio and Acoustic Signal Processing Technical Committee of the IEEE Signal Processing Society, an Associate Editor of IEEE/ACM Transactions on Audio, Speech, and Language Processing, and a recipient of a 2014 ERC Starting Grant.

Mark Plumbley is Professor of Signal Processing at the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey in Guildford, UK. After receiving his Ph.D. degree in neural networks in 1991, he became a Lecturer at King's College London, before moving to Queen Mary University of London in 2002. He subsequently became Professor and Director of the Centre for Digital Music, before joining the University of Surrey in 2015. He is known for his work on the analysis and processing of audio and music, using a wide range of signal processing techniques, including independent component analysis, sparse representations, and deep learning. He is also keen to promote the importance of research software and data in audio and music research, including training researchers to follow the principles of reproducible research, and he led the 2013 D-CASE data challenge on Detection and Classification of Acoustic Scenes and Events. He currently leads two EU-funded research training networks in sparse representations, compressed sensing, and machine sensing, and two major UK-funded projects on audio source separation and on making sense of everyday sounds. He is a Fellow of the IET and the IEEE.

Dan Ellis joined Google Inc. in 2015 as a Research Scientist after spending 15 years as a tenured professor in the Electrical Engineering department of Columbia University, where he founded and led the Laboratory for Recognition and Organization of Speech and Audio (LabROSA), which conducted research into all aspects of extracting information from sound. He is also an External Fellow of the International Computer Science Institute in Berkeley, CA, where he researched approaches to robust speech recognition. He is known for his contributions to computational auditory scene analysis, and for developing and transferring techniques across all kinds of audio processing, including speech, music, and environmental sounds. He has a long track record of supporting the community through public releases of code and data, including the Million Song Dataset of features and metadata for one million pop music tracks, which has become a standard large-scale research set in the Music Information Retrieval field.

Table of Contents

1 Introduction to sound scene and event analysis
2 The Machine Learning Approach for Analysis of Sound Scenes and Events
3 Acoustics and psychoacoustics of sound scenes and events
4 Acoustic features for environmental sound analysis
5 Statistical Methods for Scene and Event Classification
6 Datasets and evaluation
7 Everyday Sound Categorization
8 Approaches to complex sound scene analysis
9 Multiview approaches to event detection and scene analysis
10 Sound sharing and retrieval
11 Computational bioacoustic scene analysis
12 Audio Event Recognition in the Smart Home
13 Sound Analysis in Smart Cities
14 Future Perspective
