Multimodal Scene Understanding: Algorithms, Applications and Deep Learning
Multimodal Scene Understanding: Algorithms, Applications and Deep Learning presents recent advances in multi-modal computing, with a focus on computer vision and photogrammetry. It provides the latest algorithms and applications that combine multiple sources of information, and describes the role and approaches of multi-sensory data and multi-modal deep learning. The book is ideal for researchers from the fields of computer vision, remote sensing, robotics, and photogrammetry, helping to foster interdisciplinary interaction and collaboration between these communities. Researchers collecting and analyzing multi-sensory data (for example, the KITTI benchmark, which combines stereo cameras and laser scanning) from platforms such as autonomous vehicles, surveillance cameras, UAVs, planes, and satellites will find this book very useful.

- Contains state-of-the-art developments in multi-modal computing
- Focuses on algorithms and applications
- Presents novel deep learning topics on multi-sensor fusion and multi-modal deep learning

eBook

$140.00 




Product Details

ISBN-13: 9780128173596
Publisher: Elsevier Science & Technology Books
Publication date: 07/16/2019
Sold by: Barnes & Noble
Format: eBook
Pages: 422
File size: 87 MB
Note: This product may take a few minutes to download.

About the Author

Michael Ying Yang is Assistant Professor at the University of Twente (the Netherlands), where he heads a group working on scene understanding. He received his PhD degree (summa cum laude) from the University of Bonn (Germany) in 2011. His research interests lie in computer vision and photogrammetry, with a specialization in scene understanding, deep learning, UAV vision, and multi-sensor fusion. He has published over 90 articles in international journals and conference proceedings. He serves as co-chair of ISPRS working group II/5 on Dynamic Scene Analysis, and is a recipient of the ISPRS President's Honorary Citation (2016) and the Best Science Paper Award at BMVC 2016. He has been a Senior Member of the IEEE since 2016, and regularly serves as a program committee member for conferences and as a reviewer for international journals.
Bodo Rosenhahn's work has received several awards, including the DAGM Prize 2002, the Dr.-Ing. Siegfried Werth Prize 2003, the DAGM Main Prize 2005, an IVCNZ Best Student Paper Award, the DAGM Main Prize 2007, the Olympus Prize 2007, the ICPRAM Best Student Paper Award 2014, the ICMC Best Student Paper Award 2014, the WACV 2015 Challenge Award, the Günter Enderle Award (Eurographics) 2017, and the CVPR 2017 Multi-Object Tracking Challenge. In 2011, the European Commission awarded him an ERC Starting Grant of 1.43 million euros, followed in 2013 by a Proof of Concept (POC) Grant. He has published more than 180 research papers, journal articles, and book chapters, holds more than 10 patents, and has edited several books.
Vittorio Murino is a full professor at the University of Verona, Italy, and director of the PAVIS (Pattern Analysis and Computer Vision) department at the Istituto Italiano di Tecnologia. He received his Laurea degree in Electronic Engineering in 1989 and his PhD in Electronic Engineering and Computer Science in 1993, both from the University of Genova, Italy. His main research interests include computer vision and pattern recognition/machine learning, in particular probabilistic techniques for image and video processing, with applications to video surveillance, biomedical image analysis, and bioinformatics.

Table of Contents

1. Introduction to Multimodal Scene Understanding (Michael Ying Yang, Bodo Rosenhahn and Vittorio Murino)
2. Multi-modal Deep Learning for Multi-sensory Data Fusion (Asako Kanezaki, Ryohei Kuga, Yusuke Sugano and Yasuyuki Matsushita)
3. Multi-Modal Semantic Segmentation: Fusion of RGB and Depth Data in Convolutional Neural Networks (Zoltan Koppanyi, Dorota Iwaszczuk, Bing Zha, Can Jozef Saul, Charles K. Toth and Alper Yilmaz)
4. Learning Convolutional Neural Networks for Object Detection with Very Little Training Data (Christoph Reinders, Hanno Ackermann, Michael Ying Yang and Bodo Rosenhahn)
5. Multi-modal Fusion Architectures for Pedestrian Detection (Dayan Guan, Jiangxin Yang, Yanlong Cao, Michael Ying Yang and Yanpeng Cao)
6. ThermalGAN: Multimodal Color-to-Thermal Image Translation for Person Re-Identification in Multispectral Dataset (Vladimir A. Knyaz and Vladimir V. Kniaz)
7. A Review and Quantitative Evaluation of Direct Visual-Inertial Odometry (Lukas von Stumberg, Vladyslav Usenko and Daniel Cremers)
8. Multimodal Localization for Embedded Systems: A Survey (Imane Salhi, Martyna Poreba, Erwan Piriou, Valerie Gouet-Brunet and Maroun Ojail)
9. Self-Supervised Learning from Web Data for Multimodal Retrieval (Raul Gomez, Lluis Gomez, Jaume Gibert and Dimosthenis Karatzas)
10. 3D Urban Scene Reconstruction and Interpretation from Multi-sensor Imagery (Hai Huang, Andreas Kuhn, Mario Michelini, Matthias Schmitz and Helmut Mayer)
11. Decision Fusion of Remote Sensing Data for Land Cover Classification (Arnaud Le Bris, Nesrine Chehata, Walid Ouerghemmi, Cyril Wendl, Clement Mallet, Tristan Postadjian and Anne Puissant)
12. Cross-modal Learning by Hallucinating Missing Modalities in RGB-D Vision (Nuno Garcia, Pietro Morerio and Vittorio Murino)

What People are Saying About This

From the Publisher

A unique presentation of multi-sensory data and multi-modal deep learning
