Multimodal Scene Understanding: Algorithms, Applications and Deep Learning

Edited by Michael Ying Yang, Bodo Rosenhahn, Vittorio Murino
English Language, Paperback – 17 July 2019
Multimodal Scene Understanding: Algorithms, Applications and Deep Learning presents recent advances in multi-modal computing, with a focus on computer vision and photogrammetry. It provides the latest algorithms and applications that involve combining multiple sources of information and describes the role and approaches of multi-sensory data and multi-modal deep learning. The book is ideal for researchers from the fields of computer vision, remote sensing, robotics, and photogrammetry, thus helping foster interdisciplinary interaction and collaboration between these realms.
Researchers collecting and analyzing multi-sensory datasets (for example, the KITTI benchmark, which combines stereo and laser data) from different platforms, such as autonomous vehicles, surveillance cameras, UAVs, planes and satellites, will find this book very useful.


  • Contains state-of-the-art developments on multi-modal computing
  • Focuses on algorithms and applications
  • Presents novel deep learning topics on multi-sensor fusion and multi-modal deep learning

Price: 603.73 lei

Old price: 941.08 lei
-36% New

Express Points: 906

Estimated price in foreign currency:
115.54€ 119.20$ 97.79£

Print-on-demand book

Economy delivery 25 February - 11 March

Phone orders: 021 569.72.76

Specifications

ISBN-13: 9780128173589
ISBN-10: 0128173580
Pages: 422
Dimensions: 191 x 235 mm
Weight: 0.73 kg
Publisher: ELSEVIER SCIENCE

Contents

1. Introduction to Multimodal Scene Understanding
Michael Ying Yang, Bodo Rosenhahn and Vittorio Murino
2. Multi-modal Deep Learning for Multi-sensory Data Fusion
Asako Kanezaki, Ryohei Kuga, Yusuke Sugano and Yasuyuki Matsushita
3. Multi-Modal Semantic Segmentation: Fusion of RGB and Depth Data in Convolutional Neural Networks
Zoltan Koppanyi, Dorota Iwaszczuk, Bing Zha, Can Jozef Saul, Charles K. Toth and Alper Yilmaz
4. Learning Convolutional Neural Networks for Object Detection with very little Training Data
Christoph Reinders, Hanno Ackermann, Michael Ying Yang and Bodo Rosenhahn
5. Multi-modal Fusion Architectures for Pedestrian Detection
Dayan Guan, Jiangxin Yang, Yanlong Cao, Michael Ying Yang and Yanpeng Cao
6. ThermalGAN: Multimodal Color-to-Thermal Image Translation for Person Re-Identification in Multispectral Dataset
Vladimir A. Knyaz and Vladimir V. Kniaz
7. A Review and Quantitative Evaluation of Direct Visual-Inertial Odometry
Lukas von Stumberg, Vladyslav Usenko and Daniel Cremers
8. Multimodal Localization for Embedded Systems: A Survey
Imane Salhi, Martyna Poreba, Erwan Piriou, Valerie Gouet-Brunet and Maroun Ojail
9. Self-Supervised Learning from Web Data for Multimodal Retrieval
Raul Gomez, Lluis Gomez, Jaume Gibert and Dimosthenis Karatzas
10. 3D Urban Scene Reconstruction and Interpretation from Multi-sensor Imagery
Hai Huang, Andreas Kuhn, Mario Michelini, Matthias Schmitz and Helmut Mayer
11. Decision Fusion of Remote Sensing Data for Land Cover Classification
Arnaud Le Bris, Nesrine Chehata, Walid Ouerghemmi, Cyril Wendl, Clement Mallet, Tristan Postadjian and Anne Puissant
12. Cross-modal learning by hallucinating missing modalities in RGB-D vision
Nuno Garcia, Pietro Morerio and Vittorio Murino