
Multimodal Analysis of User-Generated Multimedia Content, Softcover reprint of the original 1st ed. 2017 [Paperback]

  • Format: Paperback / softback, XXII + 263 pages, 63 illustrations (42 in color, 21 black and white), height x width: 235x155 mm, weight: 599 g
  • Series: Socio-Affective Computing 6
  • Publication date: 09-Sep-2018
  • Publisher: Springer International Publishing AG
  • ISBN-10: 3319871684
  • ISBN-13: 9783319871684
This book presents a summary of the multimodal analysis of user-generated multimedia content (UGC). Several multimedia systems and their proposed frameworks are also discussed. First, improved tag recommendation and ranking systems for social media photos, leveraging both content and contextual information, are presented. Next, the challenges in determining semantics and sentics information from UGC to obtain multimedia summaries are discussed. Subsequently, a personalized music video generation system for outdoor user-generated videos is presented. Finally, approaches for multimodal lecture video segmentation are discussed. The book also explores extending these multimedia systems with heterogeneous continuous data streams.
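The tag recommendation and ranking chapters revolve around combining content-based and contextual relevance signals. As a rough, hypothetical sketch of that general idea only (this is not the framework described in the book; the tag names, score values, and the alpha weight below are invented for illustration), a simple late fusion of the two score sources might look like this:

```python
# Hypothetical illustration: late fusion of content-based and context-based
# tag relevance scores for a social media photo, followed by ranking.
# All scores and the fusion weight are made-up example values.

def fuse_tag_scores(content_scores, context_scores, alpha=0.6):
    """Combine two relevance-score dictionaries with a weighted sum.

    alpha weights the content-based scores; (1 - alpha) weights the
    contextual (e.g., time/location metadata) scores. Tags missing from
    one source default to a score of 0.0.
    """
    tags = set(content_scores) | set(context_scores)
    return {
        tag: alpha * content_scores.get(tag, 0.0)
             + (1 - alpha) * context_scores.get(tag, 0.0)
        for tag in tags
    }

if __name__ == "__main__":
    content = {"beach": 0.9, "sunset": 0.7, "dog": 0.2}        # from visual features
    context = {"beach": 0.6, "singapore": 0.8, "sunset": 0.4}  # from photo metadata
    fused = fuse_tag_scores(content, context)
    # Rank tags by fused relevance, highest first.
    for tag, score in sorted(fused.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{tag}: {score:.2f}")
```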
1 Introduction
1.1 Background and Motivation
1.2 Overview
1.3 Acronyms and Notations
1.4 Roadmap
2 Literature Review
2.1 Event Understanding
2.2 Tag Recommendation and Ranking
2.3 Soundtrack Recommendation for UGVs
2.4 Lecture Video Segmentation
3 Event Understanding
3.1 Introduction
3.2 System Overview
3.2.1 EventBuilder
3.2.2 EventSensor
3.3 Evaluation
3.3.1 EventBuilder
3.3.2 EventSensor
3.4 Summary
4 Tag Recommendation and Ranking
4.1 Introduction
4.1.1 Tag Recommendation
4.1.2 Tag Ranking
4.2 System Overview
4.2.1 Tag Recommendation
4.2.2 Random Walk based Relevance Scores
4.2.3 Fusion of Different Tag Recommendation Approaches
4.2.4 Tag Ranking
4.3 Evaluation
4.3.1 Tag Recommendation
4.3.2 Tag Ranking
4.4 Summary
5 Soundtrack Recommendation for UGVs
5.1 Introduction
5.1.1 Increasing Popularity of User-Generated Videos
5.1.2 Challenges with User-Generated Videos in Viewing and Sharing
5.1.3 Motivation for Generating Music Videos for Outdoor User-Generated Videos
5.2 Music Video Generation
5.2.1 Scene Moods Prediction Models
5.2.2 Music Retrieval Techniques
5.2.3 Automatic Music Video Generation Model
5.3 Evaluation
5.3.1 Dataset and Experimental Settings
5.3.2 Evaluation Metrics
5.3.3 Objective Evaluation
5.3.4 Subjective Evaluation
5.3.5 Experimental Results
5.3.6 Comparison with State-of-the-arts
5.3.7 Discussion of Results
5.4 Summary
6 Lecture Video Segmentation
6.1 Introduction
6.2 Lecture Video Segmentation
6.2.1 Prediction of Video Transition Cues using Supervised Learning
6.2.2 Computation of Text Transition Cues using N-gram based Language Model
6.2.3 Computation of SRT Segment Boundaries using the state-of-the-art
6.2.4 Computation of Wikipedia Segment Boundaries
6.2.5 Transition File Generation
6.3 Evaluation
6.3.1 Dataset and Experimental Settings
6.3.2 Results from the ATLAS System
6.3.3 Results from the TRACE System
6.4 Summary
7 Conclusions and Future Work
Rajiv Ratn Shah received his B.Sc. with honors in Mathematics from Banaras Hindu University, India, in 2005, and his M.Tech. in Computer Technology and Applications from Delhi Technological University, India, in 2010. Prior to joining the Indraprastha Institute of Information Technology Delhi (IIIT Delhi), India, as an assistant professor, Dr Shah received his Ph.D. in Computer Science from the National University of Singapore. He is also currently a research fellow at the Living Analytics Research Centre (LARC), Singapore Management University, Singapore. His research interests include the multimodal analysis of user-generated multimedia content in support of social media applications, multimodal event detection and recommendation, and multimedia analysis, search, and retrieval. Dr Shah is the recipient of several awards, including runner-up in the Grand Challenge competition of the ACM International Conference on Multimedia. He reviews for many top-tier international conferences and journals, and has published research in top-tier venues such as Springer MultiMedia Modeling, the ACM International Conference on Multimedia, the IEEE International Symposium on Multimedia, and Elsevier Knowledge-Based Systems.