In linguistics, "semantics" refers to the intended meaning of natural language: words, phrases, and sentences. In this dissertation, the concept of "semantics" is defined more generally as the intended meaning of information in all multimedia forms, including text in the language domain as well as still images and dynamic videos in the vision domain. Specifically, semantics in multimedia are the cognitive information, knowledge, and ideas carried by the media content, which can be represented in text, images, and video clips. A narrative story, for example, can be a semantic summary of a novel, or of the movie adapted from that novel. Thus, semantics is high-level abstract knowledge that is independent of multimedia form.
Indeed, the same semantics can be represented either redundantly or concisely, owing to the differing expressive power of multimedia forms. The process by which a redundantly represented semantics evolves into a concisely represented one is called "semantic distillation". This process can occur either between different multimedia forms or within the same form.
The booming growth of unorganized and unfiltered information confronts people with an unwelcome problem, information overload, for which techniques of semantic distillation are in high demand. Fortunately, as opportunities come paired with challenges, machine learning and Artificial Intelligence (AI) are today far more advanced than in the past and provide us with powerful tools and techniques. A large variety of learning methods has turned countless once-impossible tasks into reality. In this dissertation, we therefore leverage machine learning techniques, both supervised and unsupervised, to solve semantic distillation problems.
Despite the promising outlook and the power of machine learning techniques, the heterogeneous forms of multimedia, spanning many domains, still pose challenges to semantic distillation approaches. A major challenge is that the definition of "semantics" and the associated processing techniques can differ entirely from one problem to another. Varying types of multimedia resources introduce varying domain-specific limitations and constraints, so obtaining semantics also becomes domain-specific. Therefore, in this dissertation, taking language (text) and vision as the two major domains, we approach four problems covering all combinations of the two domains:

• Language to Vision Domain: In this study, Presentation Storytelling is formulated as the problem of suggesting the most appropriate images from online sources for storytelling, given a text query. We approach the problem with a two-step semantic processing method: the semantics of a simple query are first expanded into a diverse semantic graph, and then distilled from a large number of retrieved web photos down to a few representative ones. This two-step method is powered by a Conditional Random Field (CRF) model and learned in a supervised manner from human-labeled examples.

• Vision to Language Domain: The second study, Visual Storytelling, formulates the problem of generating a coherent paragraph from a photo stream. Unlike presentation storytelling, visual storytelling runs in the opposite direction: the semantics extracted from a handful of photos are distilled into text. We address this problem by revealing the semantic relationships in the visual domain and distilling them into the language domain with a newly designed Bidirectional Attention Recurrent Neural Network (BARNN) model. In particular, an attention model is embedded in the RNN so that coherence is preserved in the language domain, yielding a human-like story as output. The model is trained with deep supervised learning on public datasets.

• Dedicated Vision Domain: To directly tackle information overload in the vision domain, Image Semantic Extraction formulates the problem of selecting a subset from a user's photo albums. In the literature, this problem has mostly been approached with unsupervised learning. In this dissertation, however, we develop a novel supervised learning method to attack the same problem. We treat visual semantics as a quantifiable, measurable variable and build an encoding-decoding pipeline with Long Short-Term Memory (LSTM) networks to model this quantization. The intuition behind the pipeline is to imitate how humans read, think, and retell: an LSTM encoder first scans all photos to "read" the semantics they contain; a concatenated LSTM decoder then selects the most representative photos to "think" through the gist; finally, a dedicated residual layer revisits the unselected photos to "verify" that the captured semantics are complete.

• Dedicated Language Domain: Distinct from the problems above, this part introduces a different genre of machine learning, unsupervised learning. We address a semantic distillation problem in the language domain, Text Semantic Extraction, in which the semantics of a letter sequence are extracted from printed images.

(Abstract shortened by ProQuest.)
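The read-think-verify pipeline above can be illustrated with a minimal sketch. This is not the dissertation's actual architecture: the `lstm_step` cell, the random-projection scoring (a stand-in for learned attention), the greedy top-k selection, and the residual completeness check are all illustrative assumptions.

```python
import numpy as np

def lstm_step(x, h, c, W):
    """One LSTM cell step; W maps [x; h] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h])
    d = h.size
    i, f, o, g = z[:d], z[d:2*d], z[2*d:3*d], z[3*d:]
    i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
    c = f * c + i * np.tanh(g)
    return o * np.tanh(c), c

def select_subset(photos, k, hidden=8, seed=0):
    """Sketch of read-think-verify: encode all photo features ("read"),
    pick the k photos best aligned with the summary state ("think"),
    then check what the unselected photos leave behind ("verify")."""
    rng = np.random.default_rng(seed)
    d_in = photos.shape[1]
    W = rng.standard_normal((4*hidden, d_in + hidden)) * 0.1
    h, c = np.zeros(hidden), np.zeros(hidden)
    for x in photos:                       # encoder: scan every photo
        h, c = lstm_step(x, h, c, W)
    # "think": score each photo against the summary state via a random
    # projection (illustrative stand-in for learned attention weights)
    P = rng.standard_normal((hidden, d_in)) * 0.1
    scores = photos @ P.T @ h
    chosen = np.argsort(scores)[::-1][:k]
    # residual "verify" pass: do unselected photos still dominate the score?
    leftover = np.delete(scores, chosen)
    complete = bool(leftover.size == 0 or leftover.sum() <= scores[chosen].sum())
    return sorted(chosen.tolist()), complete

photos = np.random.default_rng(1).standard_normal((6, 4))  # 6 toy "photo" feature vectors
subset, ok = select_subset(photos, k=2)
print(subset, ok)
```

In the dissertation's actual design the encoder, decoder, and residual layer are trained end-to-end with supervision; here the weights are random, so only the data flow of the pipeline is demonstrated.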
|Advisor:||Chen, Chang Wen|
|Committee:||Govindaraju, Venugopal, Qiao, Chunming|
|School:||State University of New York at Buffalo|
|Department:||Computer Science and Engineering|
|School Location:||United States -- New York|
|Source:||DAI-B 80/02(E), Dissertation Abstracts International|
|Subjects:||Computer Engineering, Computer science|
|Keywords:||Artificial intelligence, Deep learning, Machine learning, Multimedia processing, Semantics distillation, Unsupervised learning|
Copyright in each Dissertation and Thesis is retained by the author. All Rights Reserved