Low-quality multimedia data (e.g., images and videos with low resolution, poor illumination, defects, or blur) often poses a challenge for content understanding, since many visual algorithms are developed on clear image or video data captured at high resolution and under good visibility. Taking lighting conditions as an example, early research on recognition and other specific tasks focused on high-quality images of daytime scenes with good illumination. In practice, however, more than 90% of criminal activity occurs in low-quality nighttime scenarios, in which the image/video data collected by surveillance systems has low contrast and poor quality.
To alleviate this problem, data enhancement techniques (super-resolution, low-light enhancement, deraining, and inpainting) have been developed to restore low-quality multimedia data. Efforts are also being made to develop content understanding algorithms that are robust to adverse weather and lighting conditions, as well as tools for visual data quality assessment. Even though these topics are mostly studied independently, they are tightly related in ensuring robust understanding of multimedia content. This special session therefore aims to inspire researchers from both academia and industry and to facilitate research in computer vision and multimedia toward robust understanding of low-quality data in the broader context of multimedia applications. The aims of this special session are to: 1) bring together leading experts from academia and industry to discuss the current state of the art, challenges, and future steps in quality enhancement and assessment for low-quality multimedia data understanding; 2) call for a coordinated effort to understand the opportunities and challenges emerging in quality enhancement and assessment of low-quality multimedia data; 3) identify key tasks and evaluate state-of-the-art methods; 4) present innovative methodologies and ideas; 5) propose new real-world low-quality multimedia datasets and discuss future directions. To this end, we seek original research papers in, but not limited to, the following areas:
Submission website: https://cmt3.research.microsoft.com/ICME2023
After signing in to the ICME 2023 submission site as an author, please choose our Special Session name when submitting your paper. Papers must be no longer than 6 pages, including all text, figures, and references. ICME 2023 reviewing is double-blind, which means that authors cannot know the names of the reviewers of their papers, and reviewers cannot know the names of the authors. Information that may identify the authors anywhere in the submitted materials must be avoided. In particular, in the submitted PDF paper, the usual list of authors, their institutions, and their contact information must be replaced by the phrase "Anonymous ICME Submission." Identifying information in the acknowledgments (e.g., co-workers and grant IDs), supplemental materials (e.g., titles in videos, or attached papers), and links to the authors' or their institutions' websites must also be avoided.
Please refer to the main conference site for more submission policies on blinding, supplemental material, presentation guarantee, etc.
liang.liao AT ntu.edu.sg
ldli AT xidian.edu.cn
mkastner AT i.kyoto-u.ac.jp
chaofeng.chen AT ntu.edu.sg
satoh AT nii.ac.jp
wslin AT ntu.edu.sg