New paper accepted in ICPR 2014 – “Compact Signature-based Compressed Video Matching Using Dominant Colour Profiles (DCP)”

The paper “Compact Signature-based Compressed Video Matching Using Dominant Colour Profiles (DCP)” has been accepted at the ICPR 2014 conference and will be presented in August 2014 in Stockholm, Sweden.

Abstract— This paper presents a technique for efficient and generic matching of compressed video shots, through compact signatures extracted directly without decompression. The compact signature is based on the Dominant Colour Profile (DCP): a sequence of dominant colours extracted and arranged as a sequence of spikes, in analogy to the human retinal representation of a scene. The proposed signature represents a given video shot with ~490 integer values, facilitating real-time processing to retrieve a maximal set of matching videos. The technique works directly on MPEG compressed videos, without full decompression, as it utilises the DC-image as the basis for extracting colour features. The DC-image has a highly reduced size, while retaining most of the visual aspects, and provides high performance compared to the full I-frame. Experiments on various standard datasets show the promising performance of the proposed technique, in both accuracy and computational efficiency.
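The paper's exact extraction procedure is not reproduced in this post, but the core idea of a per-frame dominant-colour "spike" sequence over DC-images can be sketched as follows. This is an illustrative sketch only, assuming coarse RGB quantisation and frequency counting as the dominant-colour step; the function names, bin counts, and padding value are assumptions, not the authors' code:

```python
import numpy as np

def dominant_colours(dc_image, k=4, bins=8):
    """Return the k most frequent quantised colours of a tiny DC-image.

    dc_image: H x W x 3 uint8 array. The DC-image (built from the DC
    coefficients of an MPEG I-frame) is typically ~8x smaller than the
    full frame in each dimension, so this count is cheap.
    """
    # Quantise each RGB channel into `bins` levels, forming coarse colour codes
    q = (dc_image.astype(np.uint16) * bins) // 256        # values in [0, bins)
    codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
    values, counts = np.unique(codes.ravel(), return_counts=True)
    top = values[np.argsort(counts)[::-1][:k]]
    if top.size < k:                                      # pad if too few colours
        top = np.pad(top, (0, k - top.size), constant_values=-1)
    return top.astype(np.int32)                           # k integer "spikes"

def dcp_signature(dc_frames, k=4):
    """Concatenate per-frame dominant colours into a compact integer signature."""
    return np.concatenate([dominant_colours(f, k=k) for f in dc_frames])
```

For a shot of, say, 120 DC-frames with k=4, this yields a 480-integer signature, of the same order as the ~490 values the paper quotes; the paper's own spike arrangement and matching strategy differ in detail.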

Congratulations and well done to Saddam.

Analysis and experimental results of using the DC-image, and comparisons with the full image (I-frame), can be found in “Video matching using DC-image and local features”.



“DC-Image for Real Time Compressed Video Matching” published in Springer Transactions on Engineering Technologies

A new chapter titled “DC-Image for Real Time Compressed Video Matching” has been published in Springer Transactions on Engineering Technologies, 2014.

Well done and congratulations to Saddam Bekhet.

Amjad Altadmri – PhD

Amjad Altadmri passed his PhD viva earlier today, subject to minor amendments.

Thesis Title: “Semantic Video Annotation in Domain-Independent Videos Utilising Similarity and Commonsense Knowledgebases”

Thanks to the external examiner, Dr John Wood from the University of Essex, the internal examiner, Dr Bashir Al-Diri, and the viva chair, Dr Kun Guo.

Congratulations and well done.

All colleagues are invited to join Amjad in celebrating his achievement, tomorrow (Thursday 28th Feb) at 12:00 noon, in our meeting room MC3108, with drinks and light refreshments available.

Best wishes.

New journal paper accepted in “Multimedia Tools and Applications”

A new journal paper has been accepted for publication in the journal “Multimedia Tools and Applications”.

The paper title is “A Framework for Automatic Semantic Video Annotation utilising Similarity and Commonsense Knowledgebases”.


The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing the semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and describing videos. The difference between the low-level visual contents and the corresponding human perception is referred to as the ‘semantic gap’. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any prior information about the analysed video on the one hand, and the huge amount of generic knowledge required on the other.

This paper introduces a framework for the automatic semantic annotation of unconstrained videos. The proposed framework utilises two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledgebases. A commonsense ontology is created by incorporating multiple structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval. White-box tests examine the performance of the individual intermediate layers of the framework, and the evaluation of the results and the statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.
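The two-layer idea described in the abstract can be illustrated with a toy sketch: borrow tags from the visually most similar annotated video, then expand them through commonsense relations. Everything here (the feature vectors, the `relations` map, and both function names) is a hypothetical illustration, not the paper's implementation:

```python
import numpy as np

def nearest_annotated(query_feat, annotated):
    """Layer 1 (visual similarity): return the tags of the annotated
    video whose feature vector is closest to the query's."""
    best = min(annotated, key=lambda v: np.linalg.norm(query_feat - v["feat"]))
    return best["tags"]

def expand_with_commonsense(tags, relations):
    """Layer 2 (annotation analysis): enrich borrowed tags with
    semantically related concepts from a commonsense relation map."""
    expanded = set(tags)
    for tag in tags:
        expanded.update(relations.get(tag, []))
    return sorted(expanded)
```

A real system would replace the nearest-neighbour step with robust shot-level matching, and the toy `relations` dictionary with a large commonsense knowledgebase; the sketch only shows how the two layers compose.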


Well done and congratulations to Amjad Altadmri.

Amr attended the ECM & workshop of the SUS-IT project

Amr Ahmed attended the ECM (Executive Committee Meeting) of his SUS-IT project at Loughborough, and participated in the two-day workshop for the project. This was one of the most important meetings, especially at this late stage of the project, with all work packages represented.

SUS-IT ECM and Workshop in Loughborough - Nov2011
