Amjad Altadmri – PhD

Amjad Altadmri passed his PhD viva earlier today, subject to minor amendments.

Thesis Title: “Semantic Video Annotation in Domain-Independent Videos Utilising Similarity and Commonsense Knowledgebases”

Thanks to the external examiner, Dr John Wood from the University of Essex; the internal examiner, Dr Bashir Al-Diri; and the viva chair, Dr Kun Guo.

Congratulations and well done.

All colleagues are invited to join Amjad in celebrating his achievement tomorrow (Thursday 28th Feb) at 12:00 noon in our meeting room, MC3108. Drinks and light refreshments will be available.

Best wishes.

New Journal Paper Accepted to “Multimedia Tools and Applications”

A new journal paper has been accepted for publication in the journal “Multimedia Tools and Applications”.

The paper title is “A Framework for Automatic Semantic Video Annotation utilising Similarity and Commonsense Knowledgebases”.

Abstract:

The rapidly increasing quantity of publicly available videos has driven research into developing automatic tools for indexing, rating, searching and retrieval. Textual semantic representations, such as tagging, labelling and annotation, are often important factors in the process of indexing any video, because of their user-friendly way of representing the semantics appropriate for search and retrieval. Ideally, this annotation should be inspired by the human cognitive way of perceiving and of describing videos. The difference between the low-level visual contents and the corresponding human perception is referred to as the ‘semantic gap’. Tackling this gap is even harder in the case of unconstrained videos, mainly due to the lack of any previous information about the analyzed video on the one hand, and the huge amount of generic knowledge required on the other.

This paper introduces a framework for the Automatic Semantic Annotation of unconstrained videos. The proposed framework utilizes two non-domain-specific layers: low-level visual similarity matching, and an annotation analysis that employs commonsense knowledgebases. A commonsense ontology is created by incorporating multiple-structured semantic relationships. Experiments and black-box tests are carried out on standard video databases for action recognition and video information retrieval. White-box tests examine the performance of the individual intermediate layers of the framework, and the evaluation of the results and the statistical analysis show that integrating visual similarity matching with commonsense semantic relationships provides an effective approach to automated video annotation.
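To make the two-layer idea in the abstract concrete, here is a minimal illustrative sketch (not the authors' code): annotations are first borrowed from visually similar, already-annotated videos, then filtered by a commonsense knowledgebase. All names, the toy feature vectors, and the tiny relation set are hypothetical stand-ins for real visual features and a real knowledgebase.

```python
# Illustrative two-layer annotation sketch, in the spirit of the framework.
# Layer 1: low-level visual similarity matching against annotated videos.
# Layer 2: commonsense filtering of the candidate annotations.
from math import sqrt

# Toy pre-annotated corpus: video id -> (feature vector, annotations).
ANNOTATED_VIDEOS = {
    "v1": ([0.9, 0.1, 0.0], {"horse", "riding"}),
    "v2": ([0.8, 0.2, 0.1], {"horse", "field"}),
    "v3": ([0.0, 0.1, 0.9], {"swimming", "pool"}),
}

def similarity(a, b):
    """Cosine similarity between two low-level feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def candidate_annotations(query_features, threshold=0.8):
    """Layer 1: pool annotations from sufficiently similar videos."""
    candidates = set()
    for features, annotations in ANNOTATED_VIDEOS.values():
        if similarity(query_features, features) >= threshold:
            candidates |= annotations
    return candidates

# Toy commonsense relations (a real system would query a knowledgebase).
RELATED = {("horse", "riding"), ("horse", "field"), ("swimming", "pool")}

def commonsense_filter(candidates):
    """Layer 2: keep annotations supported by at least one relation."""
    return {a for a in candidates for b in candidates
            if a != b and ((a, b) in RELATED or (b, a) in RELATED)}

query = [0.85, 0.15, 0.05]  # toy features of an unseen, unconstrained video
print(sorted(commonsense_filter(candidate_annotations(query))))
```

The sketch only conveys the division of labour between the two non-domain-specific layers; the paper's actual similarity measures, ontology construction, and evaluation are far richer.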

 

Well done and congratulations to Amjad Altadmri.

Two Presentations and Posters in the Vision & Language Network workshop

Three members of the Lincoln School of Computer Science and the DCAPI group attended the Vision & Language (V&L) Network workshop, 13-14th Dec. 2012, in Sheffield, UK.

Amr Ahmed, Amjad Altadmri and Deema AbdalHafeth attended the event, where Amjad and Deema delivered two oral presentations and two posters about their research work:

1. VisualNet: Semantic Commonsense Knowledgebase for Visual Applications
2. Investigating text analysis of user-generated contents for health related applications

Abstracts are available at http://www.vlnet.org.uk/VLW12/VLW-2012-Accepted-Abstracts.html

Congratulations to all involved.

Amjad Altadmri and Amr Ahmed around their poster at the Vision & Language Net workshop, 13-14th Dec 2012, Sheffield, UK.
Deema AbdalHafeth and Amr Ahmed at the Vision & Language Net workshop, 13-14th Dec 2012, Sheffield, UK.


The event included tutorial sessions (Vision for Language People, and Language for Vision People). We had an increased presence this year.

We also had a good presence in last year’s workshop (https://amrahmed.blogs.lincoln.ac.uk/2011/09/19/vl-network-workshop-brighton/), with good discussions and useful feedback on the presented work.

Two posters & presentations accepted for the V&L Net Workshop, Dec. 2012

We have just had two posters and oral presentations accepted for the upcoming Vision & Language (V&L) Network workshop, 13-14th Dec. 2012, in Sheffield, UK. This is a good representation from Lincoln (and from the DCAPI group).

1. VisualNet: Semantic Commonsense Knowledgebase for Visual Applications
2. Investigating text analysis of user-generated contents for health related applications

Congratulations to all involved.

We had a good presence in last year’s workshop (https://amrahmed.blogs.lincoln.ac.uk/2011/09/19/vl-network-workshop-brighton/), with good discussions and useful feedback on the presented work.

Looking forward to a similar, if not better, experience this year.

Best wishes for the presentations.