Showcase Event 2015

Another successful end-of-year annual Showcase event, organised by Amr Ahmed for the fourth year.

The Annual Showcase Event for the School of Computer Science took place at the University of Lincoln on the 6th and 7th of May.

Several companies attended the event, including Google UK, Siemens, QinetiQ, MASS, Artsgraphica and Heritage Lincolnshire.

They were impressed by the level and quality of the work presented, as well as by how the students presented it and engaged with questions.


The DCAPI group also participated in the event, and Saddam won a prize: http://dcapi.blogs.lincoln.ac.uk/2015/05/13/pgrs-showcase-event-2015/

Saddam next to his poster.


 

Hussein presenting his work on computer-aided classification of liver diseases.

 


PhD Studentship – “Object and Action Recognition”

A PhD studentship is advertised on the following topic:

Object and Action Recognition Assisted by Computational Linguistics

The research is a collaboration between Kingston University (Digital Imaging Research Centre) and the University of Lincoln (DCAPI group).

For details of the application process:  http://sec.kingston.ac.uk/research/research-degrees/current-research-opportunities/funded/

Deadline: 20th June

 

Summary:  “Object and Action Recognition Assisted by Computational Linguistics”.

The aim of this project is to investigate how computer vision methods such as object and action recognition may be assisted by computational linguistic models, such as WordNet. The main challenge of object and action recognition is the scalability of methods from dealing with a dozen categories (e.g. PASCAL VOC) to thousands of concepts (e.g. ImageNet ILSVRC). This project is expected to contribute to automated visual content annotation and, more widely, to bridging the semantic gap between computational approaches to vision and language.
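To make the linguistic-assistance idea concrete, here is a minimal sketch (illustrative only, not part of the advertised project) showing how WordNet, accessed through NLTK, can score how related two visual category labels are; the function name and example labels are assumptions.

```python
# Minimal sketch: scoring the relatedness of two category labels with WordNet.
# Requires the WordNet corpus: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def label_similarity(label_a, label_b):
    """Best path similarity between the noun senses of two category labels."""
    scores = [
        sense_a.path_similarity(sense_b)
        for sense_a in wn.synsets(label_a, pos=wn.NOUN)
        for sense_b in wn.synsets(label_b, pos=wn.NOUN)
    ]
    scores = [s for s in scores if s is not None]
    return max(scores) if scores else 0.0

# Related categories score higher than unrelated ones; this kind of signal
# could help prune or re-rank a recogniser's candidate labels.
print(label_similarity("dog", "cat"))    # relatively high
print(label_similarity("dog", "truck"))  # relatively low
```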


Video Matching

Video Matching Using DC-image and Local Features, by Saddam Bekhet, Amr Ahmed and Andrew Hunter.

Paper (PDF): Video Matching Using DC-image and Local Features – WCE2013_pp2209-2214

Video Demos:  … soon..

Poster for the WCE'13 paper that was awarded the "Best Student Paper" award.

INTRODUCTION:

Videos, especially compressed ones, have become a major part of our daily life. With the amount of video growing exponentially*, scientists are being pushed to develop robust tools that can efficiently index and retrieve videos in a way similar to human perception of similarity.

*Based on http://www.youtube.com/yt/press/statistics.html

 

PROBLEM

• Manual annotation is hard work, and annotations are not always available for utilization.
• We need a smarter tagging process for videos.
• With the increasing amount of compressed video, more efficient techniques are required that work directly on compressed files, without the need for decompression.

AIM

Our aim is to build a framework that operates on compressed videos, utilizing the DC-image sequence (approximated in the sketch below).
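For illustration only, the sketch below approximates a DC-image sequence from decoded frames by 8x8 block averaging with OpenCV (the DC coefficient of an 8x8 DCT block is proportional to the block mean). The actual framework extracts DC-images directly from the compressed stream, so treat this decode-then-average version as an assumption-laden stand-in, not the paper's method.

```python
# Approximate DC-image sequence: average each 8x8 block of the decoded frame.
import cv2

def dc_image_sequence(video_path):
    """Yield an approximate DC-image (1/8 resolution, grayscale) per frame."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # INTER_AREA averages the pixels of each 8x8 block, mimicking the DC term.
        dc = cv2.resize(gray, (gray.shape[1] // 8, gray.shape[0] // 8),
                        interpolation=cv2.INTER_AREA)
        yield dc
    cap.release()
```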
CONCLUSION

• The DC-image is suitable for cheap computation and can be used as a basic building block for real-time processing.
• Local features proved to be effective on the DC-image, after our introduced modification (a rough matching sketch follows).
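As a rough illustration of matching local features between DC-image-sized frames, the snippet below uses OpenCV's ORB detector and a brute-force Hamming matcher; the detector choice and scoring are assumptions and do not reproduce the paper's modified pipeline.

```python
# Illustrative local-feature matching between two small grayscale DC-images.
import cv2

def match_dc_images(dc_a, dc_b, max_matches=20):
    """Return a simple similarity score: the number of good ORB matches."""
    orb = cv2.ORB_create(nfeatures=200)
    kp_a, des_a = orb.detectAndCompute(dc_a, None)
    kp_b, des_b = orb.detectAndCompute(dc_b, None)
    if des_a is None or des_b is None:
        return 0  # too few keypoints on such small images
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    return len(matches[:max_matches])
```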

Automatic Semantic Video Annotation


Amjad Altadmri, Amr Ahmed*, Andrew Hunter

Poster – see link below to download the PDF.

 
(Click "Semantic Video Annotation with Knowledge": https://amrahmed.blogs.lincoln.ac.uk/files/2013/03/Semantic-Video-Annotation-with-Knowledge.pdf to download the PDF.)

INTRODUCTION

The volume of video data is growing exponentially. This data needs to be annotated to facilitate search and retrieval, so that we can quickly find a video whenever needed.

Manual annotation, especially for such a volume, is time-consuming and expensive. Hence, automated annotation systems are required.

 

AIM

Automated Semantic Annotation of wide-domain videos (i.e. no domain restrictions). This is an important step towards bridging the “Semantic Gap” in video understanding.

 

METHOD

1. Extract a "video signature" for each video.
2. Match signatures to find the most similar videos, along with their annotations.
3. Analyse and process the obtained annotations, in consultation with common-sense knowledge bases.
4. Produce the suggested annotation (a rough sketch of these steps follows).
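The following is a minimal, hypothetical sketch of the four steps above. The signatures (plain feature sets), the overlap-based similarity, and the dictionary knowledge base are toy stand-ins, not the framework's actual components.

```python
# Toy sketch of the annotation pipeline: match, collect, expand, suggest.
def annotate(query_signature, reference_corpus, knowledge_base, top_k=1):
    """Suggest annotations for a query video signature."""
    # Step 2: rank annotated reference videos by signature overlap with the query.
    ranked = sorted(reference_corpus,
                    key=lambda v: len(query_signature & v["signature"]),
                    reverse=True)[:top_k]
    # Step 3: gather their annotations and expand them via the knowledge base.
    candidates = set()
    for video in ranked:
        for term in video["annotations"]:
            candidates.add(term)
            candidates.update(knowledge_base.get(term, []))
    # Step 4: produce the suggested annotation.
    return sorted(candidates)

# Step 1 (signature extraction) is assumed to have produced these feature sets.
corpus = [
    {"signature": {"grass", "ball", "running"}, "annotations": ["football", "outdoor"]},
    {"signature": {"desk", "screen"},           "annotations": ["office"]},
]
kb = {"football": ["sport", "game"], "office": ["indoor"]}
print(annotate({"grass", "running", "crowd"}, corpus, kb))
# -> ['football', 'game', 'outdoor', 'sport']
```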

EVALUATION

• Two standard and challenging datasets were used: TRECVID BBC Rushes and UCF.
• Black-box and white-box testing were carried out.
• Measures include precision and the confusion matrix (illustrated below).
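As a small illustration of those measures (not the study's own evaluation), the snippet below computes per-class precision and a confusion matrix with scikit-learn on made-up labels.

```python
# Illustrative precision and confusion matrix on hypothetical annotations.
from sklearn.metrics import precision_score, confusion_matrix

y_true = ["sport", "sport", "office", "office", "sport"]   # hypothetical ground truth
y_pred = ["sport", "office", "office", "office", "sport"]  # hypothetical predictions

print(precision_score(y_true, y_pred, average="macro"))
print(confusion_matrix(y_true, y_pred, labels=["sport", "office"]))
```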

CONCLUSION

• Developed an automatic semantic video annotation framework.
• Not restricted to videos of a specific domain.
• Utilising common-sense knowledge enhances scene understanding and improves semantic annotation.
Publications
  1. A framework for automatic semantic video annotation 
    Altadmri, Amjad and Ahmed, Amr (2013) A framework for automatic semantic video annotation. Multimedia Tools and Applications, 64 (2). ISSN 1380-7501.
  2. Semantic levels of domain-independent commonsense knowledgebase for visual indexing and retrieval applications 
    Altadmri, Amjad and Ahmed, Amr and Mohtasseb Billah, Haytham (2012) Semantic levels of domain-independent commonsense knowledgebase for visual indexing and retrieval applications. Neural Information Processing. Lecture Notes in Computer Science, 7663. pp. 640-647. ISSN 0302-9743
  3. VisualNet: commonsense knowledgebase for video and image indexing and retrieval application 
    Alabdullah Altadmri, Amjad and Ahmed, Amr (2009) VisualNet: commonsense knowledgebase for video and image indexing and retrieval application. In: IEEE International Conference on Intelligent Computing and Intelligent Systems, 21-22 November 2009, Shanghai, China.
  4. Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases 
    Altadmri, Amjad and Ahmed, Amr (2009) Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases. In: The IEEE International Conference on Signal and Image Processing Applications (ICSIPA 2009), 18-19th November 2009, Malaysia.
  5. Video databases annotation enhancing using commonsense knowledgebases for indexing and retrieval 
    Altadmri, Amjad and Ahmed, Amr (2009) Video databases annotation enhancing using commonsense knowledgebases for indexing and retrieval. In: The 13th IASTED International Conference on Artificial Intelligence and Soft Computing, September 7-9, 2009, Palma de Mallorca, Spain.