In this paper, we describe a novel system that identifies relations between objects extracted from an image. We started from the idea that, in addition to the geometric and visual properties of the image objects, we could exploit lexical and semantic information from the text accompanying the image. As an experimental setup, we gathered a corpus of images from Wikipedia together with their associated articles. We extracted two types of objects, human beings and horses, and we considered three relations that could hold between them: Ride, Lead, or None. We used geometric features as a baseline to identify the relations between the entities, and we describe the improvements brought by the addition of bag-of-word features and predicate–argument structures derived from the text. The best semantic model resulted in a relative error reduction of more than 18% over the baseline.
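The combination of feature types described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the bounding-box format, vocabulary, and example caption are hypothetical, standing in for the geometric cues and bag-of-word counts that the classifier would consume.

```python
def geometric_features(person_box, horse_box):
    """Simple geometric cues between two boxes given as (x, y, w, h),
    with y growing downward: centre offsets and intersection area."""
    px, py, pw, ph = person_box
    hx, hy, hw, hh = horse_box
    dx = (px + pw / 2) - (hx + hw / 2)   # horizontal centre offset
    dy = (py + ph / 2) - (hy + hh / 2)   # vertical centre offset
    ox = max(0, min(px + pw, hx + hw) - max(px, hx))
    oy = max(0, min(py + ph, hy + hh) - max(py, hy))
    return [dx, dy, ox * oy]             # offset pair + overlap area

def bow_features(text, vocabulary):
    """Bag-of-words counts of the accompanying text over a fixed vocabulary."""
    tokens = text.lower().split()
    return [tokens.count(word) for word in vocabulary]

# Hypothetical example: feature vector for one (person, horse) pair.
vocab = ["rides", "riding", "leads", "leading", "horse"]
caption = "A jockey riding his horse at the race"
person = (40, 10, 20, 30)   # illustrative bounding boxes (x, y, w, h)
horse = (30, 25, 50, 40)
features = geometric_features(person, horse) + bow_features(caption, vocab)
```

The concatenated vector `features` would then be fed to a standard classifier (for the Ride / Lead / None decision), with the geometric part alone serving as the baseline configuration.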
|Title of host publication||Proceedings of the 3rd International Conference on Pattern Recognition Applications and Methods|
|Status||Published - 2014|
|Event||3rd International Conference on Pattern Recognition Applications and Methods (ICPRAM 2014) - Angers, France|
Duration: 6 March 2014 → 8 March 2014
|Conference||3rd International Conference on Pattern Recognition Applications and Methods (ICPRAM 2014)|
|Period||2014/03/06 → 2014/03/08|
- Computer Science