Abstract:
Automatic image annotation is a significant and challenging problem in pattern recognition and computer vision. Existing models do not describe the visual representations of the corresponding keywords, which leads to many irrelevant annotations in the final results: annotations that do not relate to any part of the image in terms of visual content. To overcome these problems, we propose a new automatic image annotation model (NAVK) based on relevant visual keywords. Our model focuses on non-abstract words. First, we establish visual keyword seeds for each non-abstract word, and we propose a new method to extract visual keyword collections from the corresponding seeds. Second, we propose an adaptive parameter method and a fast solution algorithm to determine the similarity threshold of each keyword. Finally, these methods are combined to improve annotation performance. Experimental results verify the effectiveness of the proposed image annotation model. © 2015 Taylor & Francis Group, London.
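The abstract only sketches the pipeline (keyword seeds, extracted visual keyword collections, per-keyword similarity thresholds). The snippet below is a minimal illustrative sketch of that general idea, not the authors' NAVK algorithm: the feature representation, the cosine-similarity measure, and the function names (cosine_similarity, annotate) and example thresholds are all assumptions introduced here for clarity.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two feature vectors (illustrative choice of measure).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def annotate(image_features: np.ndarray,
             keyword_seeds: dict[str, np.ndarray],
             thresholds: dict[str, float]) -> list[str]:
    # Keep a keyword if the best match between any image region feature and any
    # of that keyword's seed vectors exceeds the keyword's own threshold
    # (standing in for the paper's adaptively determined similarity thresholds).
    labels = []
    for keyword, seeds in keyword_seeds.items():
        best = max(cosine_similarity(region, seed)
                   for region in image_features
                   for seed in seeds)
        if best >= thresholds[keyword]:
            labels.append(keyword)
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: 3 region features per image, seed vectors per keyword (dim 8).
    image = rng.normal(size=(3, 8))
    seeds = {"tiger": rng.normal(size=(2, 8)), "grass": image[:1] + 0.01}
    thresholds = {"tiger": 0.8, "grass": 0.8}   # hypothetical per-keyword thresholds
    print(annotate(image, seeds, thresholds))   # likely ['grass']

Filtering annotations through per-keyword thresholds in this way discards keywords whose visual seeds match no region of the image, which is the kind of irrelevant annotation the abstract says NAVK aims to suppress.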
Year: 2015
Page: 243-248
Language: English