
A visual embedding for the unsupervised extraction of abstract semantics

Author
Garcia-Gasulla, D.; Ayguade, E.; Labarta, J.; Bejar, J.; Cortes, U.; Suzumura, T.; Chen, R.
Type of activity
Journal article
Journal
Cognitive systems research
Date of publication
2017-05-01
Volume
42
First page
73
Last page
81
DOI
https://doi.org/10.1016/j.cogsys.2016.11.008
Repository
http://hdl.handle.net/2117/100191
URL
http://www.sciencedirect.com/science/article/pii/S1389041716300444
Abstract
Vector-space word representations obtained from neural network models have been shown to enable semantic operations based on vector arithmetic. In this paper, we explore the existence of similar information on vector representations of images. For that purpose we define a methodology to obtain large, sparse vector representations of image classes, and generate vectors through the state-of-the-art deep learning architecture GoogLeNet for 20 K images obtained from ImageNet. We first evaluate the r...
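The semantic vector arithmetic the abstract alludes to can be illustrated with a minimal toy sketch. The 3-d vectors and the word analogy below are invented purely for illustration; the paper itself works with large, sparse image-class embeddings built from GoogLeNet activations over ImageNet, not with these values.

```python
import math

# Invented toy embeddings -- NOT the paper's data. They only illustrate
# the kind of arithmetic ("a - b + c lands near d") the abstract refers to.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# "king" - "man" + "woman" should land closest to "queen" in this toy space.
query = [k - m + w for k, m, w in zip(embeddings["king"],
                                      embeddings["man"],
                                      embeddings["woman"])]
nearest = max(embeddings, key=lambda word: cosine(query, embeddings[word]))
print(nearest)  # -> queen
```

The same nearest-neighbour query over image-class embeddings is what lets the authors probe whether visual representations carry analogous abstract semantics.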
Citation
García-Gasulla, D., Ayguade, E., Labarta, J., Bejar, J., Cortes, C., Suzumura, T., Chen, R. A visual embedding for the unsupervised extraction of abstract semantics. "Cognitive systems research", 1 May 2017, vol. 42, p. 73-81.
Keywords
Artificial image cognition, Deep learning embeddings, Visual reasoning
Group of research
CAP - High Performance Computing Group
IDEAI-UPC - Intelligent Data Science and Artificial Intelligence Research Center
KEMLG - Knowledge Engineering and Machine Learning Group