A visual embedding for the unsupervised extraction of abstract semantics

Authors
García, D.; Ayguade, E.; Labarta, J.; Bejar, J.; Cortes, U.; Suzumura, T.; Chen, R.
Activity type
Journal article
Journal
Cognitive Systems Research
Publication date
2017-05-01
Volume
42
First page
73
Last page
81
DOI
https://doi.org/10.1016/j.cogsys.2016.11.008
Repository
http://hdl.handle.net/2117/100191
URL
http://www.sciencedirect.com/science/article/pii/S1389041716300444
Abstract
Vector-space word representations obtained from neural network models have been shown to enable semantic operations based on vector arithmetic. In this paper, we explore the existence of similar information on vector representations of images. For that purpose we define a methodology to obtain large, sparse vector representations of image classes, and generate vectors through the state-of-the-art deep learning architecture GoogLeNet for 20 K images obtained from ImageNet. We first evaluate the r...
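As an illustration of the vector arithmetic the abstract refers to, the following is a minimal sketch in Python. It assumes the embeddings are available as a dictionary mapping class names to numpy vectors (e.g. produced by a GoogLeNet-style network); the class names and random vectors used below are hypothetical placeholders, not data from the paper.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two embedding vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def analogy(embeddings, a, b, c, top_k=3):
        # Return the classes whose vectors are closest to v(a) - v(b) + v(c),
        # the standard analogy operation used with word embeddings.
        query = embeddings[a] - embeddings[b] + embeddings[c]
        scores = {name: cosine(query, vec)
                  for name, vec in embeddings.items()
                  if name not in (a, b, c)}
        return sorted(scores, key=scores.get, reverse=True)[:top_k]

    # Hypothetical usage with random vectors standing in for real image-class embeddings.
    rng = np.random.default_rng(0)
    embeddings = {name: rng.normal(size=128)
                  for name in ("lion", "cat", "wolf", "dog", "horse")}
    print(analogy(embeddings, "lion", "cat", "dog"))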
Citation
García-Gasulla, D., Ayguade, E., Labarta, J., Bejar, J., Cortes, U., Suzumura, T., Chen, R. A visual embedding for the unsupervised extraction of abstract semantics. "Cognitive Systems Research", 1 May 2017, vol. 42, p. 73-81.
Keywords
Artificial image cognition, Deep learning embeddings, Visual reasoning
Research groups
CAP - Grup de Computació d'Altes Prestacions
IDEAI-UPC Intelligent Data Science and Artificial Intelligence
KEMLG - Grup d'Enginyeria del Coneixement i Aprenentatge Automàtic

Participants