A differentiable BLEU loss. Analysis and first results

Author
Casas, N.; Fonollosa, José A. R.; Ruiz, M.
Type of activity
Presentation of work at congresses
Name of edition
6th International Conference on Learning Representations
Date of publication
2018
Presentation's date
2018-05-02
Book of congress proceedings
Sixth International Conference on Learning Representations: Monday April 30-Thursday May 03, 2018, Vancouver Convention Center, Vancouver: [proceedings]
Project funding
Autonomous lifelong learning intelligent systems
Deep learning technologies applied to speech and audio processing (Tecnologías de aprendizaje profundo aplicadas al procesado de voz y audio)
Repository
http://hdl.handle.net/2117/117201
URL
https://openreview.net/forum?id=HkG7hzyvf
Abstract
In natural language generation tasks, like neural machine translation and image captioning, there is usually a mismatch between the optimized loss and the de facto evaluation criterion, namely token-level maximum likelihood and corpus-level BLEU score. This article tries to reduce this gap by defining differentiable computations of the BLEU and GLEU scores. We test this approach on simple tasks, obtaining valuable lessons on its potential applications but also its pitfalls, mainly that these los...
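The abstract describes replacing the hard n-gram matching inside BLEU/GLEU with a smooth computation that gradients can flow through. As an illustration only (not the authors' implementation), a minimal sketch of the core idea: the model outputs a distribution over tokens at each position, and the hard "n-gram matches" count is replaced by its expectation, the product of the probabilities of the reference n-gram's tokens. The function name and the omission of clipping and the brevity penalty are simplifications for this sketch.

```python
import numpy as np

def soft_ngram_precision(probs, reference, n):
    """Differentiable surrogate for BLEU's n-gram precision (sketch).

    probs:     (T, V) array; row t is the model's token distribution
               at output position t (rows sum to 1).
    reference: list of reference token ids.
    n:         n-gram order.

    Each candidate position t contributes the probability that positions
    t..t+n-1 spell out some reference n-gram, i.e. a product of per-token
    probabilities. This expected match count is smooth in `probs`, unlike
    the argmax-based count used by standard BLEU. Clipping of repeated
    n-grams and the brevity penalty are omitted for brevity.
    """
    T = probs.shape[0]
    ref_ngrams = {tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1)}
    matches = 0.0
    for t in range(T - n + 1):
        matches += sum(np.prod([probs[t + k, g[k]] for k in range(n)])
                       for g in ref_ngrams)
    return matches / (T - n + 1)

# With one-hot predictions that reproduce the reference exactly,
# the soft precision recovers the hard value of 1.0.
probs = np.eye(4)                     # predicts tokens 0, 1, 2, 3
p1 = soft_ngram_precision(probs, [0, 1, 2, 3], 1)
p2 = soft_ngram_precision(probs, [0, 1, 2, 3], 2)
```

In a real training setup `probs` would come from a softmax over decoder logits, so the loss (e.g. one minus a geometric mean of these precisions) is differentiable end to end.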
Citation
Casas, N., Fonollosa, José A. R., Ruiz, M. A differentiable BLEU loss. Analysis and first results. A: International Conference on Learning Representations. "ICLR 2018 Workshop Track: 6th International Conference on Learning Representations: Vancouver Convention Center, Vancouver, BC, Canada: April 30-May 3, 2018". 2018.
Keywords
BLEU, Differentiable, GLEU, NMT, Seq2seq
Group of research
IDEAI-UPC - Intelligent Data Science and Artificial Intelligence Research Center
TALP - Centre for Language and Speech Technologies and Applications
VEU - Speech Processing Group
