
E-PUR: an energy-efficient processing unit for recurrent neural networks

Author
Silfa, F.A.; Dot, G.; Arnau, J.; Gonzalez, A.
Type of activity
Presentation of work at congresses
Name of edition
27th International Conference on Parallel Architectures and Compilation Techniques (PACT)
Date of publication
2018
Presentation's date
2018-11-01
Book of congress proceedings
Proceedings of the 27th International Conference on Parallel Architectures and Compilation Techniques
First page
1
Last page
12
DOI
https://doi.org/10.1145/3243176.3243184
Project funding
Intelligent, Ubiquitous and Energy-Efficient Computing Systems
Repository
http://hdl.handle.net/2117/127819
URL
https://dl.acm.org/citation.cfm?id=3243184
Abstract
Recurrent Neural Networks (RNNs) are a key technology for emerging applications such as automatic speech recognition, machine translation and image description. Long Short-Term Memory (LSTM) networks are the most successful RNN implementation, as they can learn long-term dependencies to achieve high accuracy. Unfortunately, the recurrent nature of LSTM networks significantly constrains the amount of parallelism and, hence, multicore CPUs and many-core GPUs exhibit poor efficiency for RNN inference...
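The sequential bottleneck the abstract refers to can be seen in a minimal LSTM inference loop: each time step consumes the hidden and cell state produced by the previous one, so the steps cannot run in parallel across time. The sketch below is an illustrative NumPy implementation of a standard LSTM cell, not code from the paper; all names (`lstm_step`, `lstm_infer`, the stacked weight layout `W`, `U`, `b`) are assumptions for illustration.

```python
import numpy as np

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. Gates depend on h_prev and c_prev, which
    creates the step-to-step data dependency that limits parallelism."""
    H = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b            # stacked pre-activations, shape (4H,)
    i = 1.0 / (1.0 + np.exp(-z[0:H]))       # input gate
    f = 1.0 / (1.0 + np.exp(-z[H:2*H]))     # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:4*H])                 # candidate cell update
    c = f * c_prev + i * g                  # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c

def lstm_infer(xs, W, U, b, H):
    """Run inference over a sequence. The loop is inherently sequential:
    step t cannot start until step t-1 has produced h and c."""
    h = np.zeros(H)
    c = np.zeros(H)
    for x_t in xs:
        h, c = lstm_step(x_t, h, c, W, U, b)
    return h
```

Within a single step the matrix–vector products offer some parallelism, but across time steps the dependency chain remains, which is why the paper argues for a dedicated processing unit rather than relying on multicore CPUs or GPUs.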
Citation
Silfa, F.A. [et al.]. E-PUR: an energy-efficient processing unit for recurrent neural networks. A: International Conference on Parallel Architectures and Compilation. "Proceedings of the 27th International Conference on Parallel Architectures and Compilation Techniques". 2018, p. 1-12.
Keywords
Accelerators, Long short term memory, Recurrent neural networks
Group of research
ARCO - Microarchitecture and Compilers

Participants