From CNN to DNN Hardware Accelerators: A Survey on Design, Exploration, Simulation, and Frameworks (Foundations and Trends(R) in Electronic Design Automation) (in English)
Leonardo Rezende Juracy; Rafael Garibotti; Fernando Gehm Moraes (Authors)
Now Publishers · Paperback
$54.74
List price: $65.00
You save: $10.26
Book description: "From CNN to DNN Hardware Accelerators: A Survey on Design, Exploration, Simulation, and Frameworks (Foundations and Trends(R) in Electronic Design Automation)"
The past decade has witnessed the consolidation of Artificial Intelligence technology, thanks to the popularization of Machine Learning (ML) models. The technological boom of ML models started in 2012, when the world was stunned by the record-breaking classification performance achieved by combining an ML model with a high-performance graphics processing unit (GPU). Since then, ML models have received ever-increasing attention and have been applied in areas such as computer vision, virtual reality, voice assistants, chatbots, and self-driving vehicles. The most popular ML models are brain-inspired models such as Neural Networks (NNs), including Convolutional Neural Networks (CNNs) and, more recently, Deep Neural Networks (DNNs). These models loosely resemble the human brain, processing data through thousands of interconnected neurons whose connections mimic synapses.

In this growing environment, GPUs have become the de facto reference platform for both the training and inference phases of CNNs and DNNs, owing to their high processing parallelism and memory bandwidth. However, GPUs are power-hungry architectures. To enable the deployment of CNN and DNN applications on energy-constrained devices (e.g., IoT devices), industry and academic research have moved towards hardware accelerators. Following the evolution of neural networks from CNNs to DNNs, this monograph sheds light on the impact of this architectural shift and discusses hardware accelerator trends in terms of design, exploration, simulation, and frameworks developed in both academia and industry.
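To make the workload concrete, the sketch below (illustrative only, not taken from the monograph) shows the core computation of a CNN layer: a 2D convolution built from multiply-accumulate (MAC) operations, which is precisely the operation that GPUs parallelize and that dedicated hardware accelerators are designed to execute efficiently. The function name and the averaging kernel are assumptions chosen for the example.

```python
# Illustrative sketch: the multiply-accumulate core of a CNN layer.
import numpy as np

def conv2d(feature_map, kernel):
    """Valid 2D convolution (no padding, stride 1) over one input channel."""
    h, w = feature_map.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output element is a dot product of the kernel with an
            # input window, i.e. a chain of multiply-accumulate operations.
            out[i, j] = np.sum(feature_map[i:i + kh, j:j + kw] * kernel)
    return out

x = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 feature map
k = np.ones((3, 3)) / 9.0                      # simple 3x3 averaging kernel
y = conv2d(x, k)
print(y.shape)  # (2, 2)
```

Real CNN and DNN layers repeat this loop nest over many channels and filters, which is why accelerator designs focus on arranging thousands of MAC units and on feeding them data within a tight memory-bandwidth and energy budget.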