Convolutional neural networks (CNNs) are becoming a preferred implementation for complex image recognition and artificial intelligence (AI) applications. Unfortunately, traditional processor architectures often struggle to compute CNN layers, given the layers' varying precision requirements and the large number of multiply-accumulate (MAC) operations involved. Intel® Arria® 10 FPGAs allow CNN developers to leverage programmable logic fabric for accelerated execution of lower-precision network layers, while integrated floating-point DSP blocks, which also support fixed-point operations, can be applied to the more demanding calculations.
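To make the MAC workload concrete, the sketch below is a deliberately naive 2D convolution written as explicit loops; it is a generic illustration (the function name and NumPy usage are our own, not from the white paper), showing why a single convolutional layer requires kernel-height × kernel-width multiply-accumulates per output pixel — the operation count that FPGA DSP blocks are built to absorb.

```python
import numpy as np

def conv2d_mac(image, kernel):
    """Naive 2D convolution (valid padding), spelled out as explicit
    multiply-accumulate (MAC) loops to show the dominant CNN workload."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            acc = 0.0
            for m in range(kh):
                for n in range(kw):
                    # One MAC per kernel element, per output pixel:
                    # kh * kw * oh * ow MACs for this single channel pair.
                    acc += image[i + m, j + n] * kernel[m, n]
            out[i, j] = acc
    return out
```

Even this single-channel toy performs kh × kw MACs per output pixel; a real CNN layer multiplies that by input channels, output channels, and batch size, which is why hardware with many parallel MAC units is attractive.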
This white paper will help vision system and neural network engineers understand:
- How a deep learning image classification network (ImageNet) can be ported to an FPGA using the Caffe framework
- How features of Intel® Arria® 10 FPGAs improve the speed, accuracy, and efficiency of ImageNet network layer computation
- How Arria 10-based FPGA accelerator cards deliver orders of magnitude better performance per watt than competing GPGPU-based solutions