Apache Kafka is quickly becoming the streaming data pipeline of choice for big data systems, thanks to its high-throughput, low-latency, fault-tolerant messaging and flexible data architecture. But getting the most out of this messaging platform depends on an underlying network processing infrastructure that can accommodate variability in data formats, data rates, and more. Find out how Intel® FPGA-based solutions provide the low latency and determinism required for optimized Kafka-based big data systems while reducing cost, power consumption, and data center footprint.
In this white paper, network and data center engineers will learn:
- How the Apache Kafka architecture benefits big data streaming systems
- A variety of use cases in which FPGAs outperform competing processing solutions for Kafka workloads
- How Intel® FPGA-based network interface cards (NICs) deliver these performance gains within tight size, cost, and power budgets
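At the heart of the Kafka architecture referenced above is a partitioned, append-only commit log: producers hash record keys to partitions, and consumers track their own read offsets. The toy model below illustrates that idea only; it is not the Kafka API (real applications use a client library such as kafka-python or confluent-kafka):

```python
# Toy illustration of Kafka's partitioned commit-log model.
# Not the real Kafka client API; names here are for explanation only.
from hashlib import md5

class Topic:
    """A topic is a set of append-only partition logs."""
    def __init__(self, num_partitions: int):
        self.partitions = [[] for _ in range(num_partitions)]

    def produce(self, key: str, value: str) -> int:
        # Records with the same key land in the same partition,
        # which preserves per-key ordering (as Kafka's default
        # key-hash partitioner does). Returns the partition index.
        p = int(md5(key.encode()).hexdigest(), 16) % len(self.partitions)
        self.partitions[p].append(value)
        return p

    def consume(self, partition: int, offset: int) -> list:
        # Consumers manage their own offset into each partition log,
        # so replay is just re-reading from an earlier offset.
        return self.partitions[partition][offset:]

topic = Topic(num_partitions=3)
p = topic.produce("sensor-1", "reading=42")
topic.produce("sensor-1", "reading=43")
print(topic.consume(p, 0))  # per-key order preserved: ['reading=42', 'reading=43']
```

Because ordering is guaranteed only within a partition, the choice of partitioning key directly shapes both parallelism and ordering semantics in a Kafka deployment.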