45 Minute Webinar

Building a real-time data pipeline with Enterprise Fluentd and Apache Kafka™

On Demand

About the Webinar

As companies shift assets into the cloud and increasingly rely on data to deliver business products, capturing and unifying that exponentially growing data is key. In this webinar, join Treasure Data, creators of Fluentd, and Confluent, founded by the original creators of Apache Kafka™, to learn how Fluentd and Kafka fit together to form a modern, real-time, scalable data pipeline.

We’ll walk through a real-world example: pulling Kubernetes log data through the pipeline and using KSQL, Confluent’s new open source streaming SQL engine, to perform stream processing tasks that generate the necessary alerts and turn logs into action.
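To give a flavor of what this looks like, here is a minimal KSQL sketch in the spirit of the example above. The topic name (`k8s-logs`), column names, and the `%error%` filter are illustrative assumptions, not the webinar's actual queries:

```sql
-- Register a Kafka topic of JSON-formatted Kubernetes logs as a KSQL stream
-- (topic and column names are hypothetical placeholders)
CREATE STREAM k8s_logs (log VARCHAR, container VARCHAR, ts VARCHAR)
  WITH (KAFKA_TOPIC='k8s-logs', VALUE_FORMAT='JSON');

-- Derive a continuously updated alert stream containing only error lines
CREATE STREAM k8s_errors AS
  SELECT * FROM k8s_logs
  WHERE log LIKE '%error%';
```

Because KSQL queries run continuously, the derived `k8s_errors` stream is itself backed by a Kafka topic that downstream alerting tools can consume.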

You’ll leave with an understanding of:

  • The basics of Fluentd and Kafka
  • KSQL, Confluent’s streaming SQL engine for Kafka
  • How Fluentd and Kafka can work together to form a data pipeline
  • A streaming ETL example that collects and processes Kubernetes log data
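On the collection side of that pipeline, a Fluentd configuration that tails container logs and forwards them to Kafka might look like the sketch below. It uses the `kafka2` output from the fluent-plugin-kafka plugin; the broker address, log path, and topic name are placeholder assumptions:

```conf
# Tail Kubernetes container logs (path is an illustrative placeholder)
<source>
  @type tail
  path /var/log/containers/*.log
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

# Forward matched events to a Kafka topic via fluent-plugin-kafka
<match kubernetes.**>
  @type kafka2
  brokers kafka-broker:9092   # placeholder broker address
  default_topic k8s-logs      # placeholder topic name
  <format>
    @type json
  </format>
</match>
```

From there, Kafka buffers the log events and stream processors such as KSQL can consume them in real time.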

This webinar is for:

Beginner to intermediate users who want an understanding of how these technologies can fit together to create a simple, yet scalable, real-time data pipeline.

Get the Webinar Here:


Gehrig Kunz – Gehrig is a Technical Product Marketing Manager at Confluent and has spent the past several years evangelizing distributed systems like Apache Kafka and Cassandra. His work spans fostering open source communities, market research, use case analysis, content creation, and messaging. Outside of the office, you can find Gehrig at a San Francisco food truck or spending way too much time on a miscellaneous Raspberry Pi project.



Anurag Gupta – Anurag is a Product Manager at Treasure Data driving the development of the unified logging layer, Fluentd Enterprise. Anurag has worked on large-scale data technologies, including Azure Log Analytics, and enterprise IT services such as Microsoft System Center.