Join us for an Apache Kafka meetup on September 14th from 7:00pm, hosted by ResearchGate in Berlin. The agenda and speaker information can be found below. See you there!
7:00pm: Doors open - (Drinks and Snacks)
7:30pm - 7:45pm: Intro by ResearchGate
7:45pm - 8:20pm: Presentation #1: Highly Scalable Machine Learning in Real Time with Apache Kafka’s Streams API, Kai Waehner, Confluent
8:20pm - 8:55pm: Presentation #2: Consistent settings for consistent data and performance, Serge Travin, ResearchGate
8:55pm - 9:30pm: Additional Q&A, Networking, Pizza and Drinks
Kai Waehner works as a Technology Evangelist at Confluent. Kai's main areas of expertise lie in Big Data Analytics, Machine Learning, Integration, Microservices, Internet of Things, Stream Processing and Blockchain. He is a regular speaker at international conferences such as JavaOne, O'Reilly Software Architecture and ApacheCon, writes articles for professional journals, and shares his experiences with new technologies on his blog (www.kai-waehner.de/blog). Contact and references: [masked] / @KaiWaehner / www.kai-waehner.de
Highly Scalable Machine Learning in Real Time with Apache Kafka’s Streams API
Intelligent real-time applications are a game changer in any industry. This session explains how companies across industries build them. The first part shows how to build analytic models with R, Python or Scala, leveraging open source machine learning / deep learning frameworks like TensorFlow or H2O. The second part discusses how to deploy these models within your own applications or microservices by leveraging the Apache Kafka cluster and Kafka's Streams API, instead of setting up a new, complex stream processing cluster. The session focuses on live demos and shares lessons learned for executing analytic models in a highly scalable, mission-critical and performant way.
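The core idea of the talk is to embed the trained model inside the stream processing application itself rather than calling out to a separate model-serving cluster. The talk demonstrates this with Kafka's Streams API (Java); the sketch below illustrates only the pattern, with the Kafka consumer loop replaced by a plain iterator and the trained model replaced by a stub. All names here are hypothetical and not taken from the session.

```python
def load_model():
    """Stand-in for loading a pre-trained model (e.g. a TensorFlow or
    H2O export) into the application's own process at startup."""
    # A trivial "model": flag any event above a fixed threshold.
    threshold = 100.0
    return lambda features: features["amount"] > threshold

def process_stream(records, model):
    """Apply the embedded model to each record as it arrives, the way a
    Kafka Streams mapValues step would -- no remote model server and
    no separate processing cluster is involved."""
    for record in records:
        yield {**record, "flagged": model(record)}

if __name__ == "__main__":
    model = load_model()
    events = [{"amount": 42.0}, {"amount": 250.0}]
    for out in process_stream(events, model):
        print(out)
```

Because the model lives in the same process as the stream processor, scoring adds no network hop, which is what makes the approach scalable and low-latency.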
Senior Software Engineer at ResearchGate
Originally from Saint Petersburg, I moved to Berlin two years ago to help ResearchGate achieve their mission of connecting the world of science and making research open to all. I mostly work on Big Data infrastructure, making sure that our teams can build amazing products using consistent data for both real-time streaming solutions and batch computations.
Consistent settings for consistent data and performance
Having real-time data to deliver the best product to users has become mandatory in the modern world. With Kafka this is easy, scalable and performant. At ResearchGate we use Kafka both for our change data capture (CDC) pipeline and for time-series data. However, ensuring that the data which powers your business is consistent, while keeping your pipeline fault tolerant and performant, is harder than it seems. In this talk we'll discuss settings for Kafka brokers, producers and consumers, along with performance tips we've learned while building our pipeline.
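The abstract does not list the speaker's specific settings, but the kind of broker/producer/consumer tuning it refers to typically centers on a handful of standard Kafka configuration keys. The sketch below shows commonly recommended durability-oriented values as plain Python dicts; the keys are real Kafka client configuration names, while the values are common starting points, not ResearchGate's actual production settings.

```python
# Durability-oriented producer settings: prefer "don't lose or duplicate
# data" over raw throughput.
producer_config = {
    "acks": "all",               # wait for all in-sync replicas to ack
    "enable.idempotence": True,  # avoid duplicates on producer retries
    "compression.type": "lz4",   # cheap throughput win on the wire
}

# Consumer settings: only advance offsets once a record is fully processed.
consumer_config = {
    "enable.auto.commit": False,      # commit offsets manually after processing
    "auto.offset.reset": "earliest",  # don't silently skip unread data
}

if __name__ == "__main__":
    print(producer_config["acks"], consumer_config["enable.auto.commit"])
```

On the broker side, the matching knobs are `min.insync.replicas` (so `acks=all` actually waits for more than one copy) and `unclean.leader.election.enable=false` (so an out-of-sync replica can never become leader and lose acknowledged writes).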
Special thanks to ResearchGate for hosting this event.
Don't forget to join our Community Slack Team!
If you would like to speak or host our next event please let us know! [masked]
NOTE: We are unable to cater for any attendees under the age of 18. Please do not sign up for this event if you are under 18.