It’s time for another devops meetup at the ODI Node in Leeds on Tuesday 17th April.
The evening will run as follows:
6:45 - 7:15 : Doors open, have a chat, make some new devops friends
7:15 - 7:25 : Group updates/news/announcements
7:25 - 8:05 : Look Ma, no Code! Apache Kafka and KSQL
8:05 - 8:20 : Intermission
8:20 - 9:00 : Ansible Enterprise Grade
9:00 - late : Decamp to the Wardrobe
For those of you who squeezed into the ODI for our February meetup, you will have heard from William Hill about their use of Kafka. Building on that, we’ve invited Robin Moffatt from Confluent to come and give us a deep dive into all things Kafka and streaming data. Robin is a Partner Technology Evangelist at Confluent, the company founded by the creators of Apache Kafka, as well as an Oracle ACE Director and Developer Champion. His career has always involved data, from the old worlds of COBOL and DB2, through the worlds of Oracle and Hadoop, and into the current world with Kafka. His particular interests are analytics, systems architecture, performance testing and optimization. He blogs at https://www.confluent.io/blog/author/robin/ and http://rmoff.net/ (and previously http://ritt.md/rmoff) and can be found tweeting grumpy geek thoughts as @rmoff. Outside of work he enjoys drinking good beer and eating fried breakfasts, although generally not at the same time.

Robin’s talk is “Look Ma, no Code! Building Streaming Data Pipelines with Apache Kafka and KSQL”:
Have you ever thought that you needed to be a programmer to do stream processing and build streaming data pipelines? Think again! Companies new and old are all recognising the importance of a low-latency, scalable, fault-tolerant data backbone, in the form of the Apache Kafka streaming platform. With Kafka, developers can integrate multiple sources and systems, which enables low-latency analytics, event-driven architectures and the population of multiple downstream systems. These data pipelines can be built using configuration alone. In this talk, we’ll see how easy it is to stream data from a database such as MySQL into Kafka using the Kafka Connect API. In addition, we’ll use KSQL to filter, aggregate and join it to other data, and then stream this from Kafka out into multiple targets such as Elasticsearch and MySQL. All of this can be accomplished without a single line of code!
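To give a flavour of the “no code” idea ahead of the talk, here is a minimal KSQL sketch in the spirit of the abstract. The topic and column names are purely illustrative assumptions, not taken from Robin’s demo:

```sql
-- Declare a stream over an existing Kafka topic
-- (topic 'orders' and its columns are hypothetical examples)
CREATE STREAM orders (order_id INT, customer VARCHAR, total DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- Continuously filter large orders into a new, derived Kafka topic,
-- which a Kafka Connect sink could then push to Elasticsearch or MySQL
CREATE STREAM big_orders AS
  SELECT order_id, customer, total
  FROM orders
  WHERE total > 100;
```

The point of the talk is that statements like these, plus Kafka Connect configuration, replace custom consumer/producer code entirely.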
Our second talk of the evening is from Phil Cornelius, who is an Ansible Specialist for EMEA. He joined Red Hat in 2016 from Credit Suisse, where he was responsible for the developer tools and services for 3000 applications, developed by 8000 developers globally. Phil’s background is primarily application development, specifically Enterprise Java, and he brings over 18 years of experience in what is now commonly called DevOps.

As you introduce Ansible into your organisation, there are additional requirements to make running Ansible ‘Enterprise Grade’. This session is a live demo of some of the key use cases for Ansible in the Enterprise. You will get to see Ansible Tower with a specific focus on Application Lifecycle Management.