Data stream processing is redefining what’s possible in the world of data-driven applications and services. Apache Flink is one of the systems at the forefront of this development, pushing the boundaries of what can be achieved with data stream processing.
Apache Flink currently powers some of the largest data stream processing pipelines in the world, with users such as Alibaba, Uber, ING, Netflix, and more running Flink in production. Flink is also one of the most active and fastest-growing open source projects in the Apache Software Foundation.
One of Flink’s most popular APIs is SQL. Unlike many other systems, Flink features SQL as a unified API for batch and stream processing, meaning that a query computes the same results regardless of whether it is executed on static data sets or data streams. Flink’s SQL support is the foundation for internal as well as publicly available data analytics services at enterprises such as Alibaba, Huawei, and Uber.
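To illustrate the idea, a windowed aggregation like the following sketch (table and column names are hypothetical) computes the same per-hour counts whether `clicks` is a bounded table or an unbounded stream:

```sql
-- Hypothetical example: count clicks per user per hour.
-- The same query runs unchanged on a static table or a data stream.
SELECT
  user_id,
  TUMBLE_START(click_time, INTERVAL '1' HOUR) AS window_start,
  COUNT(*) AS click_cnt
FROM clicks
GROUP BY
  user_id,
  TUMBLE(click_time, INTERVAL '1' HOUR)
```

On a stream, the `TUMBLE` window tells Flink when a result is complete and can be emitted; on a batch input, the same grouping simply partitions the data by hour.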
data Artisans was founded in 2014 by the original creators of the Apache Flink project, and we’re building the next-generation platform for real-time data applications. We are tackling some of today’s biggest challenges in big data and data streaming.
- In this role, you will work on “Streaming SQL”, one of the hottest topics in stream processing, as a member of the team at data Artisans that develops Flink’s relational APIs.
- Flink’s SQL support receives a lot of attention from both users and contributors. You will work closely with the Apache Flink community to extend support for ANSI SQL features, tune performance at the level of both the query optimizer and the query runtime, and implement connectors to ingest data from and emit data to external storage systems.
- When you are not coding or discussing feature designs, you’ll have plenty of opportunities to help evangelize Flink’s SQL API by writing blog posts or speaking at meetups and conferences around the world.
- Please note: In this role, you will design and implement a system that optimizes and executes SQL queries. Think of it as building (not using!) a database system like Oracle or SQL Server yourself.
What you’ll do all day:
- Design and implement new features for Flink’s SQL API.
- Tune the performance of SQL queries by tweaking the query optimizer, improving generated code, and removing bottlenecks.
- Implement connectors for external stream sources and storage systems.
- Work with external contributors, discuss their designs, and review their code.
- Write blog posts and present Flink at high-impact conferences around the world.
- Become an Apache Flink and stream processing expert.
You will love this job if you …
… are familiar with the design of distributed data processing systems (e.g., Hadoop, Kafka, Flink, Spark)
… know how to design and implement a relational database or query processor
… have a good command of Java and/or Scala, and of course SQL
… like working with an awesome open source community to tackle challenging problems
… have great English skills and enjoy getting in touch with users from around the world
… hold at least a Master’s degree in Computer Science, Mathematics, Engineering, or a similar field
What we offer:
- Competitive salary and stock options
- Tech gear of your choice
- International team environment (10 nationalities so far)
- Flexible working arrangements (home office, flexible working hours)
- Unlimited vacation policy, so take time off when you need it
- A spacious office in the Kreuzberg district of Berlin
- Snacks, coffee, and beverages in the office
- Relocation assistance if needed
- Hackathons and weekly technical Lunch Talks to keep your head full of inspiration and ideas!