Your tasks:
- Conception and development of Big Data / Analytics platforms
- Selection of suitable technologies for the implementation of Data Analytics Use Cases
- Development, execution and scaling of data pipelines on cluster architectures
- Design and implementation of data models
- Delivery of data to appropriate storage technologies for a variety of applications, including interactive dashboards and statistical analysis
- Assessment of data quality and implementation of transformations for data cleansing
- Support of the Data Science Team in the development of statistical models
- Operationalization of the Data Scientists' models and results to make them available for production use or to automate them
Your profile:
- Degree in computer science or a comparable qualification
- Experience in the conceptual design and implementation of (Big) Data projects or professional experience as a software developer
- Very good knowledge of at least one of the following programming languages: Scala, Java
- Experience with databases, data models and ETL processes, as well as a basic understanding of distributed systems
- Sound knowledge of and experience with the Hadoop ecosystem
- Know-how in one of the Hadoop distributions (MapR, Cloudera or Hortonworks) is an advantage
- Ideally, initial experience with Apache Spark
- Basic knowledge of at least one cloud platform (AWS, Azure or Google Cloud) and ideally knowledge of Git
- Experience with data warehousing is a plus
We offer:
- Shaping the digital future - Join our Data Journey
- Diverse range of tasks with a high implementation speed and exciting challenges in building up a rapidly growing company
- Direct communication channels, flat hierarchies up to the CEO
- Authentic and value-oriented corporate culture with a special team spirit
- Sports activities, e.g. in our own fitness and relaxation room or at the Bundesliga foosball table
- Joint celebrations: summer party, Christmas party and a visit to the Oktoberfest
- Free drinks and fruit