What you will do
You will work with cutting-edge Big Data technologies like Spark, Spark Streaming, Kafka, Hadoop, Impala, and Amazon Web Services (AWS). You will influence the architecture of our platform and create the future of data-driven solutions at Bonial.com.
Furthermore, you will…
- Implement various Spark jobs to keep our data processing routines running reliably
- Develop and enhance our dashboard applications and data crunching algorithms
- Build a platform for data science, BI and analysts to get data, run their queries and analyze data
- Support the data science team with data modeling, performance, and scalability problems
- Apply or learn machine learning algorithms to improve our product and drive decisions
What you bring to the table
You have a degree in Computer Science or a related field and an excellent command of Java. Additionally, you have a deep understanding of OOP concepts and know which patterns serve each need.
- Have several years of experience working in Big Data
- Have good knowledge of Java, preferably also Scala and/or Python
- Have a good grasp of Big Data concepts, or the willingness to grow from developer to machine learning engineer
- Have experience with the Big Data technology stack (Hadoop, Spark, HDFS, Hive, Impala, HBase, Redshift…)
- Ideally have experience with the AWS product stack (Amazon EMR, Amazon Redshift, Amazon DynamoDB)
- Ideally have some experience with Ops tasks on AWS or another cloud platform
- Bring passion, experience, and problem-solving abilities to the team.
You can expect…
- To be the next member of a young and international team with people from 35+ countries.
- To contribute and learn new things, every day.
- To cover the needs of our German, French, and US markets.
- Room to grow: you’ll have the opportunity to bring your ideas to the table and take on real responsibilities
- A friendly, laid-back work atmosphere with lots of after-work activities and events