ABOUT THE TEAM
Department: Zalon Data
Reports to: Product Owner Zalon Data
Team Size: >10
Recruiter Name, E-mail: Rebecca Moore, rebecca.moore@zalando.de
Are you looking for a greenfield opportunity to build a cutting-edge data infrastructure (mostly) from scratch? We are ramping up our data infrastructure team for all of Zalon’s machine learning, analytics and business needs.
WHERE YOUR EXPERTISE IS NEEDED
- Design and expand our data and data pipeline architecture, and optimize data flow and collection for cross-functional teams
- Assemble and tier large, complex data sets that meet functional and non-functional business requirements
- Support our Data Scientists by preparing easily consumable data sets so they can focus on data science instead of data cleansing and transformation, and help bring their products into production
- Contribute to developing Zalon’s overall technical capabilities by working one-on-one with your data scientists and engineers

WHAT WE’RE LOOKING FOR
Join our Identity team and help improve the trivago user experience by ensuring coverage, quality and accuracy of accommodation content and advertiser matching. We are looking for someone who is interested in leveraging machine learning techniques to drive continual improvements in our complex matching process. In addition, you will drive the definition of consolidation algorithms that amalgamate and rank data from multiple big-data sources.
You will work in a cross-functional team to create and optimize a matching and content consolidation service.
BECOME PART OF THE REWE SYSTEMS DEVELOPMENT TEAM!
We develop efficient IT systems for the retail environment, solve challenges with innovative technologies, and optimize the REWE Group’s existing application landscape. The future and established standards meet here, every day.
Do you enjoy tackling challenging IT projects end to end, are you interested in new technologies and trends, do you enjoy both agile and classical teamwork, and would you like to see the results of your work in real life?
Data Engineer (m/w/d) | future demand | Apply Now
To support our further development, we are looking for a motivated and committed Data Engineer (m/w/d) at our Berlin location, starting at the earliest possible date.
Your Responsibilities
- You are responsible for a scalable infrastructure for analyzing large volumes of data using data mining, predictive analytics and machine learning.
- You integrate data from a wide variety of sources into the analysis processes and work closely with the team’s data scientists.
- You ensure data quality along the entire data processing chain (a simplified sketch of such a check follows below).
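As a rough, illustrative sketch of what an automated data quality check along that chain could look like with Spark, consider the snippet below; the table path, column names and thresholds are assumptions made up for the example, not part of this role:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object CustomerQualityCheck extends App {
  val spark = SparkSession.builder()
    .appName("customer-quality-check") // hypothetical job name
    .getOrCreate()

  // Illustrative input: a curated customer table (path is made up).
  val customers = spark.read.parquet("s3://example-bucket/curated/customers/")

  val total = customers.count()
  require(total > 0, "customer table is empty")

  val missingEmail  = customers.filter(col("email").isNull).count()
  val duplicateKeys = total - customers.dropDuplicates("customer_id").count()

  // Fail the run if completeness or uniqueness fall below the assumed thresholds.
  val emailCompleteness = 1.0 - missingEmail.toDouble / total
  require(emailCompleteness >= 0.99, f"email completeness too low: $emailCompleteness%.3f")
  require(duplicateKeys == 0, s"found $duplicateKeys duplicate customer_id values")

  spark.stop()
}
```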
The product: A sophisticated query engine
We are building a sophisticated OLAP/SQL query engine for advanced analytics, accessing data from many different data sources (RDBMSs, Hive, Impala, Druid, Elasticsearch and others). The query engine is written in Scala and builds on Apache Calcite. We are planning to open-source it during the course of this year, with the goal of eventually making it an Apache project.
The stack
Scala, Java, Apache Calcite, Apache Spark
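As a small taste of the Calcite building blocks such an engine is composed of, the sketch below parses a SQL statement into Calcite’s AST from Scala; the query, table and column names are invented for the example and are not taken from the actual product:

```scala
import org.apache.calcite.sql.parser.SqlParser

object ParseSketch extends App {
  // A made-up analytics query; in the real engine the SQL would come from users.
  val sql = "SELECT customer_id, SUM(amount) AS total FROM orders GROUP BY customer_id"

  // Parse the statement into Calcite's SqlNode tree.
  val ast = SqlParser.create(sql).parseQuery()

  // From here an engine would validate the tree against its registered schemas
  // (Hive, Druid, Elasticsearch, ...) and plan it into an executable query.
  println(ast.toString)
}
```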
About the team:
We are responsible for building the data platform. We are highly focused on simplicity and ease of use by all data-oriented users. We are a strong enabler of our data-strategy: make the right data easily available to everybody for high-quality decisions.
Our tech stack: Scala, Java, everything with Kafka (Streams, Connect, KSQL, …), Akka, Spark, Flink, Docker, K8s, Play, Slick, AWS services (e.g. EMR, S3), New Relic, JUnit, ScalaTest, ScalaSpec
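To give a feel for what “everything with Kafka” can look like in practice, here is a minimal Kafka Streams topology in Scala; the application id, broker address and topic names are invented for the example and are not part of the actual platform:

```scala
import java.util.Properties

import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.serialization.Serdes._ // older Kafka versions: org.apache.kafka.streams.scala.Serdes._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}

object EventCleaner extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-cleaner")     // hypothetical application id
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // assumed local broker

  val builder = new StreamsBuilder()

  // Read raw events, drop empty payloads, and republish to a cleaned topic.
  builder
    .stream[String, String]("raw-events")
    .filter((_, value) => value != null && value.nonEmpty)
    .to("clean-events")

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}
```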
ABOUT THE TEAM
Department: Zalon Data
Reports to: Product Owner Zalon Data
Team Size: >10
Recruiter Name, E-mail: Taryn Bonugli
Are you looking for an opportunity to build cutting-edge data products? We are ramping up our data team to support and scale Zalon’s machine learning, analytics and business needs. To do so, we are looking for an analytical problem solver who likes to work in an agile, cross-functional team and is up for the challenge of shaping, building and refining the company’s data quality for our services, together with the integration of machine learning algorithms.
About us:
Süddeutsche Zeitung Digitale Medien is expanding its team. Work with us on the digital future of the newspaper!
As a subsidiary of the Süddeutscher Verlag, Süddeutsche Zeitung Digitale Medien GmbH is the digital creative hub of Germany’s largest national quality daily newspaper. Many dedicated minds here continue to develop SZ.de in the browser and as an app. The digital edition of the SZ with all its special publications, SZ-Magazin.de and jetzt.de are also created here, along with services such as newsletters, messengers, chatbots and content for all the senses, such as DasReze.
Big Data Software Engineer
Lead your own development team and our customers to success! Ultra Tendency is looking for someone who impresses not only with excellent code, but also with a strong presence and leadership.
At Ultra Tendency you would:
- Work in our office in Berlin/Magdeburg and on-site at our customers’ offices
- Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)
- Develop outstanding Big Data applications following the latest trends and methodologies
- Be a role model and strong leader for your team and oversee the big picture
- Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders

Ideally you have:
As a Data Engineer in the Merchant Operations team, you will build the data warehouse for the marketplace and process large amounts of raw data. You will work closely with key stakeholders in product, engineering and operations to form a deep understanding of marketplace dynamics.
WHERE YOUR EXPERTISE IS NEEDED
- Own, design and organise all data flows from scratch
- Involve product and engineering to integrate various sources of data
- Develop rigorous data science models to aggregate inconsistent real-time signals into strong predictors of market trends
- Automate and own the end-to-end process of modelling and data visualization
Do you love writing high-quality code? Do you enjoy designing algorithms for large-scale Hadoop clusters? Is Spark your daily business? We have new challenges for you!
Your Responsibilities:
- Solve Big Data problems for our customers in all phases of the project life cycle
- Build program code, test and deploy to various environments (Cloudera, Hortonworks, etc.)
- Enjoy being challenged and solve complex data problems on a daily basis
- Be part of our newly formed team in Berlin and help drive its culture and work attitude

Job Requirements
Your new role – challenging and future-oriented
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional and non-functional use-case requirements
- Build the infrastructure and procedures required for optimal Extraction, Transformation, and Loading (ETL) of data from a wide variety of data sources (see the sketch below)
- Build the data pipelines that support data analytics, aiming to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics
- Work alongside Data Scientists to productionize advanced computer science algorithms based on statistical and machine learning models
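A minimal sketch of such an ETL step with Spark could look like the following; the paths, column names and filter condition are illustrative assumptions rather than an actual pipeline from this role:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

object OrdersEtl extends App {
  val spark = SparkSession.builder()
    .appName("orders-etl") // hypothetical job name
    .getOrCreate()

  // Extract: read raw events from a made-up landing zone.
  val raw = spark.read.json("s3://example-bucket/landing/orders/")

  // Transform: keep completed orders and derive a partition-friendly date column.
  val cleaned = raw
    .filter(col("status") === "completed")
    .withColumn("order_date", to_date(col("created_at")))

  // Load: write a partitioned, analytics-ready table.
  cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/warehouse/orders/")

  spark.stop()
}
```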
The position provides the opportunity to work on a wide range of interesting topics, from operationalizing deep learning models to training recommender systems on petabytes of data. As part of the data science team, you will also be given a lot of responsibility to shape the direction of the team. If you would like to become part of this success story, please send your application.
About your new role
- You will be part of the data science team and work closely with our data scientists to operationalize machine learning pipelines
- You will develop and implement effective data processing architectures
- You will also collaborate a lot with the data warehouse and data platform teams
- You will participate in meetups, conferences and the research community and apply what you’ve learned back in your daily work

Skills & Requirements
- A deep understanding of distributed computing frameworks such as Spark (particularly SparkML and SparkSQL, including tuning, optimizing and debugging Spark jobs), Hadoop and/or Flink
- Experience with big data on AWS, in particular using EMR and S3
- Experience with Docker and container orchestration such as Kubernetes, Swarm or similar
- Experience with pipeline management tools like Airflow, Luigi or NiFi
- Experience with programming languages such as Python, Go and/or Scala
- Good knowledge of SQL/RDBMS
- Experience with the command line, shell scripting and version control (Git)
- Excellent communication skills in English, both oral and written; German is nice to have
- Preferably experience with automatic configuration management like Terraform and Puppet
- Preferably experience with modern agile software development practices like microservices, test-driven development, pair programming, CI/CD, etc.
Tasks
- Design and implementation (rapid prototyping) of data flows, data models and data interfaces in the analytics and machine learning environment
- Use-case-driven, agile collaboration in small teams with domain experts, data scientists and data engineers
- Definition, communication and continuous development of internal data standards
- Development and description of data catalogs and data lifecycles
- Definition and technical implementation of measures to ensure data quality across the entire data lifecycle
- Close collaboration with the Cyber Security team on the use-case-based assessment of data security measures
- Following trends in the analytics and data governance space
- Contributing to the definition of TÜV SÜD’s data strategy

Qualifications
Data stream processing is redefining what’s possible in the world of data-driven applications and services. data Artisans with its dA Platform product and its major contributions to open-source Apache Flink is at the forefront of this development.
Our teams are pushing the boundaries of what can be achieved with data stream processing, allowing our users and customers to gain more insights into their data in real-time.
Apache Flink currently powers some of the largest data stream processing pipelines in the world, with users such as Alibaba, Uber, ING, Netflix, and more running Flink in production.
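For readers new to Flink, a minimal DataStream job in Flink’s Scala API looks roughly like the sketch below; the elements and job name are placeholders, and a production pipeline would of course read from a real source such as Kafka:

```scala
import org.apache.flink.streaming.api.scala._

object MinimalFlinkJob extends App {
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  // A tiny bounded stream stands in for a real streaming source.
  env
    .fromElements("flink", "kafka", "streams")
    .map(word => (word, word.length))
    .print()

  env.execute("minimal-flink-job") // hypothetical job name
}
```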
FlixMobility is a mobility provider with locations in Europe and the USA. Since 2013, the green FlixBuses have offered millions of people a new alternative for traveling comfortably, affordably and in an environmentally friendly way. Thanks to a unique business model and innovative technology, the start-up has established Europe’s largest long-distance bus network in a very short time. In 2018, FlixMobility launched the first green FlixTrains in Germany and is expanding to the US West Coast.
Your responsibilities - You can make a difference
At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.
Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.