Your role:
- Design, build, test and package components to ingest, transform and store large volumes of streaming data, composable into reliable data pipelines (see the sketch after this list)
- Conduct requirements engineering and map requests to a suitable data architecture
- Orchestrate and instrument data pipelines so they are scalable and maintainable
- Maintain the existing code base and take care of automated building, packaging and deploying
- Evaluate and benchmark technology options; run PoCs and estimate operations cost
- Align with Backend Engineers, define requirements and request optimizations
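No code accompanies these postings, but as a rough, hypothetical illustration of what "composable" pipeline components can look like, here is a minimal Python sketch in which each stage consumes and yields records so stages can be chained; all names and the sample data are invented.

    import json
    from typing import Iterable, Iterator

    def parse_json(lines: Iterable[str]) -> Iterator[dict]:
        """Ingest: turn raw text lines into records, skipping bad input."""
        for line in lines:
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                continue  # a real pipeline might route this to a dead-letter sink

    def keep_valid(records: Iterable[dict]) -> Iterator[dict]:
        """Transform: drop records missing required fields."""
        for rec in records:
            if "event_id" in rec and "payload" in rec:
                yield rec

    def store(records: Iterable[dict], sink: list) -> None:
        """Store: an in-memory list stands in for a real sink here."""
        sink.extend(records)

    if __name__ == "__main__":
        raw = ['{"event_id": 1, "payload": "a"}', "not json", '{"payload": "b"}']
        sink: list = []
        store(keep_valid(parse_json(raw)), sink)
        print(sink)  # [{'event_id': 1, 'payload': 'a'}]

Because each stage depends only on the iterator protocol, stages can be tested in isolation and recombined, which is one common reading of "composable".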
At KI labs we design and build state-of-the-art software and data products and solutions for the major brands of Germany and Europe. We aim to push the status quo of technology in corporations, with a special focus on software, data, and culture. Inside, we are a team of software developers, designers, product managers and data scientists who are passionate about building the products of the future, today. We believe in open source and independent teams, follow Agile practices and the lean startup method, and aim to share this culture with our clients.
We’re looking for a motivated and driven (Senior) Data Engineer (m/f/d) who will help us shape our team, drive the company to the next level, and have the most direct influence on our success.
Your Tasks – Paint the world green
You will be responsible for building a data platform for running big data workloads at scale, collecting and combining data from various sources, and helping data consumers make use of the data in our data lake.
As a Data Engineer you will be working on our large-scale analytical databases and the surrounding ingestion pipeline. Your job will involve constant feature development, performance improvements and assuring platform stability. The mission of our analytics team is "data-driven decisions at your fingertips". You own and provide the system that all business decisions will be based on. Precision and high-quality results are essential in this role.
You can expect an international team of Developers who are based in Hamburg.
Your role
Data engineers are the central productive force of PriceHubble. As a Lead Data Engineer, your mission will be to lead our Data Engineers across our three offices and to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide range of new datasets, build new datasets, and extract and create new features.
About SumUp
SumUp is a successful and fast-growing company that operates in many countries, empowering merchants to accept card payments in an easy and convenient way and awakening the entrepreneur within anyone. At the beginning of 2018 we were named the fastest-growing company in Europe, and we won't stop there. We already operate in over 30 countries, with new countries added every year on our path to becoming the first global card acceptance company in the world.
Your Tasks – Paint the world green
- Holistic cloud-based infrastructure automation
- Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark
- Microservice platforms based on Docker
- Development infrastructure and QA automation
- Continuous Integration/Delivery/Deployment

Your Profile – Ready to hop on board
- Experience in building and operating complex infrastructure
- Expert-level: Linux, system administration
- Experience with cloud services; expert level with either AWS or GCP
- Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift
- Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"
- Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer
- Experience with provisioning / configuration management tools (Ansible, Chef, Puppet, Salt)
- Experience designing, building and integrating systems for instrumentation, metrics/log collection and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK
- At least basic knowledge in designing and implementing Service Level Agreements
- Solid knowledge of network and general security engineering
- At least basic experience with systems and approaches for test, build and deployment automation (CI/CD): Jenkins, TravisCI, Bamboo
- At least basic hands-on DBA experience, experience with data backup and recovery
- Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory
These tasks are waiting for you:
- Be the professional lead for our data team
- Obtain actionable business-oriented insights from processed IoT data
- Identify and explore opportunities to apply machine learning algorithms and guide their development from initial testing to their final user-facing application
- Communicate and collaborate with the digital marketing, business development and product development teams in a high-pace environment
- Processing and aggregation of terabytes of raw near-real-time/real-time IoT data (see the sketch after this list)
- Maintenance, expansion and optimization of existing data pipelines (ETL/ELT)
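As a hedged, purely illustrative sketch of the kind of windowed aggregation such an IoT pipeline performs, here is a minimal pandas example that buckets raw readings into one-minute averages per device; the column names and sample data are invented.

    import pandas as pd

    # Invented sample of raw IoT sensor readings.
    readings = pd.DataFrame({
        "ts": pd.to_datetime([
            "2024-01-01 00:00:10",
            "2024-01-01 00:00:40",
            "2024-01-01 00:01:05",
        ]),
        "device_id": ["a", "a", "b"],
        "temperature": [21.0, 21.4, 19.8],
    })

    # One-minute average per device: a toy stand-in for the windowed
    # aggregations a streaming job would run at terabyte scale.
    per_minute = (
        readings.set_index("ts")
        .groupby("device_id")["temperature"]
        .resample("1min")
        .mean()
        .reset_index()
    )
    print(per_minute)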
ABOUT THE TEAM
Department: Personalized Marketing
Reports to: Data Engineering Lead
Team Size: 5-10
Recruiter Name, E-mail: Rebecca Moore, rebecca.moore@zalando.de
You'll join the Data Engineering team within Personalized Marketing, a team that builds advanced data products focused on measuring the performance of our marketing activities, with a direct impact on millions of euros spent.
WHERE YOUR EXPERTISE IS NEEDED
- Implement and maintain sustainable, high-performance, growth-ready data-processing and data-integration pipelines to measure the performance (return on investment) of our marketing activities
- Build applications and services to integrate data from internal and external sources, especially to ensure better cross-device recognition of customers or to increase tracking data quality, an essential source for our marketing campaign performance measurement
- Collaborate closely with our product managers to define realistic roadmaps and products that will be successful in production
- Support our Data Scientists by preparing easily consumable data sets so they can focus on data science instead of data cleansing and transformation, and by bringing their products into production
About the job
The Big Data Developer works on our SaaS platform and brings passionate inquisitiveness, primary research and forward thinking to every assignment. Through shared responsibility for all team deliverables, and through communication with Product Owners as well as other stakeholders within the company, the Big Data Developer builds software that passes automated acceptance tests and delivers sprint commitments.
You can expect a very international team of Developers who are based in Hamburg.
BECOME PART OF THE REWE SYSTEMS DEVELOPMENT TEAM!
We develop efficient IT systems for the retail environment, solve challenges with innovative technologies, and optimize the REWE Group's existing application landscape. With us, the future and the standard meet every day.
Do you like to devote yourself to challenging IT projects end to end, are you interested in new technologies and trends, do you enjoy both agile and classical teamwork, and would you like to see the results of your work in real life?
About the job
As a Data Engineer at HelloFresh, you will collaborate to build a robust and highly performant data platform using cutting-edge technologies. You will develop distributed services that process data in batch and in real time, with a focus on scalability, data quality and business requirements. You will have the opportunity to work on challenging data-related problems, like:
- Design and build a world-class self-service data platform
- Take end-to-end responsibility for the design, build and maintenance of batch and real-time data pipelines
Data is core to everything we do at PriceHubble. We rely on a wide variety of data from multiple sources. As a data scraper, your mission will be to source, capture and extract new datasets that help us develop cutting-edge valuation and forecasting tools for the real estate market.
Responsibilities:
- Identify and analyse new data sources
- Access new data with whatever strategy is suitable, e.g. write web-scraping and parsing scripts, … (see the sketch after this list)
- Create new datasets and integrate them in top shape into the data pipeline
- Define and implement data acquisition strategies
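The ad leaves tooling open, but as a hedged illustration of a web-scraping and parsing script, here is a minimal Python sketch using requests and BeautifulSoup; the URL and CSS selectors are invented placeholders, not a real target site.

    import requests
    from bs4 import BeautifulSoup

    # Invented placeholder URL; a real scraper would target an actual
    # listings site and respect its robots.txt and terms of service.
    URL = "https://example.com/listings"

    def scrape_listings(url: str) -> list[dict]:
        """Fetch one page and parse listing titles and prices into records."""
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        soup = BeautifulSoup(response.text, "html.parser")
        records = []
        for item in soup.select(".listing"):  # invented CSS class
            title = item.select_one(".title")
            price = item.select_one(".price")
            if title and price:
                records.append({
                    "title": title.get_text(strip=True),
                    "price": price.get_text(strip=True),
                })
        return records

    if __name__ == "__main__":
        for record in scrape_listings(URL):
            print(record)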
As a data engineer, your mission will be to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide range of new datasets, build new datasets, and extract and create new features. These features and insights are either used directly as part of our product or as signals in our machine learning algorithms.
About the team:
We are responsible for building the data platform. We are highly focused on simplicity and ease of use for all data-oriented users. We are a strong enabler of our data strategy: make the right data easily available to everybody for high-quality decisions.
Our tech stack: Scala, Java, everything with Kafka (Streams, Connect, KSQL, …), Akka, Spark, Flink, Docker, K8s, Play, Slick, AWS services (e.g. EMR, S3), New Relic, JUnit, ScalaTest, ScalaSpec
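No code accompanies the posting, but as a minimal sketch of the consume-transform-produce pattern this stack revolves around, here is an example using the kafka-python client (Python rather than the team's Scala, purely for brevity); the topic names and broker address are invented.

    import json
    from kafka import KafkaConsumer, KafkaProducer

    # Invented topic names and broker address.
    consumer = KafkaConsumer(
        "raw-events",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # The loop below does imperatively what Kafka Streams expresses
    # declaratively: read, transform, write to another topic.
    for message in consumer:
        event = message.value
        event["processed"] = True  # stand-in for a real transformation
        producer.send("processed-events", event)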
Yelp’s data mining engineers are a passionate and diverse group of engineers who can work across disciplines to build incredible data-driven products. We are responsible for the whole stack: scoping the problem by digging through data with Redshift and Jupyter, researching and developing potential algorithms and approaches, training and tuning a model, and finally scaling it to millions of users, businesses, and advertisers.
On the User Location Intelligence team, our mission is to reliably handle the industry’s best user location data.
Department: Merchant Operations
Reports to: Team Lead Strategy Development
Team Size: >10
The purpose of the role is to enable Merchant Operations teams through the creation and maintenance of a data pipeline that performs data integration at scale, and through the maintenance of our historical and near-real-time data warehouse.
WHERE YOUR EXPERTISE IS NEEDED
- You are experienced in building, evaluating, maintaining and improving large-scale data-driven products
- Create and maintain a flexible, high-performance data processing and integration pipeline at scale, providing high-quality datasets for our internal users and applications
Market research is the original data-driven business. Incubated and spun off from a university, we have earned the trust of the world's biggest companies and leading brands for more than 80 years. Today, everything at GfK starts and ends with Data and Science.
We are proud of our heritage and of our future: we are currently on a transformational journey from a traditional market research company to a trusted provider of prescriptive data analytics powered by cutting-edge technology.