Posts List

Lead Data Engineer at PriceHubble AG (Zürich, Switzerland)

Your role: Data engineers are the central productive force of PriceHubble. As a Lead Data Engineer, your mission will be to lead our data engineers across the three offices and to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide variety of new datasets, build new datasets, and extract and create new features.

Data Infrastructure Engineer (m/f/d) at FlixBus (Berlin, Germany)

Your Tasks – Paint the world green
- Holistic cloud-based infrastructure automation
- Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark
- Microservice platforms based on Docker
- Development infrastructure and QA automation
- Continuous Integration/Delivery/Deployment

Your Profile – Ready to hop on board
- Experience in building and operating complex infrastructure
- Expert-level: Linux, System Administration
- Experience with Cloud Services, expert-level with either AWS or GCP
- Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift
- Mindset: “Automate Everything”, “Infrastructure as Code”, “Pipelines as Code”, “Everything as Code”
- Hands-on experience with “Infrastructure as Code” tools: Terraform, CloudFormation, Packer
- Experience with provisioning / configuration management tools (Ansible, Chef, Puppet, Salt)
- Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK
- At least basic knowledge in designing and implementing Service Level Agreements
- Solid knowledge of network and general security engineering
- At least basic experience with systems and approaches for test, build and deployment automation (CI/CD): Jenkins, TravisCI, Bamboo
- At least basic hands-on DBA experience, including data backup and recovery
- Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

Link: https://www.

Data Privacy and Software Security Engineer at LCA Lab at EPFL (Lausanne, Switzerland)

Your mission: As a Data Privacy and Software Security Engineer in the LCA1 Lab, you will contribute to developing technology standards and best practices for protecting genomic and health data and services, consistent with the Global Alliance for Genomics and Health (GA4GH) policy framework. You will have two main responsibilities: contributing to ongoing software developments and providing direct support to the leaders of the Data Security Work Stream at the GA4GH.

Senior Data Engineer at Datawallet (Berlin, Germany)

The Role: We are looking for a full-time data engineer. You will be at the core of making people’s data work for them. You will design and maintain the ETL data pipeline—from pulling and parsing data from various APIs and downloaded data stores to populating normalized RDBs and calculating cached views (usually in a NoSQL form) to power our various data products and services. While you are not constrained in your tools, our current stack involves Python, js/node, PostgreSQL, Airflow, AWS Lambda and hosting.

Site Reliability Engineer | Data Science Platform at Contiamo (Berlin, Germany)

The product: A containerized data science environment

Our ambition is to create a platform that gives data scientists a flexible, consistent, and simple environment based on Docker containers, where their code can be written in a large variety of languages (Python, R, Go, Scala). This tool then turns their code into stateless functions that can be easily deployed into powerful data pipelines.

The stack: Kubernetes, OpenFaaS, Docker

The challenge: Having great DevOps engineering support is crucial in order to guarantee that our microservice-based platform runs smoothly and reliably, no matter where it is deployed (we support cloud and on-premise deployments).

Senior Data Scraper (Paris, Zurich, Berlin) at PriceHubble AG (Zürich, Switzerland)

Data is core to everything we do at PriceHubble. We rely on a wide variety of data from multiple sources. As a data scraper, your mission will be to source, capture and extract new datasets that help us develop cutting-edge valuation and forecasting tools for the real estate market.

Responsibilities:
- Identify and analyse new data sources
- Access new data with whatever strategy is suitable, e.g. write web-scraping and parsing scripts,…
- Create new datasets and integrate them in top shape into the data pipeline
- Define and implement data acquisition strategies

Requirements:

Data Engineer (Paris, Berlin and/or Zurich) at PriceHubble AG (Zürich, Switzerland)

As a data engineer, your mission will be to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide variety of new datasets, build new datasets, and extract and create new features. These features and insights are either used directly in our product or serve as signals in our machine learning algorithms.

Data Engineer (m/f/d) who is NOT looking for the next Gig! at GfK (Nürnberg, Deutschland)

Market research is the original data-driven business. Incubated and spun off from a university, we have earned the trust of the world's biggest companies and leading brands for more than 80 years. Today, everything at GfK starts and ends with Data and Science. We are proud of our heritage – and our future: currently we're on a transformational journey from a traditional market research company to a trusted provider of prescriptive data analytics powered by cutting-edge technology.

Data Backend Engineer (Python) - Retail Operations at Zalando SE (Berlin, Deutschland)

ABOUT THE TEAM
Department: Retail Operations – Team Sizing & In-season Algorithms
Reports to: Engineering Lead
Team Size: <10
Recruiter Name, E-mail: Rebecca Werner, rebecca.werner@zalando.de

As a Data Backend Engineer in the Sizing & In-season Algorithms team, you'll build data-driven products that help Zalando place the right orders for the right articles at the beginning of each buying season. The systems you'll build provide recommendations for quantities and sizes of each new article.

WHERE YOUR EXPERTISE IS NEEDED

Full Stack Developer Data / Content Management at AXA Schweiz (Winterthur, Switzerland)

Digital transformation is often just marketing – not with us! Develop, implement and operate our content management platform in an agile product team. Together with your team, you are a key success factor for digitalization and, at the same time, the memory of AXA Schweiz. Our culture is shaped by DevOps, Scrum, agile teams, visionary Product Owners, cloud and design thinking – and not only in the innovation department.

Your contribution:
- Active and independent development in the data management environment for storing our corporate data, in collaboration with our internal stakeholders
- Further development and optimization of our content management platform (FileNet P8) as well as existing software components (u.

Senior Backend Software Engineer (with Ops/Data Experience) at GoEuro Travel GmbH (Berlin, Germany)

Do you not only love to code but also want to see your work make the lives of millions of people easier? Then keep reading! You will join GoEuro's Marketing Growth team, where you will work on a variety of projects aimed at attracting users and making them travel with GoEuro for the first, second and hundredth time. To achieve this vision, you will design and build backend systems and contribute to frontend projects such as our Landing Pages System, Ads System and Analytics Data Pipelines, just to name a few.

Senior Data Engineer at PriceHubble AG (Zürich, Switzerland)

Your role: Data engineers are the central productive force of PriceHubble. As a data engineer, your mission will be to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide variety of new datasets, build new datasets, and extract and create new features. These features and insights are either used directly in our product or serve as signals in our machine learning algorithms.

(Senior) Software Engineer (Golang) | Data Science Platform at Contiamo (Berlin, Germany)

The challenge: Building services around a containerized data science environment. Our mission is to combine sophisticated data science with a great user experience. Our flexible data science environment enables businesses to create interactive, data-driven decision tools and automations. This environment gives data scientists a flexible, consistent, and simple collaborative tool based on Docker containers orchestrated in Kubernetes. Use cases can be implemented in a large variety of languages (Python, R, Go, Scala) and are deployed as stateless functions that can easily be composed into powerful data pipelines.

Machine Learning Engineer - Python at WeQ Global Tech GmbH (Berlin, Germany)

We’re looking for a talented Python Engineer to help us improve our recommendation platform. You will be part of an international team that operates a recommendation engine processing more than a billion requests per month with single-digit average latency, and that develops distributed systems which learn and recommend, in real time, based on billions of events per day. You will work closely with our data engineers and ML scientists to implement, evaluate and improve algorithms.

Big Data Engineer (m/f/d) who is NOT looking for the next Gig! at GfK (Nürnberg, Deutschland)

Market research is the original data-driven business. Incubated and spun off from a university, we have earned the trust of the world's biggest companies and leading brands for more than 80 years. Today, everything at GfK starts and ends with Data and Science. We are proud of our heritage – and our future: currently we're on a transformational journey from a traditional market research company to a trusted provider of prescriptive data analytics powered by cutting-edge technology.

Big Data Architect (Berlin / Magdeburg) at Ultra Tendency (Berlin, Deutschland)

If you are an architect who enjoys designing distributed applications for large-scale clusters and strives to create high-quality code, we have just the right challenges for you! Self-motivation, a results-oriented internal drive and a problem-solving mentality would fit in perfectly within our team.

At Ultra Tendency you would:
- Solve Big Data problems for our customers throughout the project life cycle
- Lead a development team and ensure high quality standards and best practices
- Work with our customers to identify requirements and design applications according to their needs
- Document architectural decisions and implement them together with the rest of the development team

Ideally you have:

Big Data Engineer (m/w) -TensorFlow, SQL, AWS at ESG Elektroniksystem- und Logistik-GmbH (München, Deutschland)

For 50 years, ESG has stood for comprehensive expertise in IT and security-related environments. Under the CYOSS brand, we use state-of-the-art Big Data technologies and artificial intelligence methods to help customers from industry and government agencies extract relevant, actionable knowledge from mass data, and thus seize the great opportunities of a connected world. Work with specialists on highly complex projects and develop yourself further with us.

Machine Learning Engineer (m/f/d) with statistical know-how! at GfK (Nürnberg, Deutschland)

Market research is the original data-driven business. Incubated and spun off from a university, we have earned the trust of the world's biggest companies and leading brands for more than 80 years. Today, everything at GfK starts and ends with Data and Science. We are proud of our heritage – and our future: currently we're on a transformational journey from a traditional market research company to a trusted provider of prescriptive data analytics powered by cutting-edge technology.

Data Engineer (m/f) Financial Industry at Solactive AG (Frankfurt am Main, Deutschland)

Our Data Solutions team forms an essential link between our business and infrastructure teams: it defines and implements business processes in data pipelines, monitors existing services and thereby provides the best service to our customers. As a full-stack developer in our Data Solutions team, you will work on a broad set of large-scale, real-world problems that have a direct and significant business impact. Are you willing to challenge our status quo, drive innovation and apply agile best practices?