Posts List

Senior Engineer (Python/Java) - Machine Learning - Up to 85,000 at Outfittery (Berlin, Germany)

OUTFITTERY is Europe's largest personal shopping service for men. We know that shopping isn't a pleasure for every man, which is why we set ourselves a clear goal: a world where men have time for the important things in life and are still well dressed. Would you like to work on a variety of tasks, take on responsibility and gain deep insight into an e-commerce company, all in a diverse and international team?

Site Reliability Engineer | Data Science Platform at Contiamo (Berlin, Germany)

The product: a containerized data science environment. Our ambition is to create a platform that gives data scientists a flexible, consistent, and simple environment based on Docker containers, where their code can be written in a large variety of languages (Python, R, Go, Scala). This tool then turns their code into stateless functions that can be easily deployed into powerful data pipelines.
The stack: Kubernetes, OpenFaaS, Docker.
The challenge: Great DevOps engineering support is crucial to guarantee that our microservice-based platform runs smoothly and reliably, no matter where it is deployed (we support cloud and on-premise deployments).
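As a sketch of the stateless-function model this posting describes: the `handle(req)` signature below follows the convention used by OpenFaaS Python function templates, while the scaling logic itself is purely illustrative, not Contiamo's actual code.

```python
import json

def handle(req: str) -> str:
    """Stateless function: parse a JSON payload, min-max scale one value,
    return JSON. No state survives between invocations, which is what
    lets such functions be deployed and composed into pipelines freely."""
    payload = json.loads(req)
    value = payload["value"]
    lo, hi = payload.get("min", 0.0), payload.get("max", 1.0)
    scaled = (value - lo) / (hi - lo) if hi != lo else 0.0
    return json.dumps({"scaled": scaled})
```

Because the function carries no state, the platform can run any number of replicas behind a queue or HTTP gateway and chain their outputs into the next pipeline stage.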

Data Engineer (Paris, Berlin and/or Zurich) at PriceHubble AG (Zürich, Switzerland)

As a data engineer, your mission will be to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide variety of new datasets of all sorts, build new datasets, and extract and create new features. These features and insights are used either directly as part of our product or as signals in our machine learning algorithms.
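A minimal sketch of the transform step described above. The field names (`area_sqm`, `price`, `rooms`, `balcony`) are hypothetical; PriceHubble's actual schema is not public.

```python
def transform(raw_listing: dict) -> dict:
    """Turn one raw real-estate record into model-ready features.
    Field names are illustrative, not PriceHubble's actual schema."""
    area = float(raw_listing["area_sqm"])
    price = float(raw_listing["price"])
    return {
        "price_per_sqm": round(price / area, 2),
        "rooms": int(raw_listing.get("rooms", 0)),
        "has_balcony": raw_listing.get("balcony", "no") == "yes",
    }
```

For example, `transform({"area_sqm": "80", "price": "400000", "rooms": 3, "balcony": "yes"})` yields a price per square metre of 5000.0 alongside the room count and balcony flag, ready to be used as features.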

Data Engineer (m/f) for Open-Source Enterprise Data Stack at Alvary (Hannover, Germany)

Your role:
- Design, build, test and package components to ingest, transform and store large volumes of streaming data that are composable into reliable data pipelines
- Conduct requirements engineering and map requests to a suitable data architecture
- Orchestrate and instrument data pipelines so they are scalable and maintainable
- Maintain the existing code base and take care of automated building, packaging and deploying
- Evaluate and benchmark technology options; run PoCs and estimate operations cost
- Align with Backend Engineers, define requirements and request optimizations

Your profile:

Data Scientist (m/f/d) Data Management and System Software at Rohde & Schwarz (Munich, Germany)

Your tasks:
- You develop innovative solutions using modern data management methods and automated data analysis
- You carry out proofs of concept and implement prototypes for data modeling, management and evaluation
- You enjoy working with large, complex datasets from various technical domains
- You develop solutions to specific data analysis challenges using suitable data management concepts and components
- End-to-end data analyses, including requirements analysis, data preparation, modeling, data management, validation and data storytelling, are also part of your tasks
- As a specialist, you are the point of contact for various internal and external stakeholders and work cross-functionally across the entire company

Your profile:

Data Engineer (m/f/d) - Analytical Databases at Smaato (Hamburg, Germany)

As a Data Engineer you will be working on our large-scale analytical databases and the surrounding ingestion pipeline. Your job will involve constant feature development, performance improvements and ensuring platform stability. The mission of our analytics team is "data-driven decisions at your fingertips". You own and provide the system that all business decisions will be based on. Precision and high-quality results are essential in this role. You can expect an international team of developers based in Hamburg.

Python Backend Engineer - Machine Learning - Pricing & Forecasting at Zalando SE (Berlin, Germany)

ABOUT THE TEAM
Department: Supply & Demand
Reports to: Engineering Lead
Team size: <10
Recruiter: Almog Greenberg, almog.greenberg@zalando.de

As a Python Backend Engineer in the Pricing & Forecasting team, you'll bring the engineering perspective into our data science teams. You will build the microservices that serve large data from the models and the pipeline for machine learning. You will challenge our status quo, drive innovation and apply agile best practices.

WHERE YOUR EXPERTISE IS NEEDED

Lead Data Engineer at PriceHubble AG (Zürich, Switzerland)

Your role: Data engineers are the central productive force of PriceHubble. As Lead Data Engineer, your mission will be to lead our data engineers across our three offices and to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide variety of new datasets of all sorts, build new datasets, and extract and create new features.

Software Engineer Java & Machine Learning (m/f/d) at TWT Interactive GmbH (Düsseldorf, Germany)

DO YOU HAVE A FEEL FOR THE LATEST TRENDS AND RELY ON THE LATEST TECHNOLOGIES WHEN PROGRAMMING? ARE YOU EXCITED BY GOOGLE'S INNOVATIVE POWER AND DO YOU WANT TO USE IT TO DRIVE OUR CLIENTS' SUCCESS? Like us, do you value excellent quality alongside the latest technologies, and do you therefore appreciate the advantages of using the Atlassian product range to ensure high-quality code? Would you like to take the opportunity to bring demanding software to life with us? As a Software Engineer Java & Machine Learning (m/f/d) at one of our locations in Düsseldorf or Berlin, you will work with over 380 other digital specialists to build innovative applications for our clients.

Data Engineer (m/f/d) for the creative hub of the Süddeutsche Zeitung at Süddeutsche Zeitung Digitale Medien GmbH (Munich, Germany)

About us: "Süddeutsche Zeitung Digitale Medien" is expanding its team. Work with us on the digital future of the newspaper! As a subsidiary of the Süddeutscher Verlag, Süddeutsche Zeitung Digitale Medien GmbH is the digital creative hub of Germany's largest national quality daily newspaper. Many dedicated minds here continue to develop SZ.de in the browser and as an app. The digital edition of the SZ with all its special publications, SZ-Magazin.de and jetzt.de are also created here. With services such as newsletters, messengers, chatbots and content for all the senses, such as DasReze.

Data Infrastructure Engineer (m/f/d) at FlixBus (Berlin, Germany)

Your Tasks – Paint the world green
- Holistic cloud-based infrastructure automation
- Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark
- Microservice platforms based on Docker
- Development infrastructure and QA automation
- Continuous Integration/Delivery/Deployment

Your Profile – Ready to hop on board
- Experience in building and operating complex infrastructure
- Expert level: Linux, system administration
- Experience with cloud services; expert level with either AWS or GCP
- Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift
- Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"
- Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer
- Experience with provisioning/configuration management tools (Ansible, Chef, Puppet, Salt)
- Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK
- At least basic knowledge in designing and implementing Service Level Agreements
- Solid knowledge of network and general security engineering
- At least basic experience with systems and approaches for test, build and deployment automation (CI/CD): Jenkins, TravisCI, Bamboo
- At least basic hands-on DBA experience, experience with data backup and recovery
- Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

Link: https://www.

Senior Data Engineer at Ultra Tendency (Berlin, Deutschland)

Lead your own development team and our customers to success! Ultra Tendency is looking for someone who convinces not just by writing excellent code, but also through strong presence and leadership. At Ultra Tendency you would:
- Work in our office in Berlin/Magdeburg and on-site at our customers' offices
- Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)
- Develop outstanding Big Data applications following the latest trends and methodologies
- Be a role model and strong leader for your team and oversee the big picture
- Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders

Ideally you have:

Lead Software Engineer - Big Data at Ultra Tendency (Berlin, Deutschland)

Lead your own development team and our customers to success! Ultra Tendency is looking for someone who convinces not just by writing excellent code, but also through strong presence and leadership. At Ultra Tendency you would:
- Work in our office in Berlin/Magdeburg and on-site at our customers' offices
- Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)
- Develop outstanding Big Data applications following the latest trends and methodologies
- Be a role model and strong leader for your team and oversee the big picture
- Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders

Ideally you have:

Data Engineer Cloud (m/f/d) at Breuninger GmbH & Co. (Stuttgart, Germany)

Fashion and lifestyle, 5,500 employees, 11 department stores, 1 e-shop, 1,000 brands, 20 restaurants & confectioneries, 15 first-class services and always a special shopping experience: that is Breuninger. Tasks: The Data Platform Services team is responsible for operating and further developing the Data Platform at Breuninger. The platform is the working basis for analytical processes that produce company-relevant KPIs and present them in dashboards. It also provides data to operational systems and regularly connects new data feeds (internal and external) that improve the accuracy of the insights.

Machine Learning Engineer (f/m/d) - Cologne at real.digital (Cologne, Germany)

You love large data volumes and are enthusiastic about the technology-driven optimisation of shop logic? Then you are in the right place as a Machine Learning Engineer (f/m/d) at real.digital.

Your tasks – this is what awaits you in detail
- You will closely cooperate with our data engineers and business stakeholders to realise machine learning projects from initiation to end
- You will choose emerging technologies and approaches to create high-performance machine learning processing solutions and other data-driven applications that scale
- Implementation and development of data pipelines, algorithms, and data stores for a pioneering cloud-based big data application
- Challenging our status quo and helping us define best practices for how we work

Your profile – this is what we expect from you

Data Engineer at Alvary (Hannover, Germany)

Your role:
- Design, build, test and package components to ingest, transform and store large volumes of streaming data that are composable into reliable data pipelines
- Conduct requirements engineering and map requests to a suitable data architecture
- Orchestrate and instrument data pipelines so they are scalable and maintainable
- Maintain the existing code base and take care of automated building, packaging and deploying
- Evaluate and benchmark technology options; run PoCs and estimate operations cost
- Align with Backend Engineers, define requirements and request optimizations

Your profile:

Senior Data Engineer (m/f) at SumUp (Berlin, Germany)

About SumUp: SumUp is a successful and fast-growing company that operates in many countries, empowering merchants to accept card payments in an easy and convenient way and awakening the entrepreneur within anyone. At the beginning of 2018 we were named the fastest-growing company in Europe, and we won't stop there. We already operate in over 30 countries, with new countries added every year on our path to becoming the first global card acceptance company in the world.

Data Engineer at KI labs GmbH (München, Germany)

For KI labs we are looking for data engineers at mid-to-senior levels of experience (and also working students) to join our rapidly growing data team in Munich as part-time or full-time employees.

Your Responsibilities:
- Design, build, and scale data products, and deploy them on public and private clouds, e.g. Amazon AWS, Google Cloud or Microsoft Azure
- Enable and facilitate a data-driven culture for internal and external clients, and use advanced data pipelines to generate insight on where to go
- Support Cloud Engineers and Data Scientists with architecting and orchestrating data infrastructures
- Support Software Development teams with data, e.

Senior Data Engineer at PriceHubble AG (Zürich, Switzerland)

Your role: Data engineers are the central productive force of PriceHubble. As a data engineer, your mission will be to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide variety of new datasets of all sorts, build new datasets, and extract and create new features. These features and insights are used either directly as part of our product or as signals in our machine learning algorithms.

(Senior) Software Engineer (Golang) | Data Science Platform at Contiamo (Berlin, Germany)

The challenge: building services around a containerized data science environment. Our mission is to combine sophisticated data science with a great user experience. Our flexible data science environment enables businesses to create interactive, data-driven decision tools and automations. This environment gives data scientists a flexible, consistent, and simple collaborative tool based on Docker containers orchestrated in Kubernetes. Use cases can be implemented in a large variety of languages (Python, R, Go, Scala) and are deployed as stateless functions that can easily be composed into powerful data pipelines.