Your role:
- Design, build, test, and package components to ingest, transform, and store large volumes of streaming data, composable into reliable data pipelines (a minimal sketch follows below)
- Conduct requirements engineering and map requests to a suitable data architecture
- Orchestrate and instrument data pipelines so they are scalable and maintainable
- Maintain the existing code base and take care of automated building, packaging, and deploying
- Evaluate and benchmark technology options; run PoCs and estimate operating costs
- Align with Backend Engineers, define requirements, and request optimizations

Your profile:
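A minimal sketch of one such ingest-transform-store step, assuming a Kafka topic named "events", a local broker, and the kafka-python client; the field names and the print-as-sink are illustrative stand-ins, not the actual pipeline:

import json

from kafka import KafkaConsumer  # pip install kafka-python

# Consume JSON-encoded records from the assumed "events" topic.
consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

def transform(record):
    # Placeholder transformation: keep only the fields downstream steps need.
    return {"id": record["id"], "ts": record["ts"]}

for message in consumer:
    print(transform(message.value))  # a real pipeline would write to the store here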
At KI labs we design and build state-of-the-art software and data products and solutions for major brands in Germany and Europe. We aim to push the status quo of technology in corporations, with special focus on software, data, and culture. Inside, we are a team of software developers, designers, product managers, and data scientists who are passionate about building the products of the future, today. We believe in open source and independent teams, follow Agile practices and the lean startup method, and aim to share this culture with our clients.
Your tasks:
- You develop innovative solutions using methods of modern data management and automated data analysis
- You conduct proofs of concept and implement prototypes for data modelling, management, and evaluation
- You enjoy working with extensive, complex datasets from a variety of technical domains
- You develop solutions to specific data analysis challenges using suitable data management concepts and components
- End-to-end data analyses, including requirements analysis, data preparation, model building, data management, validation, and data storytelling, are also part of your responsibilities
- As a specialist, you are the point of contact for various internal and external stakeholders and work cross-functionally across the entire company

Your profile:
As a Data Engineer you will be working on our large-scale analytical databases and the surrounding ingestion pipeline. Your job will involve constant feature development, performance improvements, and ensuring platform stability. The mission of our analytics team is "data driven decisions at your fingertips". You own and provide the system that all business decisions will be based on. Precision and high-quality results are essential in this role.
You can expect an international team of Developers who are based in Hamburg.
ABOUT THE TEAM
Department: Supply & Demand
Reports to: Engineering Lead
Team size: <10
Recruiter: Almog Greenberg, almog.greenberg@zalando.de
As a Python Backend Engineer in the Pricing & Forecasting team, you'll bring the engineering perspective into our data science teams. You will build the required microservices that serve large volumes of data from the models, as well as the pipeline for machine learning. You will challenge our status quo, drive innovation, and apply agile best practices.
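A hypothetical sketch of such a microservice, here with FastAPI; the endpoint path, payload shape, and the stand-in scoring logic are all assumptions, not the team's actual API:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ForecastRequest(BaseModel):
    sku: str
    features: list[float]

@app.post("/forecast")
def forecast(req: ForecastRequest):
    # A real service would run the trained model here; averaging is a stand-in.
    score = sum(req.features) / max(len(req.features), 1)
    return {"sku": req.sku, "forecast": score}

Served with, for example, uvicorn: uvicorn service:app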
WHERE YOUR EXPERTISE IS NEEDED
Your role
Data engineers are the central productive force of PriceHubble. As a Lead Data Engineer, your mission will be to lead our data engineers across our three offices and to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide range of new datasets of all sorts, build new datasets, and extract and create new features.
DO YOU HAVE A FEEL FOR THE LATEST TRENDS AND DO YOU RELY ON THE LATEST TECHNOLOGIES WHEN PROGRAMMING? ARE YOU EXCITED BY GOOGLE'S INNOVATIVE POWER AND DO YOU WANT TO USE IT TO BUILD OUR CUSTOMERS' SUCCESS?
Like us, do you value excellent quality alongside the latest technologies, and do you therefore know the advantages of using the Atlassian product suite to safeguard high-quality code? Would you like to take the chance to bring sophisticated software to life with us?
As a Software Engineer Java & Machine Learning (m/f/d) at one of our locations in Düsseldorf or Berlin, you will work with more than 380 other digital specialists to build innovative applications for our customers.
About SumUp
SumUp is a successful and fast-growing company that operates in many countries, empowering merchants to accept card payments in an easy and convenient way and awakening the entrepreneur within anyone. At the beginning of 2018 we were named the fastest-growing company in Europe, and we won't stop there. We already operate in over 30 countries, with new countries added every year on our path to becoming the first global card-acceptance company in the world.
Your Tasks – Paint the world green
- Holistic cloud-based infrastructure automation
- Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink, and Spark (a minimal Spark sketch follows the profile list below)
- Microservice platforms based on Docker
- Development infrastructure and QA automation
- Continuous Integration/Delivery/Deployment

Your Profile – Ready to hop on board
- Experience in building and operating complex infrastructure
- Expert level: Linux, system administration
- Experience with cloud services; expert level with either AWS or GCP
- Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift
- Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"
- Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer
- Experience with provisioning/configuration management tools (Ansible, Chef, Puppet, Salt)
- Experience designing, building, and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK
- At least basic knowledge of designing and implementing Service Level Agreements
- Solid knowledge of network and general security engineering
- At least basic experience with systems and approaches for test, build, and deployment automation (CI/CD): Jenkins, TravisCI, Bamboo
- At least basic hands-on DBA experience, including data backup and recovery
- Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

Link: https://www.
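As a small illustration of the streaming side mentioned in the tasks above, here is a minimal Spark Structured Streaming job reading from Kafka; the broker address and topic name are assumptions, and the Kafka connector package must be available on the Spark classpath:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("clickstream-demo").getOrCreate()

# Subscribe to the assumed "clicks" topic on a local broker.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "clicks")
    .load()
)

# Kafka delivers binary key/value columns; cast the payload before processing.
events = stream.selectExpr("CAST(value AS STRING) AS payload")

# Write to the console for demonstration; a real job would target a proper sink.
query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()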
OUTFITTERY is Europe's largest personal shopping service for men. We know that shopping isn't a pleasure for every man. This is why we have set ourselves a clear goal: a world where men have time for the important things in life and are still well dressed. Would you like to work on a variety of tasks, take on responsibilities, and gain deep insights into an e-commerce company, all in a diverse and international team?
The product: A containerized data science environment
Our ambition is to create a platform that gives data scientists a flexible, consistent, and simple environment based on Docker containers, where their code can be written in a large variety of languages (Python, R, Go, Scala). This tool then turns their code into stateless functions that can be easily deployed into powerful data pipelines.
The stack
Kubernetes, OpenFaaS, Docker
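To make the "code into stateless functions" idea concrete, here is a hypothetical handler following the shape of an OpenFaaS Python function; the JSON payload format is an assumption:

import json

def handle(req):
    # Receive a JSON payload, apply a trivial transformation, return JSON.
    data = json.loads(req)
    data["processed"] = True
    return json.dumps(data)

The platform, not the author of the function, takes care of packaging this into a container and wiring it into a pipeline.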
The challenge
Great DevOps engineering support is crucial to guarantee that our microservice-based platform runs smoothly and reliably, no matter where it is deployed (we support both cloud and on-premise deployments).
Data is core to everything we do at PriceHubble. We rely on a wide variety of data from multiple sources. As a data scraper, your mission will be to source, capture, and extract new datasets that help us develop cutting-edge valuation and forecasting tools for the real estate market.
Responsibilities:
- Identify and analyse new data sources
- Access new data with whatever strategy is suitable, e.g. write web-scraping and parsing scripts, … (a minimal sketch follows this list)
- Create new datasets and integrate them in top shape into the data pipeline
- Define and implement data acquisition strategies

Requirements:
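A minimal scraping-and-parsing sketch as referenced above, assuming a hypothetical listings page with one <div class="listing"> element per property; the URL and selector are illustrative, and a production scraper would add rate limiting, retries, and politeness checks:

import requests
from bs4 import BeautifulSoup

response = requests.get("https://example.com/listings", timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
rows = [
    {"title": div.get_text(strip=True)}
    for div in soup.find_all("div", class_="listing")
]
print(rows)  # a real script would feed these records into the data pipeline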
As a data engineer, your mission will be to build and maintain our extract-transform-load infrastructure, which consumes raw data and transforms it into valuable real estate insights. Your daily challenges will be to mine a wide range of new datasets of all sorts, build new datasets, and extract and create new features. These features and insights are either used directly as part of our product or as a signal in our machine learning algorithms.
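A minimal sketch of one such transform step, deriving a feature from raw listings with pandas; the column names and the price-per-square-metre feature are illustrative assumptions:

import pandas as pd

# Stand-in for raw listing data arriving from an extract step.
raw = pd.DataFrame(
    {"price_eur": [450000, 620000], "living_area_sqm": [85.0, 110.0]}
)

# Derive a feature usable directly in the product or as a model signal.
transformed = raw.assign(
    price_per_sqm=lambda df: df["price_eur"] / df["living_area_sqm"]
)
print(transformed)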
About us:
"Süddeutsche Zeitung Digitale Medien" is expanding its team. Work with us on the digital future of the newspaper!
As a subsidiary of the Süddeutscher Verlag, Süddeutsche Zeitung Digitale Medien GmbH is the digital creative hub of Germany's largest national quality daily newspaper. Many dedicated minds here keep developing SZ.de in the browser and as an app. The digital edition of the SZ with all its special publications, SZ-Magazin.de, and jetzt.de are also created here. With services such as newsletters, messengers, chatbots, and content for all the senses, such as DasReze.
Lead your own development team and our customers to success! Ultra Tendency is looking for someone who impresses not only through excellent code, but also through a strong presence and leadership.
At Ultra Tendency you would:
- Work in our office in Berlin/Magdeburg and on-site at our customers' offices
- Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)
- Develop outstanding Big Data applications following the latest trends and methodologies
- Be a role model and strong leader for your team and oversee the big picture
- Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders

Ideally you have:
Fashion and lifestyle, 5,500 employees, 11 department stores, 1 e-shop, 1,000 brands, 20 restaurants & confectioneries, 15 first-class services, and always a special shopping experience – that is Breuninger.
Tasks
The Data Platform Services team is responsible for operating and further developing the Data Platform at Breuninger. The platform is the working foundation for analytical processes that produce company-relevant KPIs and present them in dashboards. In addition, data is provided to operational systems, and new data feeds (internal as well as external) are regularly connected, improving the accuracy of the insights.
Do you love large data volumes and are you enthusiastic about the technology-driven optimisation of shop logic? Then you are in the right place as a Machine Learning Engineer (f/m/d) at real.digital.
Your tasks – this is what awaits you in detail
- You will cooperate closely with our data engineers and business stakeholders to realise machine learning projects from initiation to completion
- You will choose emerging technologies and approaches to create high-performance machine learning processing solutions and other data-driven applications that scale (a minimal sketch follows this list)
- Implementation and development of data pipelines, algorithms, and data stores for a pioneering cloud-based big data application
- Challenging our status quo and helping us define best practices for how we work

Your profile – this is what we expect from you
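A minimal sketch of the kind of machine learning processing step described in the list above, using a standard scikit-learn pipeline; the toy features, labels, and model choice are illustrative assumptions:

from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for features produced by the data pipeline.
X = [[0.1, 1.0], [0.4, 0.2], [0.8, 0.9], [0.9, 0.1]]
y = [0, 0, 1, 1]

# Scaling plus a simple classifier, chained so the whole flow is one object.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.predict([[0.5, 0.5]]))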