As a Machine Learning Engineer (m/f) at plista you will work on optimizing our self-developed recommender technology, which publishers around the world use to recommend news articles and ads to their users. This includes analyzing large-scale datasets and creating and implementing recommendation algorithms. You will work with a modern stack of technologies such as Apache Spark and AWS to structure large data volumes.
We are seeking a data engineer who will work with our ambitious team and take our data infrastructure to the next level to help make our customers successful. If you are passionate about empowering people in making data-driven decisions, then we’re eager to get to know you.
You have the option to start immediately in our Hamburg office. This is a full-time, permanent position.
Responsibilities
Shape and execute a mid- and long-term technological vision for data engineering with the team
Provide data to our stakeholders reliably and efficiently by keeping the data warehouse and all incoming and outgoing data streams running and up to date
Help improve our product by finding and implementing new ways of using data
Research and implement a new and advanced tech stack
Build and improve software solutions that support Data Analysts and other teams in their daily work
Define processes and contracts for providing clean and unified data
Requirements
About the job
The Big Data Developer works on our SaaS platform, and brings passionate inquisitiveness, primary research, and forward thinking to every assignment. Through shared responsibility for all team deliverables, and communication with Product Owners as well as other stakeholders within the company, the Big Data Developer builds software to pass automated acceptance tests and to deliver sprint commitments.
You can expect a very international team of Developers who are based in Hamburg.
The Senior Big Data Architect will be responsible for strengthening and further extending our data platform, which serves as a basis for analysis that drives decision making in the company.
GameDuell is one of the leading casual games companies in Germany, located in the heart of Berlin. With more than 130 million users, we are constantly looking for ways to improve our product and our players' experience. In order to fulfill our mission of bringing people together to have a good time with games, we follow a solid engineering approach.
To support our ambitious growth, we are now looking for Data Engineers (m/f) in Munich starting as soon as possible.
Your Tasks – Paint the world green
You are in charge of building a data platform consisting of several services (messaging, streaming, persistence, etc.) that can be used by our product development teams in a self-service way
You consult and support the data platform users in creating highly performant data-driven analyses and applications
Your Profile – Ready to hop on board
Data Engineer (f/m)
WE ARE LENDICO:
Lendico is a modern FinTech company focusing on the intermediation of company loans via an online lending marketplace. On its platforms, Lendico connects companies and investors and offers a fast, digital alternative to traditional banks. The loan process is completely online and requires no branch network. With the most modern technologies and innovative processes, Lendico aims to minimize the costs and handling effort associated with company financing.
data Artisans is currently building a new team of Solution Architects in Europe and the US. You’ll be part of a new and fast-growing team that helps customers have a great experience using our products and Apache Flink. The role sits at the forefront of one of the most significant paradigm shifts in information processing and real-time architectures in recent history - stream processing - which sets the foundation to transform companies and industries for the on-demand services era.
What the job offers:
We want to significantly increase the usability and impact of our data platform for real-time and batch processing across the group, and we need your expertise to do it!
Together with your colleagues, you will take responsibility for the design and implementation of our new, highly performant and robust cloud data platform for big, fast & smart data.
You will be responsible for processing and preparing data along its entire lifecycle in our stream-processing data pipelines.
You will work with a wide variety of datasets, ensure high consistency and availability, and create new insights into our customers.
Our IT team focuses on steadily improving our smava ecosystem of personal credit solutions. We continuously develop our online loan comparison platform and implement new product features. The whole application system consists of more than 50 individual services. Now, in the continuous process of achieving our strategic goals, we are looking for a Senior Java Big Data Engineer (m/f) who will redesign our data models and take care of the efficient use of the database.
Shape the digital future together with us and help ensure that innovations are safe! With 24,000 bright minds worldwide, TÜV SÜD has accompanied high-tech developments and innovations for 150 years: from steam boilers to autonomous driving. With your expert knowledge, you will strengthen TÜV SÜD in developing existing and new business areas. We look forward to receiving your application for our Munich location as
Data Engineer (f/m) in the Data Analytics division
Tasks
Design and implementation (rapid prototyping) of scalable architectures for state-of-the-art big data use cases
Implementation of complex ETL/ELT processes based on current big data technologies
Exchange with project teams of internal and external customers on requirements analysis
Use-case-driven, agile collaboration in small teams with domain experts, data scientists, and data architects
Performance optimization and scalability analysis for near-real-time and batch analytics use cases
Definition and technical implementation of measures to ensure data quality across the entire data lifecycle
Responsibility for the configuration, management, and operation of the rapid prototyping environments, including the database and storage environments
Following trends in the analytics and big data space and contributing to TÜV SÜD's strategic initiatives
Qualifications
The Opportunity
Zalando is transforming from Europe’s leading e-commerce company into a multi-service platform for fashion. From logistics to big brands to manufacturers, we’re building the platform that connects all people and parts of the fashion ecosystem.
You will work in the Data Services department at Zalando, in the team responsible for defining our Data Lake. The Data Lake is a platform that aims to collect all the data and business events generated within Zalando, in both near-real-time and batch fashion, archive them, and offer transformation functionality and data access that will be used to make data-driven decisions.
We are currently looking for a motivated and experienced Data Infrastructure Lead (m/f).
You will lead a growing Data Engineering Team at mytaxi that works with an extensive variety of data systems to expand and develop an analytical platform that enables business users and data scientists to make data-driven decisions, build innovative data products and roll out advanced analytics.
The Data Engineering Team is committed to developing a modern data infrastructure at mytaxi based on a Hadoop Data Warehouse.
Data stream processing is redefining what’s possible in the world of data-driven applications and services. Apache Flink is one of the systems at the forefront of this development, pushing the boundaries of what can be achieved with data stream processing.
Apache Flink currently powers some of the largest data stream processing pipelines in the world, with users such as Alibaba, Uber, ING, Netflix, and more running Flink in production. Flink is also one of the most active and fastest-growing open source projects in the Apache Software Foundation.
The opportunity
Zalando is transforming from Europe’s leading e-commerce company into a multi-service platform for fashion. From logistics to big brands to manufacturers, we’re building the platform that connects all people and parts of the fashion ecosystem.
As a Data Platform Architect at Zalando, you will be responsible for building, scaling and architecting one of the largest big data platforms in e-commerce. You will develop big data solutions, services, and messaging frameworks to help us continuously process our data faster and more efficiently.
About HERE
HERE is a leader in navigation, mapping and location experiences, and our technology is used in many places. If you've ever checked in on Facebook, you've used HERE. Or, if you have in-car navigation, you've probably had our help getting where you're going: over two-thirds of the world's cars, from Ford to Ferrari, use our software. And every time you receive a package from Amazon or FedEx, we've helped guide it to your door.
Tasks:
The core processes of our business divisions generate large data volumes, which our “Big Data, Machine Learning, Artificial Intelligence” department uses to optimise processes, increase product quality, and develop new products and services. As a Machine Learning Engineer, you will process data and provide it to end-to-end projects and use cases. To this end, you will specify requirements from use cases in collaboration with the specialist departments and data scientists, and then develop suitable IT solutions together with other partners in agile projects.
Responsibilities include but are not limited to designing, developing and supporting big data systems and machine learning models; creating statistical models, analyzing and processing various kinds of data. Applicants are also expected to participate in after-hours work.
All candidates will have
a Bachelor’s or higher degree in a technical field of study
a minimum of two years’ experience designing, developing and supporting big data systems, machine learning models and end-to-end data pipelines
excellent knowledge of at least one modern programming language, such as Go, Java, C++, Python or Scala
experience with big data technologies, such as Kafka, Spark, Storm, Flink and Cassandra
excellent troubleshooting and creative problem-solving abilities
excellent written and oral communication and interpersonal skills
Ideally, candidates will also have
Challenging IT projects at the cutting edge of technological development
Design and implementation of data platforms, data processing pipelines, and data services as highly scalable distributed systems in the cloud, on-premise, or hybrid
Design and implementation of software solutions for working with poly-structured data using Apache Hadoop, Spark, Kafka, and SQL and NoSQL databases
Delivery of projects from analysis through to rollout
Quality-driven software development
Use of agile methodologies (in particular Scrum)
Skills & Requirements
You hold a completed degree in computer science, business informatics, or a comparable qualification
You bring professional experience in big data or business intelligence projects or in software development
You are familiar with various big data technologies such as