You will improve the customer experience by building data-driven services based on customer activities.
What you achieve
Customers and Swisscom interact through many channels: from shops to online, from calls to chatbots, from letters to emails.
How can we improve the customer experience and Swisscom processes? Which patterns lead to a purchase or a successful support case? Is the online channel attracting more customers?
You will enable Swisscom to answer these and many other questions by providing insights into the customer journey, a cross-channel collection of customer activities.
What’s next in music happens on SoundCloud first. As the world’s largest open audio platform, SoundCloud is powered by a connected community of creators, listeners and curators who share, discover and influence what’s new, now and next in music and audio.
We’re looking for an experienced data engineer to join our Recommendations team. This cross-functional team is responsible for developing, testing and maintaining efficient pipelines and services that transform raw data into features aimed at helping users discover new music on SoundCloud.
About Nubank Berlin
Nubank Berlin is a satellite engineering office dedicated to serving the horizontal engineering needs of Nubank in Brazil - the leading fintech in Latin America. In Brazil, we have been serving millions of clients with our mobile-controlled credit card product since 2014. With its head office in São Paulo and an engineering office in Berlin, Nubank has raised USD 330 million in investment rounds led by Sequoia Capital, Founders Fund, Tiger Global, Kaszek Ventures, Goldman Sachs, QED Investors and DST Global.
Are you up for a successful “Grown-Up” headquartered in Berlin-Friedrichshain? smava is one of the biggest fintech employers and has received several awards, including the Innovator 2019 prize and the Top Employer 2019 award. Our company grows by 80 percent each year – become part of the smava story now and let us grow together. We are smava – the online credit comparison. We make loans transparent, fair and cheap! Become a part of smava’s Data Infrastructure story now as a:
Become a ‘(Senior) Data Engineer’ (m/f/d) at *um in Frankfurt am Main
“Live a little in the future with us and, together in agile teams, develop individual solutions for our customers”
We are looking for reinforcements for our unbelievable team at “The unbelievable Machine Company”. As experts with hand, brain and heart, we develop precise IT solutions for our customers’ individual challenges. Our specialities are Data Science, Cloud Services and Hosting. We cut complex problems down to size with new ways of thinking and acting and with powerful technologies.
Why just watch when you can help shape things? In our ING Analytics Hub, 15 highly qualified minds work on interdisciplinary analytics projects and turn Fin into Tech. You value international exchange at the highest level, love thinking ahead and outside the box, and enjoy passing on your knowledge productively? Jump on.
Your tasks
Software Development, Machine Learning, Spark & Big Data – these are the topics you will dig into, topics in which you not only consistently follow trends & technologies but also launch them within the team.
We are looking for reinforcements for our exceptional team at “The unbelievable Machine Company.” As experts with hand, brain and heart, we develop precise IT solutions for our customers’ individual challenges. Our specialities are Big Data, Cloud Services and Hosting. We cut complex problems down to size with new ways of thinking and acting and with powerful technologies.
We don’t wear ties, yet we are highly professional. That is why we achieve exceptional results and have been growing continuously for years. If you share our values and bring that “certain something”, we look forward to your application!
We are looking for a motivated Data Engineer (m/f/x) to join our team of enthusiastic system engineers. Our data infrastructure currently handles more than 10,000 requests per second and processes more than 1 TB of data each hour. Your main focus will be harnessing cloud and container technologies, including queueing, streams, distributed systems and data infrastructure.
As an expert in clean code, design patterns and software engineering best practices, you will help your colleagues further build out our data infrastructure.
An exciting task is waiting for you:
As a Junior or Senior Data Engineer (m/f/d) you will have the chance to become part of a fast-growing team at an early stage. You will improve and extend the current Data Platform in AWS that enables other teams to make data-based decisions. You will also build Data Products in cross-functional teams together with Software Engineers and Data Scientists in order to automate operational processes, backed by intelligent algorithms that allow the company to grow in a scalable way.
Your Tasks – Paint the world green
- You are part of an agile, self-organized, cross-functional development team owning the charter domain
- You make sure that the data infrastructure is optimal
- You work closely with business analysts to help them achieve their goals by delivering suitable technical solutions
- You help to establish and maintain efficient data communication between different teams and distributed systems

Your Profile – Ready to hop on board
- Several years of experience as Data Engineer, BI Engineer or Software Engineer
- Advanced knowledge of SQL
- Solid experience with:
  - Data warehouses
  - SQL query tuning
  - ETL / ELT
  - Data modelling
- Experience with technologies like Kafka, Redshift, Power BI and similar is a plus.
Discover new perspectives at Generali Deutschland Informatik Services GmbH as a
Big Data Engineer (m/f/d) – focus on Realtime Engineering / Streaming
(Hamburg, Aachen)
Your role
To strengthen the Health Insurance/Business Intelligence department at our Aachen or Hamburg location, we are looking for you at the earliest possible date as a Big Data Engineer (m/f/d) with a focus on realtime engineering/streaming.
- You are responsible for building and further developing real-time data provisioning within the big data architecture, as part of an agile team
- You conceive, design and develop data provisioning frameworks and jobs for structured and unstructured data
- You contribute as a big data developer (m/f/d) to the development of the Generali Group's data hub (data lake)
- You are responsible for implementing data integration models as jobs, in line with the architecture guidelines
- You translate physical data integration models and other design specifications into source code
- You are responsible for carrying out unit and performance tests
- You not only develop the sources but also operate them (DevOps)
- In this role, you actively advise the customer so that the relevant technologies can be used to optimize business processes
- You continuously refine the strategic big data target picture and implement it
- You contribute to defining and communicating the BI and analytics roadmap and the BI and analytics vision
- You harmonize and coordinate the Generali Group's BI and analytics requirements in the big data context
- You advise projects and customers on big data initiatives and highlight new BI and analytics trends
- You support the design of holistic big data system architectures in projects

Your profile
To support our ambitious growth, we are now looking for a Principal Data Scientist (m/f/d) to join our team in Munich, Germany starting as soon as possible.
Your Tasks – Paint the world green
- You work in our data science incubation team and help us to develop a decentralized data organization that implants quantitative decision making into our company’s DNA
- You work alongside our product development and business unit analytics teams of individuals with diverse backgrounds and skills in analytics and data science
- You will have the opportunity to directly influence strategic decisions by leading organizational level initiatives that drive scale, efficiency, and insight across our organization
- You collaborate with business and technology stakeholders to evaluate data sources, techniques, and tools to support our decentralized data science community
- You evolve FlixBus into a data-driven company by providing leadership and governance to a decentralized, company-wide data organization
- You consult and mentor data scientists, data engineers and data analysts to drive excellence in value generation

Your Profile – Ready to hop on board
Are you looking for a challenging team lead role in an innovative and fast-moving environment?
We’re looking for someone who can drive the Express Booking Product Intelligence domain forward, leading your team to success in its mission: ‘to contribute to educating and enabling profound decision making within our ExpressBooking product engineering team and among advertisers by providing insights from reliable, up-to-date data.’
You will empower a team of enthusiastic talents, ensuring they remain high-performing, collaborative, flexible, proactive and open-minded to upcoming challenges.
Work with us on a future that moves things. For our Currency Management Solutions division, we are looking for you as a
Big Data Engineer (m/f/d)
Your tasks:
- You are responsible for the design, implementation and operation of effective data processing architectures within our innovative microservices- and cloud-based data analytics, IIoT and digital solutions
- Ingestion, integration, organization, batch and stream processing, and lifecycle management of the data
- Ensuring the quality, integrity and privacy of the data
- Setup, monitoring and tuning of the Hadoop clusters and databases – you build it, you run it
- Close collaboration in agile teams with data scientists, development teams and product owners on data modelling, data analysis and technology consulting

Your profile:
- Degree (Master's, university / university of applied sciences) in computer science or a comparable field
- Very good knowledge of data ingestion / integration (Flume, Sqoop, NiFi), data storage (PostgreSQL, MongoDB), distributed storage (Hadoop, Cloudera), messaging (Kafka, MQTT), data processing (Spark, Scala) and scheduling (Oozie, Pig)
- Practical experience in developing and operating large-scale data processing pipelines in scalable microservice / REST architectures
- Experience with cloud environments such as Microsoft Azure or AWS is desirable
- Very good written and spoken German and English

We look forward to your online application at www.
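By way of illustration only (not part of the posting above), here is a minimal sketch of the kind of stream-processing pipeline that profile names: Kafka as the message bus, Spark with Scala for processing, and files in a data lake as the sink. The broker address, topic name and paths are hypothetical placeholders, and the job assumes the spark-sql-kafka connector is on the classpath.

```scala
// Illustrative sketch: a Spark Structured Streaming job that reads events
// from a Kafka topic and appends them to a data lake as Parquet files.
// Broker address, topic name and paths are made-up placeholders.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

object IngestJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("realtime-ingest")
      .getOrCreate()

    // Subscribe to the raw event topic and keep key, payload and timestamp.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "sensor-events")
      .load()
      .select(col("key").cast("string"), col("value").cast("string"), col("timestamp"))

    // Append the stream to the lake; the checkpoint makes the job restartable.
    events.writeStream
      .format("parquet")
      .option("path", "/data/lake/sensor-events")
      .option("checkpointLocation", "/data/checkpoints/sensor-events")
      .start()
      .awaitTermination()
  }
}
```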
Your role:
- Design, build, test and package components to ingest, transform and store large volumes of streaming data that are composable into reliable data pipelines
- Conduct requirements engineering and map requests to a suitable data architecture
- Orchestrate and instrument data pipelines so they are scalable and maintainable
- Maintain the existing code base and take care of automated building, packaging and deployment
- Evaluate and benchmark technology options; run PoCs and estimate operating costs
- Align with Backend Engineers, define requirements and request optimizations

Your profile:
We’re looking for a motivated and driven (Senior) Data Engineer (m/f/d) who will help us shape our team, drive the company to the next level, and have the most direct influence on our success.
Your Tasks – Paint the world green
You will be responsible for building a data platform for running big data workloads at scale, collecting and combining data from various sources, and helping data consumers access and use the data in our data lake.
About the team:
We are responsible for building the data platform. We are highly focused on simplicity and ease of use for all data-oriented users. We are a strong enabler of our data strategy: make the right data easily available to everybody for high-quality decisions.
Our tech stack: Scala, Java, everything with Kafka (Streams, Connect, KSQL, …), Akka, Spark, Flink, Docker, K8s, Play, Slick, AWS services (e.g. EMR, S3), NewRelic, JUnit, ScalaTest, ScalaSpec
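By way of illustration (not taken from the posting), here is a minimal Kafka Streams topology in Scala of the kind this stack supports: read raw events, filter and normalise them, and publish the result for downstream consumers. Topic names and the broker address are hypothetical placeholders; the sketch assumes the kafka-streams-scala DSL (Kafka 2.4+).

```scala
// Illustrative sketch only: a tiny Kafka Streams topology in Scala.
// Topic names and the broker address are hypothetical placeholders.
import java.util.Properties
import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
import org.apache.kafka.streams.scala.ImplicitConversions._
import org.apache.kafka.streams.scala.StreamsBuilder
import org.apache.kafka.streams.scala.serialization.Serdes._

object NormalizerTopology extends App {
  val props = new Properties()
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-normalizer")   // consumer group / state store prefix
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // placeholder broker

  val builder = new StreamsBuilder()

  // Read raw events, drop empty records, normalise the payload,
  // and publish the result for downstream consumers.
  builder
    .stream[String, String]("raw-events")
    .filter((_, value) => value != null && value.nonEmpty)
    .mapValues(_.trim.toLowerCase)
    .to("normalized-events")

  val streams = new KafkaStreams(builder.build(), props)
  streams.start()
  sys.addShutdownHook(streams.close())
}
```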
Teraki is a Berlin-based, tech-driven company enabling true mobility. We stand for innovation in the rapidly developing connected-car, self-driving and 3D-mapping world. Teraki provides data reduction and data processing solutions for automotive (IoT) applications and enables the launch of new applications by reducing hardware footprint, latency and costs. We help our customers with the challenges posed by the exploding amounts of data in connected vehicles, across all sensor, video and 3D mapping data.
About the job
The Big Data Developer works on our SaaS platform, and brings passionate inquisitiveness, primary research, and forward thinking to every assignment. Through shared responsibility for all team deliverables, and communication with Product Owners as well as other stakeholders within the company, the Big Data Developer builds software to pass automated acceptance tests and to deliver sprint commitments.
You can expect a very international team of Developers who are based in Hamburg.