The opportunity
Zalando is transforming from Europe’s leading e-commerce company into a multi-service platform for fashion. From logistics to big brands to manufacturers, we’re building the platform that connects all people and parts of the fashion ecosystem.
As a Data Platform Architect at Zalando, you will be responsible for architecting, building and scaling one of the largest big data platforms in e-commerce. You will develop big data solutions, services, and messaging frameworks to help us continuously process our data faster and more efficiently. You will challenge our status quo and help us define best practices for how we work. You will also have the freedom to launch your own open source projects, contribute to others’ projects, build an internal community around your interests, and strengthen your personal brand, while receiving meaningful support at every step.
You will work in Zalando's Data Services department, supporting the teams responsible for defining our Big Data strategy. Among the services you will help evolve are the Data Lake and Nakadi. The Data Lake is a platform that collects all the data and business events generated within Zalando, in both near-real-time and batch fashion, archives them, and offers transformation functionality and data access used to make data-driven decisions. Nakadi is an Event Bus (open sourced by Zalando) that creates an abstraction layer on top of Apache Kafka (and other distributed streaming platforms), providing REST interfaces and features such as schema validation, schema evolution, authentication and authorization.
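To make the event-bus idea concrete, here is a minimal, illustrative sketch (not Zalando's actual client code) of what a Nakadi-style business event looks like and how the bus's server-side schema validation might reject a malformed payload. The `metadata` block with `eid` and `occurred_at` follows Nakadi's published event format; the validator itself is a simplified stand-in for illustration only.

```python
import uuid
from datetime import datetime, timezone

def make_order_event(order_number: str) -> dict:
    """Build a Nakadi-style business event: the business payload plus a
    'metadata' block (eid + occurred_at) required on every event."""
    return {
        "metadata": {
            "eid": str(uuid.uuid4()),
            "occurred_at": datetime.now(timezone.utc).isoformat(),
        },
        "order_number": order_number,
    }

# Simplified stand-in for the bus's schema validation: check that the
# required top-level fields and the event id are present before accepting.
REQUIRED_FIELDS = {"metadata", "order_number"}

def validate(event: dict) -> bool:
    return REQUIRED_FIELDS.issubset(event) and "eid" in event["metadata"]

print(validate(make_order_event("ORDER-42")))      # well-formed event passes
print(validate({"order_number": "ORDER-42"}))      # missing metadata fails
```

In the real system, events in this shape would be POSTed to the bus over its REST interface, and validation would be driven by the JSON schema registered for the event type rather than a hard-coded field list.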
What we are looking for
- Experience as a data architect or senior engineer at large-scale companies in the internet domain, such as e-commerce, software as a service or platform as a service (SaaS, PaaS), or similar
- Demonstrated competence and experience working on time-critical, mass data-processing, parallel data-processing and database initiatives
- Deep knowledge of big data technologies such as Spark, Flink, Presto, Impala, Hadoop / MapReduce and Google BigQuery, as well as scalable pub/sub event message queues (e.g. Kafka, Kinesis)
- Solid knowledge of data structures and applied data mining and machine learning techniques
- Knowledge of microservice-based SOA and service communication using RESTful APIs, GraphQL, gRPC, Avro or other techniques
- Fluency in at least one programming language, such as Java, Scala or Python
- Excellent analytical and critical-thinking skills and the ability to balance technical trade-offs
- Strong communication skills with a range of stakeholders, including architecture documentation and technical presentations
- English language fluency
Your responsibilities
- Design and build a cloud-based mass data-processing and log data-processing architecture that will supplement our existing analytics and data warehouse (DWH) architecture
- Align multiple teams to apply new big data solutions and provide support as an architect and a peer review partner
- Help evaluate and push for the adoption of technologies that are best suited for specific projects
- Share your knowledge via documentation, coaching, code reviews, articles and tech talks
- Demonstrate excellent communication skills and act as a liaison between your team and others
You benefit from
Culture. A culture of trust and empowerment, open source commitment, meetups, game nights, 70+ internal technical and fun guilds, tech talks, product demos, CoderDojos, parties & events.
Perks. Competitive salary, 40% Zalando shopping discount, discounts from external partners, public transport discounts, relocation assistance for internationals, free drinks & fruits, hardware of your choice.
Development. Tour of Mastery, extensive onboarding, personal branding support, opportunity to attend and speak at conferences.
Work Environment. Self-organized, autonomous teams and flexible working hours.
Want to join us? Then go ahead and apply!
If you need guidance or have any questions about our hiring processes, please contact Taryn Bonugli.