Imagine a workplace which encourages you to take on responsibility and where your ideas will be heard and implemented. Imagine a fast-paced environment where your performance makes the difference. This is TrustYou! We are looking for adventurers to join our smart and inspiring team! TrustYou creates summaries of hotel reviews, with the goal of being as useful as, or more useful than, a summary written by humans. And it’s our data engineers that make this happen!
The BI Platform team at GoEuro is responsible for maintaining the engineering infrastructure and data platform for business intelligence, reporting and analytics, with a vision of making GoEuro a truly data-driven organization. We are searching for a Data Engineer with in-depth skills in distributed data processing and big data technologies and experience in writing complex data pipelines that process terabytes of data per day. As a data engineer, you will get the opportunity to shape the future of data-driven decision making at GoEuro by enabling BI analysts, data scientists, product owners and other stakeholders to draw insights from our data.
We are offering an exciting position as a (Senior) Software Developer (f/m) in the field of reinforcement learning. Calling on your sound knowledge of reinforcement learning, you will independently develop new concepts and algorithms for motion planning and decision-making. You will be responsible for implementation and evaluation in simulations as well as in final testing with real test vehicles. Integration of the algorithms in a distributed software architecture means that much of your work will entail collaborating within a large team of machine learning experts.
We are offering you an exciting role as a software engineer (f/m) in the field of machine learning. Join one of our agile feature teams to develop sophisticated algorithms for autonomous driving. Contribute to the autonomous vehicle being able to orient itself, assess situations and select the most appropriate and comfortable driving strategy – even in urban environments. Use your expertise to participate actively in all development phases, from prototyping in Python, using state-of-the-art machine learning frameworks and ROS, to writing C++ code that runs in the final production vehicles.
To strengthen our international team in Berlin, we are offering a full-time position as Engineering Manager, Data (m/f).

Retailers face unique challenges - we support them with technology-led solutions. Crealytics is a customer-centric organization: our data-driven, proprietary technology aids some of the world’s biggest eCommerce players (including Foot Locker, ASOS, and Harrods). Crealytics’ Product & Technology specialists are key to this. From design and engineering to data science, they work closely with our Digital Marketing and Business Intelligence teams.
About the team: We are team Bus Factor, a cross-functional team of highly motivated data scientists and engineers. We strive to solve highly challenging real-world problems that have an impact (and that are NOT related to marketing!). Our goal is to sell the right ticket at the right time to the right person for the right price to maximize revenue. We hold regular knowledge-sharing sessions for exchanging ideas about the latest technologies and organize bi-monthly HackrDays where we contribute to open source projects and innovative business initiatives.
The position provides the opportunity to work on a wide range of interesting topics, from operationalizing deep learning models to training recommender systems on petabytes of data. As part of the data science team, you will also be given a lot of responsibility to shape the direction of the team. If you would like to become part of this success story, please send your application.

About your new role
- You will be part of the data science team and work closely with our data scientists to operationalize machine learning pipelines
- You will develop and implement effective data processing architectures
- You will also collaborate a lot with the data warehouse and data platform teams
- You will participate in meetups, conferences and the research community and apply what you’ve learned in your daily work

Skills & Requirements
- A deep understanding of distributed computing frameworks such as Spark (particularly SparkML and SparkSQL; tuning, optimizing and debugging Spark jobs), Hadoop and/or Flink
- Experience with big data on AWS, in particular using EMR and S3
- Experience with Docker and container orchestration such as Kubernetes, Swarm or similar
- Experience with pipeline management tools like Airflow, Luigi or NiFi
- Experience with programming languages such as Python, Go and/or Scala
- Good knowledge of SQL/RDBMS
- Experience with the command line, shell scripting and version control (Git)
- Excellent communication skills in English, both oral and written; German is nice to have
- Preferably experience with automated configuration management tools like Terraform and Puppet
- Preferably experience with modern agile software development practices such as microservices, test-driven development, pair programming, CI/CD, etc.
These tasks are waiting for you:
- Develop and implement management processes for the large IoT data volumes generated by users of smart devices
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Optimize and maintain the ETL process (extraction, transformation, loading)
- Collaborate with our company’s IT department to assess technical requirements
- Manage and complete your own projects
- Assist with quality assurance and data maintenance
- Assist with analytics and projections
- Develop workable approaches for action, decision-making and algorithmic sales and purchasing
- Manage and expand our real-time (RT) and near-real-time (NRT) processing systems
- Use standard and new data evaluation methods centered around user behavior regarding compatible IoT devices

Who we are looking for:
About the team: We are responsible for building the data platform. We are highly focused on simplicity and ease of use for all data-oriented users. We are a strong enabler of our data strategy: make the right data easily available to everybody for high-quality decisions. Our tech stack: Scala, Java, everything with Kafka (Streams, Connect, KSQL, …), Akka, Spark, Flink, Docker, K8s, Play, Slick, AWS services (e.g. EMR, S3), NewRelic, JUnit, ScalaTest, ScalaSpec
Are you passionate about food, data and intelligent applications? Delivery Hero is building the next generation global online food-delivery platform, with data at the center of delivering amazing food experiences. We’re a truly global team, working across 45 countries to ensure our customers are able to find, order and receive their favourite food in the fastest way possible. Since we started our journey in 2011, Delivery Hero has become the world’s largest food-delivery network, and we’re focused on a culture of growth, in both size and opportunities.
Develop cutting-edge machine learning applications

Zalando is transforming from Europe’s leading e-commerce company into a multi-service platform for fashion. From logistics to big brands, to manufacturers - we’re building the platform that connects all people and parts of the fashion ecosystem. Forecasting is at the core of our commercial operation. Forecasting tools help to determine how much of each product to buy, to recommend the best prices for these products, and to ensure that we have the right level of logistics capacity to fulfil customer demand.
Are you looking for a job? Then keep looking. We’ve got a mission for you instead. You’re the best at what you do. You don’t just want a job. You’re looking for the ultimate challenge. In a team that knows no limits, because you don’t either. Welcome to Advertima. We combine machine learning, computer vision, and big data to create reactive and personalized experiences in the real world. Imagine a future where you are only confronted with relevant information - always perceived as a positive experience.
Zattoo is creating the future of television, live and on demand. We build apps for mobile, web and big screens such as Apple, Android, Samsung, Amazon Fire TV, Xbox One and many more.

The Role
Data is a core advantage of OTT / unicast TV over traditional broadcast. We know exactly what content our viewers watch, how popular a show is, and how many seconds of each and every ad campaign are viewed. At Zattoo we are already reading a lot of our data, but we don’t want to stop there, as we believe in the power of data.
This is a key role within our Data & Analytics team and gives you the opportunity to build a world-class data landscape for Scout24. It is ideal for someone who has a strong focus on cloud-based data technologies, a good service and DataOps orientation, and loves to build data-driven businesses. You will work with huge amounts of data, cutting-edge technologies and some of the most influential and impactful data projects in Germany.
To support our ambitious growth, we are now looking for a Web Analyst (M/F/X) to join our team in Munich, Germany, starting as soon as possible.

Your Tasks – Paint the world green
- You conduct quantitative evaluations of hypotheses in the product lifecycle prior to implementation
- You take over the measurement responsibilities in the Build-Measure-Learn cycle
- You identify potential optimization opportunities in the customer lifecycle and form predictions
- You work closely with Product Owners, using data to help them identify features that would make their product more user-friendly and effective
- You help validate potential A/B test hypotheses, as well as analyze test results and share them with stakeholders
- You monitor business-relevant KPIs

Your Profile – Ready to hop on board
The product: A containerized data science environment
Our ambition is to create a platform that gives data scientists a flexible, consistent, and simple environment based on Docker containers, where their code can be written in a large variety of languages (Python, R, Go, Scala). This tool then turns their code into stateless functions that can be easily deployed into powerful data pipelines.

The stack: Kubernetes, OpenFaaS, Docker

The challenge
Having great DevOps engineering support is crucial in order to guarantee that our microservice-based platform runs smoothly and reliably, no matter where it is deployed (we support cloud and on-premise deployments).
The challenge: Building services around a containerized data science environment
Our mission is to combine sophisticated data science with a great user experience. Our flexible data science environment enables businesses to create interactive, data-driven decision tools and automations. This environment gives data scientists a flexible, consistent, and simple collaborative tool based on Docker containers orchestrated in Kubernetes. Use cases can be implemented in a large variety of languages (Python, R, Go, Scala) and are deployed as stateless functions that can easily be composed into powerful data pipelines.
We’re looking for experienced engineers to join our team to build elegant, scalable systems that use NoSQL data stores, data warehouses, MapReduce, and streaming solutions to power a whole host of personalized experiences for Yelp’s users and drive optimizations for Yelp’s advertising businesses. If you’re the person who leads their team in replacing an aging system, or dives fearlessly into the guts of a running system to fix that bug everyone else is happy to gloss over, then you’re the one we’re looking for!
Yelp’s data mining engineers are a passionate and diverse group of engineers who can work across disciplines to build incredible data-driven products. We are responsible for the whole stack: scoping the problem by digging through data with Redshift and Jupyter, researching and developing potential algorithms and approaches, training and tuning a model, and finally scaling it to millions of users, businesses, and advertisers. On the User Location Intelligence team, our mission is to reliably handle the industry’s best user location data.
Data stream processing is redefining what’s possible in the world of data-driven applications and services. data Artisans, with its dA Platform product and its major contributions to open-source Apache Flink, is at the forefront of this development. Our teams are pushing the boundaries of what can be achieved with data stream processing, allowing our users and customers to gain more insight into their data in real time. Apache Flink currently powers some of the largest data stream processing pipelines in the world, with users such as Alibaba, Uber, ING, Netflix, and more running Flink in production.