At a Glance:
We are looking for an experienced Data Engineer to join our data team and help us build, maintain, and grow our tools and processes to empower everyone at On to impact business outcomes by making informed decisions. We design data models, build data pipelines, decipher APIs, and leverage SQL to provide timely, clean, tested, documented, and well-defined data.
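The workflow described above (modeling, transforming, and using SQL to deliver clean, tested data) can be sketched minimally. The table names, fields, and the quality check below are illustrative assumptions, not On's actual schema:

```python
import sqlite3

# In-memory database standing in for a real warehouse (illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "19.90", "CH"), (2, None, "DE"), (3, "42.50", " ch ")],
)

# Transform: cast types, normalize casing, drop rows that fail basic checks.
conn.execute("""
    CREATE TABLE orders AS
    SELECT id,
           CAST(amount AS REAL) AS amount,
           UPPER(TRIM(country)) AS country
    FROM raw_orders
    WHERE amount IS NOT NULL
""")

# Test: a simple data-quality assertion the pipeline must pass.
bad = conn.execute(
    "SELECT COUNT(*) FROM orders WHERE amount <= 0 OR country = ''"
).fetchone()[0]
assert bad == 0, "data-quality check failed"

rows = conn.execute("SELECT id, amount, country FROM orders ORDER BY id").fetchall()
print(rows)  # → [(1, 19.9, 'CH'), (3, 42.5, 'CH')]
```

In practice the same pattern runs against a real warehouse with a dedicated testing framework, but the shape — raw table in, typed and validated model out, with an automated quality check — is the same.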
Your Team:
You will be part of a cross-functional development team that implements features from design to deployment and owns the full process and delivery.
Join finn to make mobility fun & sustainable. Play a key role in fulfilling our mission to build the most popular mobility provider in the world. Launched in 2019, we offer an all-inclusive car subscription in Europe.
Objectives
At finn we create software that manages thousands of vehicles and customers on a daily basis. You have the chance to join this team early on and work independently on our product. If your coding skills are known around the globe and you love contributing your ideas and engineering skills to a growing company in the heart of Munich, then apply now.
To revolutionize the future of transportation, one has to go one step further and set standards today. Smart technologies that increasingly take over driving tasks and provide improved safety and comfort are already in place in our commercial vehicles today. But our vision is that “We make transportation smart”. So, we need visionary thinkers and creative developers as well as experts in the fields of data engineering, machine learning, artificial intelligence, sensor technology and software development to achieve this vision.
For the application of modern machine learning techniques to complex NLP problems, we are looking for an exceptionally experienced and creative engineer / scientist. In a small interdisciplinary Scrum team you will apply state-of-the-art deep learning methods to real-world problems, process large amounts of data and deploy production-quality models at scale.
Your main tasks include:
- Conception, design and implementation of mission-critical ML & AI solutions (analysis, architecture, design, implementation & deployment of end-to-end systems)
- Design and implementation of data acquisition, ingestion, validation, transformation, augmentation and visualization pipelines
- Investigating new approaches and evaluating new technologies and tools
Preferred qualifications:
- Master's / PhD in Computer Science, Data Science, or equivalent work experience
- Practical experience in machine learning technologies related to NLP, for tasks such as embeddings, named entity recognition, classification, taxonomy construction, sentiment analysis, text similarity and predictive modelling
- Hands-on experience with ML toolkits such as Gorgonia, TensorFlow, Keras, PyTorch, etc.
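One of the tasks listed above, text similarity, can be illustrated with a minimal bag-of-words cosine similarity in plain Python. Production systems would use learned embeddings from toolkits such as PyTorch; this sketch is only an illustration of the underlying idea:

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (0.0 to 1.0)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)  # overlap on shared terms
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("deep learning for text", "deep learning for images"))  # → 0.75
print(cosine_similarity("deep learning", "stock prices"))  # → 0.0 (no shared terms)
```

Embedding-based approaches replace the word counts with dense vectors, but the similarity measure itself is typically still this cosine.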
Role objectives and accountabilities:
SUPPORT our business with its data, research and analytics needs
- Provide prescriptive insights to all business functions that improve decision-making.
INFORM our business through reporting and ROI evaluation
- Develop intelligent reporting frameworks that define appropriate metrics and identify risks and opportunities for our business
- Develop tools in Tableau that enable teams to access data for themselves, helping to create a ‘data culture’ throughout the organization
Data Engineer with analytical skills (Python, PySpark) vacancy at a globally operating, Zurich-based company in the financial sector.
Your experience/skills:
- 6+ years of experience as a Data Analyst or Data Engineer, including 3+ years of practice with Python, ideally within the banking industry
- Expertise in data models and database design, as well as data discovery and data cleaning on new data with ETL
- Know-how of distributed computing tools such as PySpark, Cloudera and HDFS is a must
- Good understanding of business analysis and machine learning is preferable
- Capability in Java and SQL as well as experience with Foundry and Palantir tools is a plus
- Agile practitioner with excellent stakeholder management skills
- Languages: English, fluent in written and spoken
Your tasks:
- Drive the BI & data component of bank-wide transformation and business projects
- Manage the timelines, dependencies and resources of the projects assigned
- Perform data functional analysis and analyze data business requirements
- Data warehouse (DWH) design, data modelling, ETL design and development
- Definition and implementation of key performance indicators
- Transform raw data into smart data and meaningful insights
- Participate actively in the definition of the roadmap and future of BI & DWH in the bank
- Identify opportunities to optimize and automate data processes across the bank
- Ensure the availability, integrity and quality of the data in the DWH
- Work closely with business, IT and offshore production support and development teams
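Defining and implementing a key performance indicator, as listed among the tasks above, usually amounts to agreeing on a precise formula and computing it reproducibly from raw data. A generic sketch — the event fields and the conversion-rate KPI are illustrative assumptions, not the bank's actual metrics:

```python
# Raw events standing in for DWH rows (field names are illustrative).
events = [
    {"user": "a", "action": "visit"},
    {"user": "a", "action": "purchase"},
    {"user": "b", "action": "visit"},
    {"user": "c", "action": "visit"},
]

def conversion_rate(events: list[dict]) -> float:
    """KPI: share of visiting users who also made a purchase."""
    visitors = {e["user"] for e in events if e["action"] == "visit"}
    buyers = {e["user"] for e in events if e["action"] == "purchase"}
    return len(visitors & buyers) / len(visitors) if visitors else 0.0

print(conversion_rate(events))  # 1 of 3 visitors purchased
```

The value of pinning the KPI down in code (or SQL) rather than prose is that edge cases — users who purchase without a recorded visit, empty periods — get an explicit, agreed-upon treatment.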
We are looking for a Senior Data Engineer who is passionate about building modern, scalable and well-engineered data infrastructure products that enable product teams at idealo to work autonomously with data. You will be part of a team responsible for developing, running and maintaining the idealo data lake end-to-end. You will help ensure that this product, among others, is successful at idealo.
As the Data Management area at idealo, we are building a cutting-edge, cloud-based data platform suitable for self-service analytics as well as for machine learning, data products, and our enterprise reporting.
The Role
Data is a core advantage of OTT/unicast TV over traditional broadcast. As a platform operator we know exactly what content our users watch, how popular a show is, and how many seconds of each and every ad campaign are viewed in our different applications. With roughly 3 million active users every month, the data we collect allows for efficient and precise business processes, including billing, forecasting and performance measurement, as well as planning for our product development.
Get to know us
eyeo is an open-source software company that builds products like Adblock Plus, Adblock Browser and Flattr. By leveraging distribution partnerships, we bring ad-blocking technology everywhere, giving users control over their online experience while offering creators, publishers and advertisers more ways to earn money for the free content they provide.
Combining the reach of our distribution partnerships with our own products, our technology runs on over 150 million devices.
––––––––––––
Team: Data Engineering
Location: Berlin
Contract: 6 month internship with 40 h per week
Working hours: Flexible working hours and home office opportunities
Apply by: March 27, 2020
––––––––––––
Your Team
The Data Engineering circle develops data services and pipelines that drive business insights for product, marketing and our customer relations teams. We design innovative solutions based on intelligent data algorithms using the latest technology stack to enable data-driven decision making in the company.
Principal Roles & Responsibilities
- Perform data analysis / data management with state-of-the-art approaches using statistical packages or proprietary statistical classes
- Test applications and other consulting products to ensure performance and meet expectations
- Be responsible for consulting project management, and for structuring and analysing data collected over time for client projects
- Assume responsibilities on large projects within the industry / presentations before industry leaders
- Data cleansing and contextualization; discussion of data structure with the customer
Please note that due to the current climate with COVID-19, we are not able to accept candidates at the moment. However, we are hoping for the situation to relax soon, so we can restart our recruiting processes.
CARFAX Europe is the leading vehicle history data company within the European market. We are acquiring and processing data related to vehicles from various sources. As a member of the Data Technology Team you will be responsible for building applications around these extensive data sets.
Sunrise’s OTT division Wilmaa is a pioneer in the field of WebTV and stands for digital television on all screens in all usage situations - from smartphones on the move, through tablets and computers, to large screens at home. The platform reaches around 300,000 unique clients per month and is particularly popular with digital natives thanks to its constant innovative strength. Are you a movie nerd? Do you know the difference between Star Trek and Star Wars, and all the characters of “The Big Bang Theory”?
Join the European XFEL
European XFEL is an international non-profit company located in the Hamburg area in Germany. It operates a 3.4-km-long X-ray laser, which produces X-rays of unique quality for studies in physics, chemistry, the life sciences, materials research and other disciplines. The diverse scientific facilities at European XFEL enable scientists from across the globe to carry out a wide range of experimental techniques.
To support our Data Analysis Group we are looking for two
You never stop being curious and want to learn something new every day, preferably from the best and with the best prospects. You are looking for a challenge and finally want to put the theory from your studies into practice. These are the best prerequisites for getting off to a flying start as an intern with us.
Your tasks:
So that we can tailor our services even more individually in the future, everything in the “Customer Interaction” tribe revolves around anticipating the wishes and expectations of our customers.
At Audatic, we are building systems to intelligently modify sound using state of the art deep learning technology and unique datasets. This will enable millions of people with hearing loss to enjoy interactions in social settings like bars or restaurants again.
In order to realize this, we require an efficient and tailored infrastructure that enables our state-of-the-art deep learning architectures to train on our GPU cluster. If this sounds interesting to you, and you are looking to immerse yourself in cutting-edge technology, we have the perfect job for you!
OUR RESPONSIBILITIES
As a (Junior) Data Engineer you will help build a top-notch data platform to democratise data among data-savvy end users. Working with analysts, engineers and other stakeholders, you will play a key role in meeting their data needs and help translate data into insights. In detail, your responsibilities include:
- Assist in building our new data platform from scratch to facilitate all the data needs of the company
- Challenge the status quo and push changes to bring Lendico to the next level
- Design and develop robust data pipelines to connect all data sources
- Build data products and tools to make stakeholders’ lives easier and automate manual and repetitive tasks
- Be part of a self-organized, multinational BI team (7 people, 7 nationalities)
YOUR PROFILE
We’re looking for a Data Engineer to continuously develop our platform to democratize our data.
If you enjoy moving, merging or cleaning up to petabytes of data by utilizing data warehousing and data lake methods at their best, this is the place for you.
You should be experienced and have done similar work elsewhere. We deeply believe in a DevOps culture which means you will be responsible for your code and tools from development to operation.