SPRING is responsible for over 60 products from more than 25 brands and some of Europe's biggest news sites, including BILD.de and WELT.de. Over 350 people at SPRING are working on an innovative media platform to make quality journalism accessible to everyone. We provide a great user experience for both sides, readers and journalists, connecting them with the newest technologies.
Together with BILD.de and WELT.de we reach more than 20m users every day, and our data pool is growing constantly.
We are offering you an exciting role as a software engineer (f/m) in the field of machine learning. Join one of our agile feature teams to develop sophisticated algorithms for autonomous driving. Contribute to the autonomous vehicle being able to orient itself, assess situations and select the most appropriate and comfortable driving strategy, even in urban environments. Use your expertise to participate actively in all development phases, from prototyping in Python using state-of-the-art machine learning frameworks and ROS, to writing C++ code that runs in the final production vehicles.
About the team:
We are responsible for building the data platform. We are highly focused on simplicity and ease of use for all data-oriented users. We are a strong enabler of our data strategy: make the right data easily available to everybody for high-quality decisions.
Our tech stack: Scala, Java, everything with Kafka (Streams, Connect, KSQL, …), Akka, Spark, Flink, Docker, K8s, Play, Slick, AWS services (e.g. EMR, S3), NewRelic, JUnit, ScalaTest, ScalaSpec
We’re looking for experienced engineers to join our team to build elegant, scalable systems that use NoSQL data stores, data warehouses, MapReduce, and streaming solutions to power a whole host of personalized experiences for Yelp’s users and drive optimizations for Yelp’s advertising businesses. If you’re the person who leads their team in replacing an aging system, or dives fearlessly into the guts of a running system to fix that bug everyone else is happy to gloss over, then you’re the one we’re looking for!
Yelp’s data mining engineers are a passionate and diverse group of engineers who can work across disciplines to build incredible data-driven products. We are responsible for the whole stack: scoping the problem by digging through data with Redshift and Jupyter, researching and developing potential algorithms and approaches, training and tuning a model, and finally scaling it to millions of users, businesses, and advertisers.
On the User Location Intelligence team, our mission is to reliably handle the industry’s best user location data.
Business Intelligence Analyst (m/f)
as of now // unlimited // full-time (40h) // Hamburg
KEY RESPONSIBILITIES:
You extract, analyse and communicate reports and statistics for our business teams and partners
You transform recurring analyses into KPIs we can easily monitor
You set up the tracking of our ad campaigns, so we can all optimise them daily or even in real time
You monitor the user flow from acquisition to monetisation and find new ways to maximise margin there
From a business bird's-eye perspective, you simplify processes, making them more transparent along the way
You automate recurring manual processes
MINIMUM QUALIFICATIONS:
Department: Merchant Operations
Reports to: Team Lead Strategy Development
Team Size: >10
The purpose of the role is to enable Merchant Operations teams through the creation and maintenance of a data pipeline to perform data integration at scale, as well as the maintenance of our historical and near real-time data warehouse.
WHERE YOUR EXPERTISE IS NEEDED
You are experienced in building, evaluating, maintaining and improving large-scale data-driven products
Create and maintain a flexible, high-performance data processing and integration pipeline at scale, providing high-quality datasets for our internal users and applications.
Market research is the original data-driven business. Incubated at and spun off from a university, we have earned the trust of the world's biggest companies and leading brands for more than 80 years. Today, everything at GfK starts and ends with Data and Science.
We are proud of our heritage – and our future: Currently we’re on a transformational journey from a traditional market research company to a trusted provider of prescriptive data analytics powered by cutting edge technology.
Major duties and responsibilities:
Development, maintenance and optimization of machine learning models in collaboration with INAIT machine learning team
Technological review of machine learning methods and technology
Essential skills and experience required:
Expert knowledge in machine learning methods including deep learning, boosting, decision trees and ensemble models
Expert knowledge in fundamental mathematics used in machine learning methods
Professional experience in developing, testing and optimizing machine learning models using one or several open source frameworks (TensorFlow, Caffe, Torch, …)
ABOUT THE TEAM
Department: Zalon Data
Reports to: Product Owner Zalon Data
Team Size: >10
Recruiter: Taryn Bonugli
Are you looking for an opportunity to build cutting-edge data products? We are ramping up our data team to support and scale Zalon's machine learning, analytics and business needs. To do so, we are looking for an analytical problem solver who likes working in an agile, cross-functional team and wants to shape, build and refine the future of the company's data quality for our services, together with the integration of machine learning algorithms.
Major duties and responsibilities:
Statistical and data analysis of INAIT data to provide input to INAIT R&D team
Technological review of statistical and data analysis methods and technology
Essential skills and experience required:
Expert knowledge in statistics
Professional experience in Python software development
Professional experience in using established Python and/or R software frameworks (Pandas, NumPy, SciPy, …)
Preferred:
Expert knowledge in fundamental mathematics used in machine learning methods
Strong experience in numerical methods
Experience in developing and optimizing machine learning models using one or several known frameworks such as TensorFlow, Caffe, Torch, …
Professional experience in contributing to a Scrum team
Teraki is a Berlin-based, tech-driven company enabling true mobility. We stand for innovation in the rapidly developing connected car, self-driving and 3D mapping world. Teraki provides data reduction and data processing solutions for Automotive (IoT) applications and enables the launch of new applications by reducing hardware footprint, latency and costs. We help our customers meet the challenges posed by the exploding amounts of sensor, video and 3D mapping data in connected vehicles.
Project A is the operational VC that provides its ventures with capital, an extensive network and exclusive access to a wide range of operational expertise. The Berlin-based investor makes use of its 260m in assets under management to back early-stage companies in the digital technology space. With its unique organizational structure featuring 100 operational experts, Project A offers its portfolio companies hands-on support in the areas of Software Engineering, Digital Marketing, Design, Communications, Business Intelligence, Sales and Recruiting.
We are the leading mobile point-of-sale (mPOS) company in Europe. Our vision as a global FinTech company is to build the first-ever global card acceptance brand, and we are well on our way as small businesses in over 31 countries around the world rely on SumUp to get paid. Our boldness, startup mindset, empathy and love for product foster a creative environment for our employees. We value an entrepreneurial spirit and seek to build lasting relationships among our employees.
About us:
Süddeutsche Zeitung Digitale Medien is expanding its team. Work with us on the digital future of the newspaper!
As a subsidiary of the Süddeutscher Verlag, Süddeutsche Zeitung Digitale Medien GmbH is the digital creative hub of Germany's largest national quality daily newspaper. Many dedicated people here continue to develop SZ.de in the browser and as an app. The digital edition of the SZ with all its special publications, SZ-Magazin.de and jetzt.de are also created here. With services such as newsletters, messengers, chatbots and content for all the senses, such as DasReze.
Big Data Software Engineer
Lead your own development team and our customers to success! Ultra Tendency is looking for someone who impresses not just with excellent code, but also with strong presence and leadership.
At Ultra Tendency you would:
Work in our office in Berlin/Magdeburg and on-site at our customers' offices
Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)
Develop outstanding Big Data applications following the latest trends and methodologies
Be a role model and strong leader for your team and oversee the big picture
Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders
Ideally you have:
As Data Engineer in the Merchant Operations team, you will build the data warehouse for the marketplace and process large amounts of raw data. You will work closely with key stakeholders in product, engineering and operations to form a deep understanding of marketplace dynamics.
WHERE YOUR EXPERTISE IS NEEDED
Own, design and organise all data flows from scratch
Involve product and engineering to integrate various sources of data
Develop rigorous data science models to aggregate inconsistent real-time signals into strong predictors of market trends
Automate and own the end-to-end process of modelling and data visualization.
The position
Are you passionate about data? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders? We offer a challenging role as Senior Data Scientist in a demanding, dynamic and international software company using the latest innovations in predictive analytics and visualization techniques. You will drive the creation of statistical and machine learning models from prototyping to final deployment.
Develop Revolutionary Machine Learning Applications with us!
We are a team of highly motivated software architects and mathematicians with diverse backgrounds – from machine learning, routing algorithms, and signal processing to compiler technology and mathematics. We combine our knowledge to advance CeleraOne’s machine learning models and applications every day.
Job Description
- Modelling industrial processes using advanced machine learning methods
- Utilizing modern machine learning frameworks: Google TensorFlow, scikit-learn, pandas, and more