Imagine a workplace that encourages you to take on responsibility and where your ideas are heard and implemented. Imagine a fast-paced environment where your performance makes the difference. This is TrustYou! We are looking for adventurers to join our smart and inspiring team!
TrustYou creates summaries of hotel reviews with the goal of being as useful as, or more useful than, a summary written by a human. Our data is directly integrated into Google Maps, hotels.com, and KAYAK, and is used by more than 10,000 hotels worldwide to analyze their guest feedback. And it's our data engineers who make this happen!
We are looking for a senior engineer to lead our international, motivated data engineering team! You are the perfect candidate if you have experience architecting complex data processing pipelines. You are familiar with the Hadoop ecosystem and have worked with a variety of SQL and NoSQL databases. Experience in Python is a big plus, as is experience on projects that deploy machine learning/NLP models to production. You are also interested in managing teams: you enjoy running Scrum processes and giving constructive feedback, and you thrive on feedback from inside and outside the team yourself.
Which challenges await you?
- Write large-scale data pipelines in Apache Spark and MapReduce
- Make architectural decisions, with the support of your engineers and other tech leads
- Lead a small team of data engineers and scientists: run a Scrum process and give them immediate, constructive feedback
- Lead R&D projects full of new technologies, algorithms, and uncertainty
What do we expect from you?
- 3+ years of experience building data-intensive applications.
- Very strong programming and architectural experience, ideally in Python, Java, or Scala, but we are open to other backgrounds if you would like to become a Python hacker.
- You find creative solutions to tough problems. You are not only a great developer but also an architect who is not afraid to pave the way for bigger and better things.
- Experience and skills in cleaning and scrubbing noisy datasets.
- Experience building data pipelines and ETL jobs using MapReduce, Spark, or Flink.
Nice to have
- Expert-level knowledge of Python. Experience with frameworks such as pandas, scikit-learn, SciPy, and Luigi/Airflow.
- Love for the command line, ideally with an affinity for Linux shell scripting.
- Experience with big data technologies (Hadoop, Spark, Flink, Hive, Impala, HBase, Pig, Redshift, Kafka).
- Experience building scalable REST APIs using Python or similar technologies.
- Experience with data mining, machine learning, natural language processing, or information retrieval.
- Experience with agile methodologies such as Scrum or Kanban.