Your role:
- Design, build, test, and package components that ingest, transform, and store large volumes of streaming data and compose into reliable data pipelines
- Conduct requirements engineering and map requests to a suitable data architecture
- Orchestrate and instrument data pipelines so they are scalable and maintainable
- Maintain the existing code base and own automated building, packaging, and deployment
- Evaluate and benchmark technology options, run PoCs, and estimate operational costs
- Align with Backend Engineers to define requirements and request optimizations
Your profile:
- 3+ years of experience in professional software development for data-intensive applications
- Proficiency in using and maintaining CI/CD pipelines
- Hands-on experience with big data frameworks and data stores such as Apache Kafka, Apache Spark, and relational databases (RDBMS)
- Understanding of operational considerations and cost/performance trade-offs
- Effective communication in a distributed, agile team setup