YOUR ROLE:
Design, implement, and maintain AWS cloud-based data processing pipelines with performance, reliability, scalability, security, and cost efficiency in mind
Collaborate closely with analysts and ML engineers to understand their data access patterns, and design easy-to-use application interfaces that meet their needs
Build continuous integration and delivery pipelines using established infrastructure-as-code frameworks
YOU SHOULD HAVE:
University degree in Computer Science or a related field
1+ years of professional software engineering experience
Profound, demonstrable knowledge of the Python ecosystem
NICE TO HAVE:
Hands-on experience with AWS cloud infrastructure, preferably through an established IaC framework (Terraform, AWS Cloud Development Kit, etc.)
Strong data modeling skills; knowledge of SQL- and NoSQL-based data architectures
Awareness of commonly used data analytics tools (Jupyter, Pandas, DVC, etc.)
Familiarity with large-scale data analytics engines (e.g., Apache Spark) is a big plus
Experience with DevOps practices, continuous integration, and test automation
Fluency (both written and spoken) in English