Medior AWS & Databricks Platform Engineer

Datamole is a culture-centric company with 10 years of experience helping clients become truly data-driven. Our expertise spans software engineering, data engineering, and machine learning, with a strong focus on AWS-native solutions through our Brno Data Engineering team.

We are now looking for new talent to help build a next-generation data platform for a long-term Dutch client in the public transport sector. You will use AWS and Databricks to design and operate scalable, reliable data pipelines, drawing on AWS services such as S3, Kinesis, and Firehose, transactional data lake formats (Delta Lake, Apache Iceberg), Apache Spark (PySpark), and infrastructure-as-code tools such as AWS CDK or Terraform.
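
For a flavor of the day-to-day work, below is a minimal PySpark sketch of the kind of pipeline step involved: reading raw events from S3 and writing them to a partitioned Delta table. The bucket names, paths, and columns are illustrative assumptions, not the client's actual setup.

```python
# Minimal sketch of a batch ingestion step on Databricks, assuming a
# hypothetical S3 layout and event schema (illustrative only).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ride-events-ingest").getOrCreate()

# Read raw JSON events landed by Firehose (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-raw-bucket/ride-events/2024/")

# Light cleanup plus a date column to partition on.
events = (
    raw.withColumn("event_date", F.to_date("event_timestamp"))
       .dropDuplicates(["event_id"])
)

# Write a Delta table partitioned by date so typical queries can prune files.
(
    events.write.format("delta")
          .mode("append")
          .partitionBy("event_date")
          .save("s3://example-lake-bucket/bronze/ride_events")
)
```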

You will join a team of experienced cloud engineers following best practices, including the AWS Well-Architected Framework, to deliver a secure and mature data platform for analysts, data scientists, and ML engineers.

RESPONSIBILITIES:

  • Design, develop, and maintain data processing pipelines using Apache Spark, Delta Lake, and related technologies within the Databricks ecosystem, ensuring scalability, reliability, and operational excellence.
  • Design and model structured and unstructured datasets, including implementing efficient table layouts and partitioning strategies to support performant, scalable, and maintainable data workflows.
  • Implement and automate cloud infrastructure using Infrastructure as Code (Terraform, AWS CDK), supporting repeatable and consistent environment provisioning (a minimal CDK sketch follows this list).
  • Contribute to the architecture and evolution of a next-generation data platform, applying industry best practices such as the AWS Well-Architected Framework.
  • Collaborate closely with data engineers, analysts, and ML practitioners to deliver platform capabilities that support analytical and machine learning workloads. 
  • Develop and maintain CI/CD pipelines using GitHub Actions, incorporating automated testing, linting, and artifact management.
  • Enhance platform observability, including metrics, logging, alerts, and tracing, ensuring issues are identified and resolved proactively.
  • Document platform architecture, operational processes, and standards, enabling alignment within the engineering team and supporting ongoing maintenance.
  • Participate in code reviews and provide guidance to colleagues, helping foster a collaborative, supportive, and growth-oriented engineering culture.
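
As a concrete illustration of the IaC responsibility above, an AWS CDK stack in Python might provision a landing bucket and an ingest stream along these lines; the resource names, shard count, and retention settings are assumptions made for the sketch, not the client's real configuration.

```python
# Hypothetical AWS CDK (v2, Python) stack provisioning a raw-data bucket
# and a Kinesis stream -- a simplified sketch, not the actual platform.
from aws_cdk import App, Stack, RemovalPolicy, Duration
from aws_cdk import aws_s3 as s3, aws_kinesis as kinesis
from constructs import Construct

class DataPlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Versioned, encrypted landing bucket for raw events.
        s3.Bucket(
            self, "RawEventsBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,
        )

        # Kinesis stream feeding the ingestion pipeline.
        kinesis.Stream(
            self, "RideEventsStream",
            shard_count=2,
            retention_period=Duration.hours(48),
        )

app = App()
DataPlatformStack(app, "DataPlatformStack")
app.synth()
```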


QUALIFICATIONS:

  • University degree in Computer Science, Software Engineering, or a related technical field.
  • 3+ years of professional engineering experience in cloud, data, or software development roles.
  • Practical experience with the Python ecosystem, ideally including PySpark for data processing.
  • Solid understanding of cloud engineering principles (AWS, Azure, or GCP), with AWS expertise considered a strong advantage.
  • Experience building data-intensive, distributed data processing pipelines, ideally using Databricks or underlying open-source technologies such as Apache Spark (PySpark), Delta Lake, or Apache Iceberg.
  • Familiarity with DevOps practices, including Continuous Integration (CI), automated testing, and delivery pipelines.
  • Hands-on experience with Infrastructure as Code (IaC), preferably using Terraform or AWS CDK.
  • A collaborative mindset with the ability to support and guide colleagues, contributing to a positive, knowledge-sharing team environment.


NICE TO HAVE:

  • AWS or Databricks certifications, demonstrating commitment to continuous learning and professional development.
  • Familiarity with data orchestration tools, with a preference for Apache Airflow (experience with AWS Step Functions is a plus); see the sketch after this list.
  • Knowledge of cloud security and operational best practices, including IAM design, network security principles, vulnerability management, and frameworks such as the AWS Well-Architected Framework.
  • Exposure to monitoring and observability tools (CloudWatch, Grafana, Loki, Prometheus).
  • Experience in optimizing cost, performance, and reliability in cloud environments, guided by established architectural best practices.
  • Awareness of ML workflows and MLOps practices that may interact with or depend on the data platform.
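
To make the orchestration nice-to-have concrete, a minimal Airflow DAG scheduling a daily ingestion task might look like the following; the DAG id, schedule, and task body are hypothetical, and the `schedule` argument assumes Airflow 2.4 or newer.

```python
# Hypothetical Airflow DAG sketching daily orchestration of an ingestion
# step; the DAG id, schedule, and task body are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_ride_events(**context):
    # Placeholder for real ingestion logic (e.g. triggering a Databricks job).
    print(f"Ingesting events for {context['ds']}")

with DAG(
    dag_id="ride_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="ingest_ride_events",
        python_callable=ingest_ride_events,
    )
```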

