About the Role
Zilch is a fast-growing fintech innovator transforming the way consumers access credit and pay for everyday purchases. By combining responsible lending, real-time data intelligence, and a frictionless customer experience, we empower millions of people with transparent, interest-free payment options. We are scaling rapidly and investing in a world-class data foundation to support our next phase of growth.
We are seeking a Data Engineer to take ownership of the design, development, and evolution of our data platform. This role is critical to closing a skills gap within the Data team and ensuring that our data infrastructure is reliable, cost-efficient, observable, and fit for Zilch’s long-term ambitions.
You will work closely with Analytics Engineers, Data Scientists, Software Engineers, and Platform teams to build engineered solutions rather than relying solely on third-party tools or ad-hoc processes.
Day-to-Day Responsibilities
Data Platform & Pipeline Engineering:
Design, build, and maintain scalable, reliable data pipelines across Snowflake, AWS, and supporting services.
Replace fragile or manual ingestion processes (e.g. user-generated files) with engineered, validated solutions that reject invalid data at the point of entry.
Own and improve ingestion from internal services, affiliates, and third-party providers.
Overhaul and standardise legacy pipelines and DAGs to improve maintainability and reliability.
Orchestration, CI/CD & Infrastructure:
Lead the evolution of our orchestration platform (Airflow), including performance improvements, standardisation, upgrades, and event-driven capabilities.
Containerise Airflow tasks and supporting services to improve isolation, reproducibility, and CI/CD integration.
Design and maintain CI/CD pipelines for data and ML workloads, balancing development speed with increasing scale and complexity.
Partner with Platform Engineering to align infrastructure patterns while retaining strong data-domain ownership.
Observability, Reliability & Cost Optimisation:
Design and implement end-to-end monitoring, alerting, and observability, including integration with Datadog and Slack.
Improve alert reliability and error visibility across the data platform.
Address performance issues such as high CPU usage and inefficient resource utilisation.
Lead Snowflake cost optimisation, applying deep platform knowledge to balance performance and spend.
Data Governance & Quality:
Design and implement data access controls and governance frameworks aligned with business and regulatory needs.
Ensure strong data quality, lineage, and documentation across the platform.
Automate lineage and exposure management (e.g. DBT exposures via Looker APIs).
Machine Learning & Advanced Analytics Enablement:
Support Data Science and ML Engineering by improving MLOps practices, deployment workflows, and infrastructure.
Help standardise ML CI/CD and environment parity across development and production.
Establish or support feature store patterns (e.g. Snowflake feature store, Feast, or AWS-native solutions).
Enable reproducible feature pipelines, backfills, and alignment with DBT and warehouse models.
Technical Leadership:
Act as a senior technical contributor within the Data team, setting best practices and raising engineering standards.
Influence platform tooling, architectural decisions, and long-term data strategy.
Reduce technical debt and improve the overall health of the data estate.
Tech Stack
Snowflake, AWS, Airflow, Datadog, DBT, Looker, CI/CD, MLOps