Data Engineer (Databricks)
Inside IR35 | 6‑Month Contract
Sector: Public Sector / Central Government
Location: London or Telford (Hybrid – Remote with on‑site as required)
Security Clearance: Active SC Clearance required due to project timescales
About the Role
We are seeking an experienced Data Engineer with deep expertise in Databricks to support the development of scalable data platforms within a Central Government environment.
This role is delivery-focused and requires someone who has worked in secure, regulated public sector settings. You will be responsible for designing, building, and optimising data pipelines that enable analytics, reporting, and downstream data services across the organisation.
Key Responsibilities
Design, build, and maintain data pipelines using Databricks
Develop ETL/ELT processes with PySpark and Spark SQL
Transform and model structured and semi‑structured datasets
Improve performance, reliability, and cost efficiency of data workloads
Ensure compliance with data governance, security, and quality standards
Work collaboratively with architects, analysts, and delivery teams
Produce clear documentation and contribute to engineering best practices
Required Skills & Experience
Active SC Clearance – mandatory and non‑negotiable
Strong hands‑on experience with Databricks in production environments
Expert knowledge of Apache Spark, including PySpark and Spark SQL
Proficiency with Python for data engineering
Experience delivering solutions on cloud data platforms (AWS preferred, Azure acceptable)
Understanding of data lake and lakehouse architectures
Ability to work autonomously and collaboratively within delivery squads
Experience working within Central Government departments (highly desirable)
Desirable
Azure Data Lake, Synapse, Delta Lake
Streaming technologies (Kafka, Event Hubs)
Tech Stack
Databricks | Apache Spark | PySpark | Spark SQL | Python | AWS | Azure | ETL/ELT