About the Role
Who are we looking for?
Skills:
2 to 4 years of hands-on experience as a Data Engineer in cloud environments
Strong SQL skills
Practical experience with Python in data engineering use cases
Experience with Azure Data Factory and Databricks
Good understanding of data pipelines, ETL/ELT and data lifecycle
Ability to communicate clearly with both technical and non-technical stakeholders
Experience working in Agile teams
Cost awareness when designing data solutions
Curiosity and willingness to learn continuously
Tech stack you will use:
SQL for data processing and analytics
Python for transformations and data quality logic
Azure Data Factory for ingestion (1-to-1 pipelines)
Databricks for transformations and data quality implementation
Azure-based data platform services
Git or similar source code management tools
CLI environments (bash, PowerShell)
What's in it for you?
Exciting role at a leading European bank
6-month assignment, B2B contract (with possibility of extension)
Fully remote role with availability within Central European Time (CET)
Personal growth and development opportunities
What will you be doing?
A leading European bank is undergoing a digital transformation, focusing on cloud technologies and a customer-first approach. The goal is to unlock the full potential of enterprise data using scalable data engineering methods and strong data governance, including data quality, metadata, and master data management practices.
The Data Office works closely with IT and business teams to build a modern data platform and maintain data pipelines across both cloud and on-prem environments. Data will be a key driver in understanding and meeting customer needs.
We’re looking for someone who understands the fundamentals of Data Engineering - Data Management, DataOps, Security, Architecture, Orchestration and Software Engineering - and knows how to apply them with a focus on scalability, simplicity, agility and cost-efficiency.
What will you do:
Build and maintain robust data pipelines (batch, micro-batch, real-time)
Work with both structured and unstructured data from diverse sources
Transform raw data into trusted, value-adding assets for business use
Implement data quality frameworks (validation, profiling, cleansing)
Manage storage, metadata and master data for the entire platform
Support modern orchestration processes and pipeline automation
Ensure systems are scalable, cost-effective, and easy to maintain
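To give a flavour of the data quality work above, here is a minimal sketch of validation and profiling logic in plain Python. The dataset, column names, and rules are illustrative assumptions, not the bank's actual framework; in practice this logic would typically run in Databricks on Spark DataFrames.

```python
# Minimal data quality sketch: profiling and rule-based validation.
# Column names ("customer_id", "amount") and rules are hypothetical examples.

def profile(rows, column):
    """Profile one column: row count, null count, distinct values."""
    values = [r.get(column) for r in rows]
    non_null = [v for v in values if v is not None]
    return {
        "rows": len(values),
        "nulls": len(values) - len(non_null),
        "distinct": len(set(non_null)),
    }

def validate(rows, rules):
    """Split rows into valid/invalid using per-column predicates."""
    valid, invalid = [], []
    for row in rows:
        ok = all(pred(row.get(col)) for col, pred in rules.items())
        (valid if ok else invalid).append(row)
    return valid, invalid

# Example records: one clean, two failing different rules.
records = [
    {"customer_id": "C1", "amount": 120.0},
    {"customer_id": None, "amount": 50.0},   # missing id
    {"customer_id": "C3", "amount": -5.0},   # negative amount
]
rules = {
    "customer_id": lambda v: v is not None,
    "amount": lambda v: v is not None and v >= 0,
}

good, bad = validate(records, rules)
print(len(good), len(bad))              # 1 2
print(profile(records, "customer_id"))  # {'rows': 3, 'nulls': 1, 'distinct': 2}
```

The same validate/profile split (quarantine failing rows, report column statistics) scales naturally to a Spark-based implementation.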
Tech Stack
SQL, Python, Azure Data Factory, Databricks, ETL/ELT, Git, CLI (bash, PowerShell)