About the Role
Visa is accelerating the delivery of data analytics and AI-powered products to support client growth and strategic decision-making across regions. We are seeking a Data Engineer to execute on the design, delivery and evolution of scalable data engineering capabilities that underpin Data Science, AI and client-facing products for all European markets.
The role requires understanding and translating business needs into data models, creating robust data pipelines, and developing and maintaining databases. The candidate should be able to define and manage data load procedures, implement data strategies, and ensure robust operational data management systems. Collaborating with stakeholders across the organization to understand their data needs and deliver solutions is also a key part of this role. The ideal candidate will be proficient in big data tools such as Hadoop, Hive, and Spark and in programming languages such as Python and SQL, and will have strong analytical skills for working with structured and unstructured datasets.
Primary Responsibilities:
Requirement Analysis: Understand and translate business needs into data models supporting long-term solutions
Build and manage large scale ETL processes to generate data assets for the region
Build modular, reusable code with configurability and scalability in mind, adhering to the low-level design
Perform thorough unit testing of development tasks and document the test results using standard defined templates
Build, schedule, and manage DAGs in Apache Airflow efficiently
Monitor data processing tasks using Airflow
Ensure quality control of data assets, reconciling data loaded across different stages in the data pipeline
Utilize strong data analytics skills to identify, discuss, and promptly fix data issues
Apply debugging skills to quickly rectify execution errors, ensuring minimal delays and impact on business operations
Collaborate and communicate with stakeholders for requirement understanding and clarifications
Maintain the highest level of quality and detail-oriented approach in daily tasks
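The Airflow responsibilities above revolve around expressing a pipeline as a DAG of dependent tasks. As a minimal, library-free sketch of that idea (task names here are hypothetical, not from the posting), the stdlib `graphlib` module can compute a valid execution order from a dependency map:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical ETL pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG wires upstream/downstream tasks.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "reconcile": {"load"},
}

# A valid execution order respects every dependency edge:
# "extract" must run first and "reconcile" last.
order = list(TopologicalSorter(pipeline).static_order())
print(order)
```

In a real Airflow deployment the same dependency structure would be declared with operators and `>>` edges inside a `DAG`; this sketch only illustrates the scheduling concept the responsibilities describe.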
This is a hybrid position, requiring in-office attendance three days per week.
Qualifications
Basic Qualifications:
5 or more years of relevant work experience with a Bachelor's Degree or at least 2 years of work experience with an Advanced degree (e.g. Masters, MBA, JD, MD) or 0 years of work experience with a PhD
Preferred Qualifications:
6 or more years of work experience with a Bachelor's Degree or 4 or more years of relevant experience with an Advanced Degree (e.g. Masters, MBA, JD, MD) or up to 3 years of relevant experience with a PhD
2-4 years of development experience building data pipelines and writing ETL code using Hive, PySpark, SQL, and Unix
Experience in writing and optimizing SQL queries in a big data environment
Experience working in Linux/Unix environment and exposure to command line utilities
Experience creating/supporting production software/systems and a proven track record of identifying and resolving performance bottlenecks for production systems
Exposure to code version control systems (git)
Experience working with cloud services (e.g. AWS, GCP, Azure)
Strong communication skills
Ability to understand the requirements of the broader business
Good understanding of agile working practices and related program management skills
Good communication and presentation skills, with the ability to interact with cross-functional team members at varying levels
Advanced degree in technical field (e.g. Computer Science, statistics, etc.)
Experience with visualization tools like Tableau and Power BI
Exposure to Financial Services or the Payments Industry
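The qualifications above emphasize SQL in a big data environment and reconciling data across pipeline stages. As an illustrative sketch only (using stdlib SQLite in place of Hive/Spark, with hypothetical table names), a typical reconciliation check compares row counts between a staging table and the final table it feeds:

```python
import sqlite3

# Hypothetical staging and final tables; in practice these would live in a
# Hive/Spark warehouse rather than SQLite.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_transactions (txn_id INTEGER, amount REAL);
    CREATE TABLE transactions     (txn_id INTEGER, amount REAL);
    INSERT INTO stg_transactions VALUES (1, 9.99), (2, 4.50), (3, 12.00);
    INSERT INTO transactions SELECT * FROM stg_transactions;
""")

# Compare counts across the two pipeline stages in a single query.
staged, loaded = conn.execute("""
    SELECT (SELECT COUNT(*) FROM stg_transactions),
           (SELECT COUNT(*) FROM transactions)
""").fetchone()

assert staged == loaded, f"row-count mismatch: {staged} staged vs {loaded} loaded"
print("reconciliation passed:", staged, "rows")
```

Real pipelines would extend this with checksums or aggregate comparisons per partition, but the pattern of validating each load stage against the previous one is the same.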
Tech Stack
Hadoop, Hive, Spark, Python, SQL, PySpark, Unix, Linux, AWS, GCP, Azure, git