About the Role
This is a support-focused, hands-on internship where you’ll assist engineers in monitoring, maintaining, and improving cloud-based data pipelines and environments.
Expect training, mentorship, and meaningful responsibilities, all tailored to help you build the foundations of a future career as a Data Engineer.
Your Responsibilities
Supporting engineering teams with day-to-day technical and operational tasks.
Assisting with monitoring, debugging, and validating cloud data pipelines.
Using Python and SQL for data processing, cleaning, and analysis.
Helping document workflows and implement small improvements.
Collaborating with mid-level and senior engineers to understand requirements and deliver simple solutions.
Learning modern tools and practices such as Azure, Databricks, Spark, orchestration, and CI/CD.
Gradually taking on more advanced tasks as your skills grow.
What We’re Looking For
⚠️ Mandatory requirement:
Active student status (IT-related studies preferred).
No commercial experience required.
Working knowledge of Python: able to write basic scripts and perform data transformations.
Working knowledge of SQL: comfortable with queries, joins, and aggregations.
Student in Computer Science, Data Science, Applied Mathematics, Informatics, or a similar field.
Interest in cloud technologies, data engineering, or Big Data systems.
Eagerness to learn, ask questions, and grow in a fast-paced tech environment.
Analytical mindset and problem-solving attitude.
Nice to Have
Participation in student tech groups, hackathons, coding communities, or research circles.
Any exposure to Azure, Databricks, Spark, or ETL concepts (e.g., from university projects).
Familiarity with scripting, version control, or cloud basics.
Tech Stack
Python, SQL, Azure, Databricks, Spark, CI/CD