DevOps and Cloud Engineer Jobs · 201+ Offers

Browse DevOps and cloud engineering job offers. Kubernetes, AWS, CI/CD, and more. Infrastructure roles.

Senior ML Infrastructure / ML DevOps Engineer

Pathway (pathway.com) · Poland

About Pathway

Pathway is shaking the foundations of artificial intelligence by introducing the world's first post-transformer model that adapts and thinks just like humans. Pathway's breakthrough architecture (BDH) outperforms the Transformer and gives the enterprise full visibility into how the model works. Combining the foundational model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization and toward truly contextualized, experience-driven intelligence. The company is trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Pathway is led by co-founder & CEO Zuzanna Stamirowska, a complexity scientist who assembled a team of AI pioneers, including CTO Jan Chorowski, who was the first person to apply Attention to speech and worked with Nobel laureate Geoff Hinton at Google Brain, as well as CSO Adrian Kosowski, a leading computer scientist and quantum physicist who obtained his PhD at the age of 20. The company is backed by leading investors and advisors, including TQ Ventures and Lukasz Kaiser, co-author of the Transformer ("the T" in ChatGPT) and a key researcher behind OpenAI's reasoning models. Pathway is headquartered in Palo Alto, California.

The opportunity

We are looking for a Senior ML Infrastructure / DevOps Engineer who loves Linux, distributed systems, and scaling GPU clusters more than fiddling with notebooks. You will own the infrastructure that powers our ML training and inference workloads across multiple cloud providers, from bare-bones Linux to container orchestration and CI/CD. You will sit close to the R&D team, but your home is production infrastructure: clusters, networks, storage, observability, and automation. Your work will directly determine how fast we can train, ship, and iterate on models.

Why this role is special

- Operate and scale GPU-heavy clusters used daily by the R&D team for large-scale training and low-latency inference.
- Design, build, and automate the ML platform rather than just run pre-defined playbooks.
- Work across multiple major cloud providers, solving interesting problems in networking, scheduling, and cost/performance optimization at scale.

You Will

- Design, operate, and scale GPU and CPU clusters for ML training and inference (Slurm, Kubernetes, autoscaling, queueing, quota management).
- Automate infrastructure provisioning and configuration using infrastructure-as-code (Terraform, CloudFormation, cluster tooling) and configuration management.
- Build and maintain robust ML pipelines (data ingestion, training, evaluation, deployment) with strong guarantees around reproducibility, traceability, and rollback.
- Implement and evolve ML-centric CI/CD: testing, packaging, and deployment of models and services.
- Own monitoring, logging, and alerting across training and serving: GPU/CPU utilization, latency, throughput, failures, and data/model drift (Grafana, Prometheus, Loki, CloudWatch).
- Work with terabyte-scale datasets and the associated storage, networking, and performance challenges.
- Partner closely with ML engineers and researchers to productionize their work, translating experimental setups into robust, scalable systems.
- Participate in the on-call rotation for critical ML infrastructure and lead incident response and post-mortems when things break.

You are

- A former or current Linux / systems / network administrator who is comfortable living in the shell and debugging at the OS and network layers (systemd, filesystems, iptables/security groups, DNS, TLS, routing).
- An engineer with 5+ years of experience in DevOps/SRE/Platform/Infrastructure roles running production systems, ideally with high-performance or ML workloads.
- Deeply familiar with Linux as a daily driver, including shell scripting and configuration of clusters and services.

What we are looking for

- Strong experience with workload management, containerization, and orchestration (Slurm, Docker, Kubernetes) in production environments.
- Solid understanding of CI/CD tools and workflows (GitHub Actions, GitLab CI, Jenkins, etc.), including building pipelines from scratch.
- Hands-on cloud infrastructure experience (AWS, GCP, Azure), especially around GPU instances, VPC/networking, storage, and managed ML services (e.g., SageMaker HyperPod, Vertex AI).
- Proficiency with infrastructure as code (Terraform, CloudFormation, or similar) and a bias toward automation over manual operations.
- Experience with monitoring and logging stacks (Grafana, Prometheus, Loki, CloudWatch, or equivalents).
- Familiarity with ML pipeline and experiment orchestration tools (MLflow, Kubeflow, Airflow, Metaflow, etc.) and with model/version management.
- Solid programming skills in Python, plus the ability to read and debug code that uses common ML libraries (PyTorch, TensorFlow) even if you are not a full-time model developer.
- A strong ownership mindset, comfort with ambiguity, and enthusiasm for scaling and hardening critical infrastructure in an ML-heavy environment.
- Willingness to learn.

Why You Should Apply

- Intellectually stimulating work environment. Be a pioneer: you get to work with real-time data processing & AI.
- Work in one of the hottest AI startups, with exciting career prospects. Team members are distributed across the world.
- Responsibility and the ability to make a significant contribution to the company's success.
- Inclusive workplace culture.

Further details

- Type of contract: Permanent employment contract.
- Preferred joining date: Immediate.
- Compensation: Based on profile and location.
- Location: Remote work, with the possibility to work or meet with other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, United States, and Canada will be considered.

Full Time · Remote · Direct · DevOps
Salary not disclosed · 4 months ago

Cloud Architect

Rakuten Symphony

About Organization

The Cloud Customer Success Department at Rakuten Symphony is responsible for ensuring customers achieve maximum value from their cloud investments. The team provides strategic guidance, technical expertise, and ongoing support to help customers adopt, optimize, and successfully operate cloud solutions. Our Customer Success team is a global team with members located in Japan, India, and the US. The team operates in a matrixed structure with clear reporting lines and cross-functional collaboration. It consists of 40+ members, including architects, engineers, and program managers. Our personnel structure blends senior technical leaders with skilled individual contributors to ensure strong delivery, knowledge sharing, and operational continuity.

Job Duties

We are hiring a Cloud Architect to meet the growing technical and operational demands within the Rakuten Cloud Platform (RCP) Cloud. This role provides essential expertise to design secure, scalable architectures, reduce risk, optimize performance, and support complex cloud initiatives. Your contributions will strengthen platform reliability and enable sustainable business and customer growth. You will be part of the Customer Success team, delivering RMI (Rakuten Mobile Infrastructure) support for Rakuten Cloud and other customers.

Key Responsibilities

- Design secure and scalable cloud architectures within the RCP Cloud.
- Provide essential expertise to reduce risk and optimize performance for cloud solutions.
- Support complex cloud initiatives, ensuring alignment with customer objectives and business outcomes.
- Deliver RMI support for Rakuten Cloud and other customers.
- Collaborate with customers and internal teams to ensure successful adoption, optimization, and operation of cloud solutions.

Minimum Qualifications

Technical Expertise:

- Expertise in Linux system administration with strong troubleshooting and problem-solving capabilities.
- Certified Kubernetes Administrator (CKA) or equivalent hands-on experience with Kubernetes.
- Proficiency in automation and configuration management using Ansible.
- Solid understanding of cloud services, including compute, storage, networking, and automation frameworks.
- Understanding of Identity and Access Management and security vulnerability mitigation.
- Experience with virtualization technologies and containerized environments.
- Strong scripting skills in Bash, Python, and other automation tools to streamline operations.

Mindset:

- Passion for innovation, with an interest in exploring emerging cloud technologies, tools, and processes to drive business value.
- Self-driven and a quick learner, capable of delivering results independently with minimal supervision.

Education:

- A Bachelor's Degree in Computer Science, Engineering, or equivalent experience.

Preferred Qualifications

- B.E., B.Tech, or BCA (Bachelor of Engineering, Bachelor of Technology, Bachelor of Computer Applications).
- Languages: English (Overall - 4 - Fluent).

It's about TIME

WE ARE DRIVING DISRUPTIVE CHANGES, AND TO DISRUPT, WE HAVE TO DO THINGS A BIT DIFFERENTLY. At Rakuten Symphony, our vision is to democratize connectivity everywhere. We are reimagining telecom, based on a modern approach to operations and a revolutionary new platform, fusing the best attributes of hyperscale business with the best of telecom to provide flexible, scalable, reliable, secure, affordable communications at low cost and high quality. By driving innovation in the telecom industry, we aim to empower our customers to become true digital platforms. IT'S TIME TO DISRUPT TELECOM. LET'S DO IT TOGETHER, AS FAST AS WE CAN.

Full Time · Direct · DevOps
Salary not disclosed · 2 months ago