About the Role
We are seeking a Senior AI Infrastructure Engineer to design, build, and scale the high-performance AI platform powering our autonomous driving models. While researchers focus on developing perception, planning, and world models, you will own the underlying infrastructure that enables distributed training, experiment tracking, and seamless model deployment. You will bridge the gap between research and production, ensuring our AI stack is scalable, resilient, and highly efficient.
This role is onsite 5 days a week at our Mountain View, CA office!
What you'll do
Distributed Training & ML Systems Support
Scale Research Workloads: Enable researchers to scale complex models (VLA, World Models) across multi-node setups using PyTorch Distributed and Ray Train (see the sketch after this list).
Performance Optimization: Architect and optimize multi-GPU setups, applying efficient model-parallel and data-parallel techniques across H100/A100 clusters.
Networking & Hardware Tuning: Optimize low-level communication (e.g., NCCL tuning, InfiniBand, or RoCE v2) to minimize latency for 3D Gaussian Splatting (3DGS) and large-scale training.
Intelligent Resource Scheduling: Optimize hardware utilization and cost-efficiency through Kubernetes-native GPU scheduling (NVIDIA GPU Operator, Kubeflow).
Inference Performance Engineering: Deploy and scale optimized model artifacts using TensorRT, ONNX Runtime, and Triton Inference Server, fine-tuning pipelines for both real-time and batch processing.
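For illustration only, here is a minimal sketch of the kind of multi-node training entrypoint this work supports, assuming a torchrun launch on an NCCL-backed GPU cluster; the toy model and loop are hypothetical stand-ins, not our actual training stack:

```python
# Minimal multi-node DDP entrypoint, launched via torchrun, e.g.:
#   torchrun --nnodes=2 --nproc-per-node=8 train.py
# NCCL is the backend typically used on H100/A100 clusters.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")  # env:// rendezvous provided by torchrun
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # stand-in for a VLA/world model
    model = DDP(model, device_ids=[local_rank])

    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for _ in range(10):  # toy loop; real jobs stream sensor data
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        opt.zero_grad()
        loss.backward()  # gradients are all-reduced over NCCL here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same entrypoint shape extends to FSDP or Ray Train by swapping the wrapper around the model; the launch and rendezvous plumbing stays the same.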
Agentic Infrastructure & Automation
Self-Healing AI Infrastructure: Architect and deploy autonomous AI agents (LangGraph, CrewAI, or AutoGen) to monitor GPU cluster health, enabling automated real-time triage of hardware failures and NCCL timeouts (sketched after this list).
Agentic DevOps & CI/CD: Develop agent-driven automation, such as Agentic PR Reviewers for infrastructure code and AI agents that proactively suggest model-specific Kubernetes resource optimizations.
Agentic Data Curation: Support researchers in building "Data Machines" where AI agents autonomously curate, label, and verify high-priority edge cases from raw data.
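As a hedged illustration of an agentic triage loop, here is a sketch using LangGraph's StateGraph API; the health check and remediation steps are hypothetical placeholders (a real agent would query DCGM/Prometheus and call the Kubernetes API):

```python
# Sketch of a tiny triage graph: inspect a (mocked) cluster health signal,
# then route to remediation or finish. Node names and checks are hypothetical.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class TriageState(TypedDict):
    node: str
    healthy: bool

def check_gpu_health(state: TriageState) -> TriageState:
    # Placeholder: in practice, query DCGM exporters or Prometheus.
    state["healthy"] = state["node"] != "gpu-node-7"
    return state

def cordon_node(state: TriageState) -> TriageState:
    # Placeholder: in practice, cordon/drain via the Kubernetes API.
    print(f"cordoning {state['node']} after suspected NCCL timeout")
    return state

graph = StateGraph(TriageState)
graph.add_node("check", check_gpu_health)
graph.add_node("cordon", cordon_node)
graph.set_entry_point("check")
graph.add_conditional_edges(
    "check", lambda s: "cordon" if not s["healthy"] else END
)
graph.add_edge("cordon", END)

app = graph.compile()
app.invoke({"node": "gpu-node-7", "healthy": True})
```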
Model Management & Lifecycle (MLOps)
Automated Lifecycle Management: Design and maintain ML infrastructure leveraging MLflow, Argo Workflows, and Kubernetes to automate the end-to-end model lifecycle (see the tracking sketch after this list).
Experiment & Model Tracking: Integrate feature stores and experiment tracking systems to provide a robust system of record for every model iteration.
Deployment Strategies: Implement robust serving strategies, including A/B testing, shadow deployments, and automated rollbacks.
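As a minimal sketch of the experiment-tracking side, assuming a reachable MLflow tracking server (the URI, experiment name, and values below are illustrative):

```python
# Log params, metrics, and an artifact so every iteration has a system of record.
import mlflow

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # hypothetical endpoint
mlflow.set_experiment("world-model-pretrain")           # illustrative name

with mlflow.start_run(run_name="fsdp-bs4096"):
    mlflow.log_params({"lr": 1e-4, "global_batch": 4096})
    for step, loss in enumerate([2.3, 1.9, 1.6]):  # stand-in for a training loop
        mlflow.log_metric("train_loss", loss, step=step)
    mlflow.log_artifact("config.yaml")  # assumes this file exists locally
```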
Cloud-Native Foundations & Data Integration
Infrastructure as Code: Drive the "Everything as Code" philosophy using Terraform and Helm.
Integrated Data Factories: Collaborate with data engineering teams to scale high-bandwidth ETL pipelines using Apache Airflow, Kafka, and Spark, ensuring seamless data flow from raw sensor logs to optimized storage in S3, GCS, or Delta Lake (a minimal DAG sketch follows below).
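Here is a minimal sketch of the kind of Airflow DAG involved, assuming Airflow 2.4+; the task bodies are hypothetical placeholders for real Spark/Kafka work:

```python
# Two-stage sensor-log ETL: extract raw logs, then compact to Delta Lake.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_logs(**_):
    print("pull raw sensor logs from the ingest bucket")  # placeholder

def compact_to_delta(**_):
    print("submit Spark job writing optimized Delta Lake tables")  # placeholder

with DAG(
    dag_id="sensor_log_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_logs", python_callable=extract_logs)
    compact = PythonOperator(task_id="compact_to_delta", python_callable=compact_to_delta)
    extract >> compact
```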
Monitoring & Observability
Deep Stack Observability: Maintain deep visibility into platform health using Prometheus, Grafana, OpenTelemetry, and the ELK Stack, tracking low-level infrastructure signals alongside high-level ML metrics such as training convergence and throughput.
AI-Specific Metrics & Drift: Define and monitor critical ML system KPIs, including model latency, inference throughput, and feature drift detection (see the exporter sketch below).
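As an illustrative sketch, ML-specific metrics can be exported with prometheus_client for Prometheus to scrape and Grafana to chart; the metric names and values here are hypothetical:

```python
# Expose inference latency and a drift score on a /metrics endpoint.
import random
import time
from prometheus_client import Gauge, Histogram, start_http_server

INFER_LATENCY = Histogram("model_inference_latency_seconds", "Per-request latency")
FEATURE_DRIFT = Gauge("feature_drift_score", "Drift score vs. training distribution")

start_http_server(9100)  # Prometheus scrapes this port

while True:
    with INFER_LATENCY.time():
        time.sleep(random.uniform(0.01, 0.05))  # stand-in for an inference call
    FEATURE_DRIFT.set(random.random())  # stand-in for a real drift estimator
    time.sleep(1)
```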
What we're looking for
Experience: 5+ years in ML infrastructure, MLOps, or DevOps supporting high-scale compute environments.
ML Expertise: Deep understanding of multi-GPU training strategies (FSDP, DeepSpeed, Ray Train) and high-performance networking (NCCL, InfiniBand).
Infrastructure Automation: Mastery of Kubernetes, Terraform, and Helm, with a focus on GPU-native orchestration.
AI Agent Frameworks: Proven experience building or supporting Agentic Workflows for infrastructure or data automation (e.g., using LLMs to drive DevOps tasks).
Platform & Containerization: Expertise in MLflow, Argo Workflows, and Docker.
Data & CI/CD: Proficiency in Apache Airflow, Kafka, Spark, and GitOps automation.
Core Skills: Proficiency in Python and Bash; experience with Go or Rust is a plus.