Member of Technical Staff - Compute Infrastructure

xAI
Palo Alto, CA · Apr 10, 2026 · Posted 1 day ago
Tech Stack

Kubernetes, KVM, Xen, QEMU, Kata, Firecracker, gVisor, Sysbox

Must-Have Requirements

  • Deep expertise in virtualization technologies (KVM, Xen, QEMU)
  • Advanced containerization/sandboxing experience (Kata, Firecracker, gVisor, Sysbox, or equivalent)
  • Strong proficiency in C/C++ and Rust
  • Proven track record profiling, debugging, and optimizing complex system-level performance issues
  • Deep knowledge of Linux kernel internals, resource management, scheduling, memory management
  • Hands-on experience building or significantly enhancing distributed compute platforms at scale

Nice to Have

  • Linux kernel development experience
  • Hypervisor extensions experience
  • Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale)
  • Experience with custom runtimes and isolation techniques for AI compute
  • Familiarity with performance tools, tracing, and debugging in production distributed environments

Description

ABOUT XAI

xAI’s mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. Our team is small, highly motivated, and focused on engineering excellence. This organization is for individuals who appreciate challenging themselves and thrive on curiosity. We operate with a flat organizational structure: all employees are expected to be hands-on and to contribute directly to the company’s mission, and leadership is given to those who show initiative and consistently deliver excellence. Work ethic and strong prioritization skills are important, as are strong communication skills; employees should be able to share knowledge with their teammates concisely and accurately.

ABOUT THE ROLE

The Compute Infrastructure team at xAI is responsible for designing, building, and operating the massive-scale clusters and orchestration platforms that power frontier AI training, inference, and agent workloads at unprecedented scale. In this role, you will push the boundaries of container orchestration far beyond existing systems like Kubernetes, manage exascale compute resources, optimize for high-performance training runs and production serving, and collaborate closely with research and systems teams to deliver reliable, ultra-scalable infrastructure that enables xAI's next-generation models and applications.

RESPONSIBILITIES

  • Build and manage massive-scale clusters to host, persist, train, and serve AI workloads with extreme reliability and performance.
  • Design, develop, and extend an in-house container orchestration platform that achieves superior scalability, isolation, resource efficiency, and fault-tolerance compared to off-the-shelf solutions.
  • Collaborate with research teams to architect and optimize compute clusters specifically for large-scale training runs, inference services, and real-time applications.
  • Profile, debug, and resolve complex system-level performance bottlenecks, resource contention, scheduling issues, and reliability problems across the full stack.
  • Own end-to-end infrastructure initiatives with first-principles design, rigorous testing, automation, and continuous optimization to support frontier AI compute demands.

BASIC QUALIFICATIONS

  • Deep expertise in virtualization technologies (KVM, Xen, QEMU) and advanced containerization/sandboxing (Kata, Firecracker, gVisor, Sysbox, or equivalent).
  • Strong proficiency in systems programming languages such as C/C++ and Rust.
  • Proven track record profiling, debugging, and optimizing complex system-level performance issues, with deep knowledge of Linux kernel internals, resource management, scheduling, memory management, and low-level engineering.
  • Hands-on experience building or significantly enhancing distributed compute platforms, orchestration systems, or high-performance infrastructure at scale.
  • Ability to thrive in a fast-paced, meritocratic environment with full ownership, high standards, and a focus on rigorous execution.

PREFERRED SKILLS AND EXPERIENCE

  • Experience in Linux kernel development, hypervisor extensions, or low-level system programming for compute-intensive workloads.
  • Proven track record operating or designing large-scale AI training/inference clusters (GPU/TPU scale).
  • Experience with custom runtimes, isolation techniques, or bespoke platforms for specialized AI compute.
  • Familiarity with performance tools, tracing, and debugging in production distributed environments.

COMPENSATION AND BENEFITS

$180,000 - $440,000 USD

Base salary is just one part of our total rewards package at xAI, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short- and long-term disability insurance, life insurance, and various other discounts and perks. xAI is an equal opportunity employer. For details on data processing, view our Recruitment Privacy Notice.