
Senior Software Engineer - Search Runtime

Nebius
Amsterdam, Netherlands; London, United KingdomApr 10, 2026·Posted 1 day ago

Description

Why work at Nebius

Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.

Where we work

Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 1400 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.

The Product

In a rapidly evolving world, trust in AI depends on AI agents being grounded in fresh, verified real-world data. Search is the foundation that makes this possible. We are building an agent-native search platform designed specifically for AI systems rather than human users. Our product provides programmatic, low-latency, and observable search APIs that AI agents use to retrieve, filter, and reason over real-world information at scale.

The Role

We are looking for a Senior Software Engineer to work on the runtime systems of a novel search engine tailored for agentic AI consumption. In this role, you will focus on building low-latency, high-throughput systems that serve search queries in real time. You will work on the critical path of user-facing requests, where performance, predictability, and efficiency directly impact product quality. You will design and operate systems that handle thousands of requests per second under strict latency budgets, optimising every layer from request handling to data access and response assembly.
In this position, your responsibilities will be to:

- Design, implement, and operate core runtime services for serving search queries at scale
- Build and optimise request flows, including query processing, retrieval orchestration, and response assembly under strict latency budgets
- Develop systems that maintain performance and predictability under high load
- Optimise CPU, memory, and data access patterns in performance-critical paths
- Ensure reliability, observability, and predictability across production services
- Build well-tested systems with clear responsibilities and interaction contracts, while remaining flexible as the architecture evolves
- Define and implement observability primitives, including structured logs, metrics, traces, and latency breakdowns
- Monitor throughput, latency, and resource usage, and drive improvements in performance and cost efficiency
- Collaborate with indexing and ML teams to integrate retrieval and ranking components, keeping ML logic decoupled from core system internals
- Support experimentation and iteration through controlled rollouts and rigorous benchmarking

You may be a good fit if you

- Have 5+ years of experience as a software engineer working on production backend systems
- Have strong hands-on expertise in C++ or Rust in real-world, high-load services
- Have built and operated high-load, low-latency user-facing systems handling thousands of RPS under strict latency constraints
- Understand performance at a systems level: CPU, memory, networking, and data access
- Have operated your own code in production: deployed it, debugged incidents, and rolled back changes when necessary
- Think end-to-end about request flows rather than staying within isolated components
- Can balance correctness, latency, and development velocity, making pragmatic tradeoffs when scope or time requires
- Collaborate effectively across engineering, ML, and product teams, communicating clearly in cross-functional settings

Strong candidates may also have experience with

- DBMS internals (open source or SaaS) and cloud infrastructure
- High-load web applications or large-scale APIs
- Performance-critical systems such as trading platforms or real-time data pipelines
- Low-level performance tuning and hardware-level optimisation
- Open-source contributions or active involvement in the engineering community
- Competitive programming or CTF participation
- SHAD or similar advanced technical programmes
- Conference talks or technical publications

We conduct coding interviews as part of the process.

What we offer

- Competitive salary and comprehensive benefits package
- Opportunities for professional growth within Nebius
- Flexible working arrangements
- A dynamic and collaborative work environment that values initiative and innovation

We’re growing and expanding our products every day. If you’re up to the challenge and are excited about AI and ML as much as we are, join us!