AI Factory Customer Engineer
Description
About the Company
Armada is a full-stack edge infrastructure company delivering compute, connectivity, and sovereign AI/ML to some of the world's most remote places. Named one of Fast Company's Most Innovative Companies, Armada has solutions deployed in over 60 countries for organizations ranging from energy to defense. With over $200 million in funding, Armada is backed by top investors such as Microsoft (M12) and Founders Fund, and has strategic partnerships with Starlink, Skydio, and NVIDIA. We are looking for the most brilliant minds in the world to join us. Working at Armada means taking ownership, driving autonomy, and delivering impact. You'll tackle challenges that haven't been solved before and help build something transformative from the ground up. What you do here will not only define your career but also further Armada's mission to bridge the digital divide for customers around the world.
About the Role
The AI Factory Customer Engineer plays a pivotal role in bridging the gap between customers and Armada's Product and Engineering teams. This role requires deep technical credibility, hands-on infrastructure experience, and strong interpersonal skills to translate complex AI infrastructure and data center architectures into clear, practical solutions for customers.

This role goes beyond traditional pre-sales. The Customer Engineer acts as the technical interface between customers and Armada Engineering, ensuring customer requirements, constraints, and feedback are accurately represented and incorporated into solution design and future platform evolution.

Armada's AI Factory serves a broad and growing set of customers across enterprise, industrial, and infrastructure-heavy verticals. A meaningful and growing subset of strategic targets includes AI-focused datacenter co-location providers and Neocloud operators, a primary segment where Armada's modular, liquid-cooled AI Factory is a compelling fit for capacity expansion and rapid deployment. Additionally, renewable energy operators who own or control significant land, power, and stranded energy assets are increasingly positioned to host or operate AI compute infrastructure at scale.

The ideal candidate brings an engineering-first mindset; strong data center and infrastructure expertise, particularly around liquid cooling, modular data centers, and NVIDIA architecture; and the energy, curiosity, and positivity required to operate in fast-moving, ambiguous environments. This role is instrumental in driving adoption, trust, and long-term success for Armada's AI Factory offerings.

What You'll Do — Key Responsibilities
Core Technical Customer Engagement
- Act as the primary technical interface between customers and Armada's Product and Engineering teams, translating real-world requirements into actionable technical inputs.
- Provide expert technical guidance on Armada's AI Factory products, including Modular Data Centers (MDCs), liquid-cooled data center designs, and GPU-based systems from partners such as NVIDIA.
- Understand customer workloads deeply, both current and future, and advise on rack-level design, system architecture, and deployment tradeoffs.
- Lead technical demonstrations, proofs-of-concept, and workshops, tailoring content to customer constraints and operational realities.
- Cultivate trusted-advisor relationships with customer engineering, infrastructure, IT, OT, and security stakeholders.
- Enable field and sales teams by simplifying complex technical concepts into clear, customer-relevant narratives.
- Bring a high-ownership, self-starting mindset with the energy and curiosity to fill gaps, solve problems, and drive outcomes end-to-end.

Renewable Energy & Land-and-Power Operator Engagement
- Engage renewable energy operators, independent power producers, and upstream oil and gas companies (e.g., Chord Energy, Devon Energy, and similar) as a strategic and growing customer segment for Armada's AI Factory — helping them understand how their existing power generation capacity, land assets, and fiber infrastructure make them natural hosts for modular AI compute deployments.
- Articulate the value proposition of Armada's MDC platform to energy sector operators: rapid deployment timelines, self-contained power and cooling architecture, scalable capacity, and the ability to monetize stranded or excess power assets through AI compute hosting.
- Develop familiarity with the operational model of energy companies — power generation economics, land and lease structures, regulatory constraints, and the OT/IT boundary — and translate these into informed AI Factory deployment conversations.
- Navigate both the OT and IT stakeholder landscape within energy companies, building credibility with infrastructure, operations, and executive teams alike.
- Identify and communicate the specific deployment considerations relevant to energy sector environments: remote or off-grid locations, variable power availability, environmental and permitting requirements, and physical security constraints.

AI Datacenter Co-Lo & Neocloud Engagement
- Engage AI-focused datacenter co-location operators and Neocloud providers as a secondary but relevant customer segment, positioning Armada's AI Factory as a solution for rapid capacity expansion, modular overflow deployments, and high-density liquid-cooled compute environments.
- Develop sufficient familiarity with co-lo and Neocloud operational models — GPU cluster economics, high-density power and cooling requirements, multi-tenant compute, and deployment velocity expectations — to hold credible technical conversations and scope solutions appropriately.
- Collaborate with Sales to identify where AI Factory's modular, rapidly deployable infrastructure is most differentiated against co-lo and Neocloud alternatives, and tailor technical positioning accordingly.

Cross-Functional Collaboration & Field Strategy
- Collaborate cross-functionally with Sales, Product, and Engineering to support customer engagements, technical scoping, and solution design across all target verticals.
- Stay current on industry trends related to AI infrastructure, energy sector AI adoption, data center design, liquid cooling, and modular systems — and share insights internally to shape future product strategy.
- Bring positive energy, curiosity, and a collaborative mindset to every engagement — this role values technical excellence and having fun while building something meaningful.

Required Qualifications
- Bachelor's degree in Engineering, Computer Science, or a related field (or equivalent hands-on engineering experience). Advanced engineering degrees are a plus.
- 5+ years of relevant experience in data center engineering, infrastructure engineering, pre-sales, sales engineering, or solution architecture roles.
  Years of experience are less important than depth and relevance to data center and infrastructure topics.
- Strong understanding of compute, networking, and storage fundamentals, including AI and GPU-based systems (e.g., NVIDIA DGX, HGX, MGX, or similar).
- Hands-on experience with data center systems, including:
  - Mechanical, electrical, and plumbing systems
  - Cooling architectures, with direct experience or strong familiarity with liquid-cooled data centers and/or modular data centers (MDCs)
  - Racking and rack-level design considerations
- Engineering mindset with curiosity about what lives inside the rack, including openness to moving beyond current reference designs (e.g., B300 / GB300) as architectures evolve.
- Familiarity with cloud platforms, containerization, virtualization, and modern enterprise infrastructure.
- Ability to navigate and build credibility with both OT and IT stakeholders — particularly relevant when engaging energy, industrial, and infrastructure-heavy customers.
- Strong troubleshooting skills and comfort operating in complex, live customer environments.
- Excellent communication skills, with the ability to engage both highly technical engineers and non-technical stakeholders.
- Willingness to travel as required.

Demonstrable Expertise — Technical Focus Areas
Primary Focus: Modular & Liquid-Cooled Data Center Architecture
- Expertise in modular or micro data center design and deployment.
- Experience integrating compute, storage, and networking in space-, power-, and thermally constrained environments.
- Knowledge of liquid cooling architectures, rack-level thermal management, and power density considerations.
- Ability to translate customer workload requirements into scalable, production-ready infrastructure designs.

Secondary Exposure (Nice to Have): Energy Sector & Industrial Infrastructure
- Familiarity with power generation, distribution, and energy economics — particularly relevant to operators considering AI compute hosting on owned or leased power assets.
- Understanding of remote or off-grid infrastructure deployment: power availability constraints, site connectivity, environmental permitting, and physical security considerations.
- Awareness of the OT/IT landscape in energy and industrial environments, including SCADA systems, operational reliability requirements, and safety-driven design constraints.
- Background in or exposure to oil and gas, upstream energy, independent power production, or renewable energy operations.

Secondary Exposure (Nice to Have): AI & Edge AI Infrastructure
- High-level understanding of AI workloads and GPU clusters.
- Familiarity with AI frameworks and real-world deployment tradeoffs.
- Awareness of edge and distributed AI architectures.

Secondary Exposure (Nice to Have): Data Center Systems
- Advanced exposure to electrical systems (PDUs, UPS, subsystems).
- Experience with water-based cooling systems, controls, and monitoring.
- Understanding of operational reliability, redundancy, and resilience.
Preferred Qualifications
- Experience supporting customers in regulated or mission-critical environments, including energy, utilities, oil and gas, or industrial infrastructure.
- Background in or direct exposure to renewable energy operators, independent power producers, or upstream oil and gas companies as a vendor, partner, or customer.
- Familiarity with how energy and land-and-power operators evaluate infrastructure hosting and co-location decisions, including site selection, power contracting, and capital deployment timelines.
- Background in Data Centers, MSPs, Telcos, Utilities, or industrial infrastructure.
- Experience working with physical infrastructure products, including logistics and deployment constraints.
- Some familiarity with AI datacenter co-location or Neocloud operating models, GPU cluster economics, or high-density compute environments.
- Familiarity with structured sales methodologies (MEDDPICC, Challenger, Command of the Message).

Compensation & Benefits
For U.S.-based candidates
To ensure fairness and transparency, the starting base salary range for this role for candidates in the U.S. is listed below, varying based on location, experience, skills, and qualifications. In addition to base salary, this role will also be offered equity and subsidized benefits (details available upon request).

Benefits
- Competitive base salary and equity
- Medical, dental, and vision (subsidized cost)
- Health savings accounts (HSA), flexible spending accounts (FSA), and dependent care FSAs (DCFSA)
- Retirement plan options, including 401(k) and Roth 401(k)
- Unlimited paid time off (PTO)
- 14 paid company holidays per year

Compensation
$160,000 – $200,000 USD

#LI-Remote #HP-1
You're a Great Fit if You're
- A go-getter with a growth mindset. You're intellectually curious, have strong business acumen, and actively seek opportunities to build relevant skills and knowledge.
- A detail-oriented problem-solver. You can independently gather information, solve problems efficiently, and deliver results with a "get-it-done" attitude.
- Someone who thrives in a fast-paced environment. You're energized by an entrepreneurial spirit, capable of working quickly, and excited to contribute to a growing company.
- A collaborative team player. You focus on business success and are motivated by team accomplishment over personal agenda.
- Highly organized and results-driven. Strong prioritization skills and a dedicated work ethic are essential for you.

Equal Opportunity Statement
At Armada, we are committed to fostering a work environment where everyone is given equal opportunities to thrive. As an equal opportunity employer, we strictly prohibit discrimination or harassment based on race, color, gender, religion, sexual orientation, national origin, disability, genetic information, pregnancy, or any other characteristic protected by law. This policy applies to all employment decisions, including hiring, promotions, and compensation. Our hiring is guided by qualifications, merit, and business needs at the time.

Unsolicited Resumes and Candidates
Armada does not accept unsolicited resumes or candidate submissions from external agencies or recruiters. All candidates must apply directly through our careers page. Any resumes submitted by agencies without a prior signed agreement will be considered unsolicited, and Armada will not be obligated to pay any fees.