Job Title:
AI Data Center Power Architect
Job Location:
Fremont, CA
Business Unit:
HPCBU
Job Summary
The Power Architect is responsible for developing and defining the overall architecture for power connectors, power whips, and liquid-cooled busbar systems within AI servers and racks, ensuring the designs meet performance, reliability, and scalability requirements. The architect will also ensure designs adhere to relevant industry standards and regulations, including electrical safety and thermal management.
Additionally, the role involves collaborating with cross-functional teams, including Sales, PLM (Product Lifecycle Management), and R&D, to create product roadmaps and architectures and present them to customers. The architect will stay current on technological advancements and emerging trends in power systems and apply those insights to optimize designs. Participation in industry organizations such as OCP (Open Compute Project) and OIF (Optical Internetworking Forum) is required to drive business opportunities and customer partnerships.
Essential Duties and Responsibilities
Design & Optimization:
Architect power distribution systems (connectors, cables, busbars, whips) for AI servers, ensuring scalability, reliability, and thermal efficiency. Focus on high-current applications (e.g., 130–3,000 A) and alignment precision (within 1.00 mm) to prevent damage in dense GPU/TPU environments.
Power Estimation and Modeling:
Build and maintain accurate power estimation models and tools for AI server- and rack-level systems. Conduct performance-versus-power analyses, support early architectural exploration, and collaborate with hardware and software teams to improve Perf/Watt metrics and total cost of ownership for next-generation AI platforms (an illustrative estimation sketch appears after this list).
Compliance & Standards:
Ensure designs meet IEC 61439, applicable IEEE standards, and safety regulations, including seismic and thermal management requirements (e.g., −40°C to +125°C operating range).
Cross-Functional Collaboration:
Partner with Sales, PLM, and R&D to align product roadmaps with AI infrastructure demands (e.g., NVIDIA GPU power needs) and customer requirements.
Innovation & Trends:
Integrate emerging technologies such as liquid-cooled busbars, low-GWP (global warming potential) cooling, and advanced materials (e.g., high-conductivity copper alloys) to enhance power density and efficiency.
Industry Engagement:
Represent the organization in consortia like OCP, OIF, and IEEE to drive standardization and partnerships, leveraging reference architectures for rapid deployment.
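For illustration only, the sketch below shows the kind of rack-level power estimation and Perf/Watt analysis this role involves, written in Python. All component names, counts, and wattage figures are hypothetical placeholders, not data from any specific product or platform.

# Minimal sketch of a rack-level power estimation model (hypothetical figures).
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    count: int
    watts_each: float  # assumed per-unit draw (nameplate or measured)

def rack_power_watts(components, distribution_efficiency=0.97):
    # Sum the IT load, then account for assumed distribution losses
    # across busbars, whips, and connectors.
    it_load = sum(c.count * c.watts_each for c in components)
    return it_load / distribution_efficiency

def perf_per_watt(total_tflops, total_watts):
    # Performance-per-watt metric used to compare candidate architectures.
    return total_tflops / total_watts

if __name__ == "__main__":
    rack = [
        Component("accelerator", 72, 1000.0),      # hypothetical GPU/TPU modules
        Component("cpu", 36, 350.0),
        Component("cooling_overhead", 1, 4000.0),  # assumed pumps/fans
    ]
    watts = rack_power_watts(rack)
    print(f"Estimated rack load: {watts / 1000:.1f} kW")
    print(f"Perf/Watt: {perf_per_watt(72 * 1000.0, watts):.3f} TFLOPS/W")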
Qualifications
Education
Minimum: Bachelor’s degree in Electrical Engineering, Mechanical Engineering, Power Systems, or related field
Preferred: Master’s/PhD in Power Electronics, Thermal Engineering, or Applied Physics
Experience
10–15 years in power architecture for high-density systems (100 kW–1 MW per rack), including AI/ML infrastructure
Expertise in power connector/busbar design (e.g., liquid-cooled, solder-tail, screw-mount)
Experience with high-current systems (up to 3,000 A) and thermal management, such as liquid cooling, for AI workloads
Proficiency in IEC/IEEE standards, CAD tools, and system-level validation (e.g., 25kA short-circuit rating)
Experience collaborating with OEMs, chip vendors (NVIDIA, AMD, Intel), and data center operators to optimize AI server and rack power delivery
Technical Skills
Knowledge of dynamic workload management (e.g., BBU/power shelf)
Familiarity with AI reference architectures (e.g., NVIDIA designs supporting 132 kW/rack)
Soft Skills
Strong communication skills to bridge technical teams and business stakeholders
Please send your resume to