50 kW orbital compute trade study
Space Server
A constraints-first engineering narrative around orbital AI infrastructure: power budgeting, GPU block sizing, thermal rejection, launch packaging, and the discipline to lock assumptions before pretending the architecture is real.
systems · thermal · architecture · compute · trade-study
Problem
- Most “AI in space” conversations collapse into hand-waving almost immediately. The real problem is brutal: if you want meaningful compute in orbit, you must close the loop on power, thermal rejection, packaging, fault domains, and launch geometry at the same time.
- This case study starts in the right place: not with hype, but with a locked 50 kW electrical bus and a requirement to reason from there.
Locked Assumptions
- The compute baseline locks a 50 kW continuous node budget and treats H200-class accelerators as the payload class, in the 600–700 W per GPU band.
- The design abstracts repeatable GPU Blocks: 10 GPUs per block, plus host CPU, memory, local NVMe, and fabric. That is a real systems move because it creates a unit of scaling instead of free-form wishcasting.
- The baseline module target is roughly 6 GPU Blocks or about 60 GPUs, with payload trimmed to stay inside the electrical budget once pumps, avionics, thermal loops, and housekeeping are accounted for.
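The block-level arithmetic above can be sketched in a few lines. The 650 W per-GPU figure (midpoint of the stated 600–700 W band), the per-block host overhead, and the parasitic allowance are illustrative placeholders, not numbers from the study:

```python
# Sketch of the GPU Block power roll-up under the locked assumptions.
# HOST_OVERHEAD_W and the parasitics argument are assumed values for
# illustration; the study only locks the bus budget and block size.

GPU_POWER_W = 650          # midpoint of the stated 600-700 W H200-class band
GPUS_PER_BLOCK = 10        # locked GPU Block size
HOST_OVERHEAD_W = 1_000    # assumed CPU + memory + NVMe + fabric per block
BUS_BUDGET_W = 50_000      # locked continuous node budget

def block_power_w() -> int:
    """Electrical draw of one GPU Block, compute side only."""
    return GPUS_PER_BLOCK * GPU_POWER_W + HOST_OVERHEAD_W

def blocks_that_fit(parasitics_w: float) -> int:
    """Whole GPU Blocks that fit once pumps, avionics, and
    thermal-loop parasitics are carved out of the bus budget."""
    return int((BUS_BUDGET_W - parasitics_w) // block_power_w())

if __name__ == "__main__":
    print(block_power_w())          # 7500 W per block
    print(blocks_that_fit(5_000))   # 6 blocks -> 60 GPUs inside budget
```

With a 5 kW parasitic carve-out, six 7.5 kW blocks land at 45 kW, consistent with the roughly 6-block, 60-GPU baseline.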
Thermal Math
- The core insight is radiation physics, not branding. Heat rejection follows q = epsilon * sigma * A * T^4, so radiator temperature dominates area requirements. Raising effective emission temperature changes the problem by orders of magnitude, not percentages.
- At 50 kW and emissivity 0.9, a simplified one-sided graybody radiator estimate is about 0.47 m^2 at 1200 K, versus about 121 m^2 at 300 K. That is a 256x area swing from temperature scaling alone, since (1200/300)^4 = 256.
- That does not mean the design is solved; it means the repo is asking the correct first-order question: what thermal regime makes the architecture even worth packaging?
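The radiator sizing above follows directly from inverting the Stefan–Boltzmann relation, A = q / (epsilon * sigma * T^4). A minimal check, assuming an idealized one-sided radiator with no view-factor or sink-temperature corrections:

```python
# Radiator area from the Stefan-Boltzmann law: A = q / (eps * sigma * T^4).
# Idealized one-sided radiator; real designs degrade from view factors,
# sink temperature, and fin efficiency.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(power_w: float, emissivity: float, temp_k: float) -> float:
    """Idealized radiator area needed to reject power_w at temp_k."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

if __name__ == "__main__":
    hot = radiator_area_m2(50_000, 0.9, 1200)   # ~0.47 m^2
    cold = radiator_area_m2(50_000, 0.9, 300)   # ~121 m^2
    print(f"{hot:.2f} m^2 at 1200 K, {cold:.1f} m^2 at 300 K")
    print(f"area ratio: {cold / hot:.0f}x")     # (1200/300)^4 = 256x
```

The T^4 dependence is the whole story: quadrupling emission temperature cuts required area by a factor of 256, which is why the study treats thermal regime as the first-order design question.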
Architecture Discipline
- The docs separate compute baseline, launch packaging, and architecture assumptions rather than blending them into one giant speculative memo. That structure is important because it keeps the reader aware of what is locked, what is placeholder, and what is still an open item.
- Open items such as radiation mitigation strategy, crosslink power, exact GPU count after pump and heat-exchanger (HX) parasitics, and launcher packaging limits are explicitly called out instead of buried. That is senior-level technical hygiene.
- Packaging philosophy is also framed correctly: rigid compute core, deployable radiators and arrays, single-fault containable module, fairing compatibility assumed but not falsely guaranteed.
Why This Turns Heads
- The value is not that the project claims a final orbital product; the value is that it demonstrates the ability to build a serious trade study with explicit boundaries, quantitative anchors, and traceable open questions.
- That is exactly the skill companies want in senior systems work: converting a seductive idea into a defensible architecture conversation before money and ego get too far ahead of physics.
Result
- This case study reads as systems engineering rather than futurist fan fiction: a bounded electrical budget, modular GPU packing, radiator-temperature sensitivity, and a packaging model that remains honest about what is not yet locked.
- It shows the difference between imagination and engineering judgment.