Key Moments

Inside The Startup Launching AI Data Centers Into Space

Y Combinator
Science & Technology · 5 min read · 13 min video
Nov 13, 2025 · 309,464 views
TL;DR

StarCloud is launching AI data centers into space, powered by sunlight and cooled by the vacuum. While facing skepticism, their recent launch of an NVIDIA H100 GPU marks a significant first step toward potentially cheaper, more efficient computing.

Key Insights

1. StarCloud's first satellite carries an NVIDIA H100 GPU, marking the first time such a powerful, data center-grade GPU has operated in space; it is 100 times more powerful than any previous space computer.

2. The company aims to eventually build 40-megawatt orbital data centers, each weighing around 100 tons and fitting into a single Starship bay.

3. Orbital data centers would require zero fresh water for cooling, unlike terrestrial data centers, which can significantly deplete local water sources.

4. StarCloud took its first satellite from founding to orbit in just 15 months, significantly faster than the typical four years for previous space startups.

5. The planned second launch, in October of next year, will be at least 10 times more powerful than the first, featuring NVIDIA's Blackwell architecture and multiple GPUs.

6. Major tech companies such as Google, SpaceX, and Amazon are also exploring the concept of data centers in orbit.

Launching the first data center-grade GPU into space

StarCloud has achieved a significant milestone by launching a satellite equipped with an NVIDIA H100 GPU into orbit, an event touted as potentially marking the birth of a new industry: data centers in space. The deployed GPU is described as 100 times more powerful than any computer previously operated in the vacuum of space, and the mission is the first attempt to bring data center-grade, terrestrial GPUs into orbit. The launch is a crucial step in proving that state-of-the-art computing hardware can function effectively in space, validating StarCloud's thermal management and radiation shielding techniques. The company plans to run various demonstration workloads, including Google's Gemini, and to perform model fine-tuning and training in space.

The compelling case for orbital data centers

The primary motivation behind StarCloud's ambitious vision is the mounting constraints faced by terrestrial data centers, particularly concerning energy and water resources. Data centers consume vast amounts of energy and require extensive cooling, often through the evaporation of large quantities of fresh water, which is leading to severe water depletion issues in some regions. StarCloud's orbital data centers offer a solution by drawing uninterrupted sunlight for power and using deep space as a heat sink through infrared radiation. This approach promises to eliminate fresh water usage entirely, drastically reduce carbon emissions compared to Earth-based facilities, and free data centers from limitations of land, grid power, and cooling infrastructure. The goal is to eventually build massive, 40-megawatt orbital data centers, each weighing approximately 100 tons, potentially rivaling the largest terrestrial data centers in efficiency and cost.
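To get a feel for the scale involved, here is a rough sketch of the solar array area a 40-megawatt facility implies, derived from the solar constant. The panel efficiency and the assumption of continuous, normal-incidence sunlight are illustrative; only the 40 MW target comes from the episode.

```python
# Illustrative solar array sizing for a 40 MW orbital data center.
# Panel efficiency (30%) is an assumed value, not a StarCloud figure.

SOLAR_CONSTANT = 1361.0   # W/m^2, solar irradiance above the atmosphere

def array_area_m2(load_w: float, panel_efficiency: float = 0.3) -> float:
    """Array area assuming continuous, normal-incidence sunlight."""
    return load_w / (SOLAR_CONSTANT * panel_efficiency)

# A 40 MW load at 30% panel efficiency needs on the order of 0.1 km^2:
print(f"~{array_area_m2(40e6):,.0f} m^2 of panels at 30% efficiency")
```

The result is on the order of 100,000 m², which is why low-mass, low-cost deployable structures dominate the engineering problem.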

Rapid development and a complementary founding team

StarCloud's journey from concept to orbital deployment has been remarkably swift. Founded about a year and a half before the interview, the company designed, built, and tested its first satellite in just 15 months, significantly faster than the roughly four years typical of previous space startups. This speed is attributed to the founders' highly complementary backgrounds and in-house capabilities. Co-founder Philip Johnston has a background in applied math and theoretical physics along with early software engineering experience. Co-founder Addi brings two decades of data center experience from Microsoft and time as a principal software engineer at SpaceX, where he focused on software and on making chips function in high-radiation environments. Co-founder and CTO Ezra, who holds a PhD in engineering, spent a decade designing satellites, including work on NASA's Lunar Pathfinder mission, and leads development of the deployable structures, solar panels, and radiators that are considered StarCloud's core intellectual property. Together, this expertise covers commercial compute payloads, satellite structures, and the critical thermal management systems.

Addressing skepticism through engineering solutions

The concept of orbital data centers has not been without its critics, with online debates often focusing on the practicalities of heat dissipation in space. A common criticism suggests that dissipating the immense heat generated by powerful GPUs would require an impractically large surface area. StarCloud's technical response is centered on their core intellectual property: the development of large, low-cost, and low-mass deployable radiators. Their engineering team is dedicated to building these extensive surface areas specifically for radiating heat into the vacuum of space, directly countering the skepticism about the practicality of thermal management in orbit. This focus on innovative radiator design is key to making their ambitious data center vision feasible.
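As an illustrative back-of-the-envelope check of the heat-dissipation criticism (not StarCloud's own figures), the radiator area implied by a 40-megawatt thermal load follows from the Stefan-Boltzmann law. The emissivity, radiator temperature, and two-sided geometry below are assumed values:

```python
# Illustrative radiator sizing via the Stefan-Boltzmann law.
# All parameters are assumptions for a rough estimate, not StarCloud figures.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float = 300.0,
                     emissivity: float = 0.9, sides: int = 2) -> float:
    """Area needed to radiate `power_w` watts to deep space.

    Ignores absorbed sunlight and Earth albedo; assumes the radiator
    emits from `sides` faces at a uniform temperature `temp_k`.
    """
    flux = emissivity * SIGMA * temp_k ** 4   # W per m^2 per face
    return power_w / (flux * sides)

# A 40 MW orbital data center at a 300 K radiator temperature:
print(f"~{radiator_area_m2(40e6):,.0f} m^2 of double-sided radiator")
```

The answer is tens of thousands of square meters, so the critics' point about area is real; the substance of StarCloud's deployable-radiator IP is delivering that area at low mass and low cost.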

Pivoting from space-based solar to orbital data centers

The genesis of StarCloud's idea involved exploring space-based solar power, a concept involving large solar panels in orbit to beam energy back to Earth. However, financial analysis revealed that this model would only become viable at launch costs of around $50 per kilogram, a target far from current capabilities. A significant drawback identified with space-based solar was the substantial energy loss (95%) during transmission from space to Earth. This led the team to pivot their focus. Instead of transmitting power, they considered sending the data centers themselves into space. This revised concept found a more attainable breakeven point at launch costs of approximately $500 per kilogram, a figure much closer to current economic realities, forming the basis of their white paper and subsequent company.

The future roadmap: increased power and connectivity

Looking ahead, StarCloud has ambitious plans for subsequent launches that will significantly escalate the capabilities of their orbital data centers. Their second satellite, scheduled to launch in October of the following year, is projected to be at least ten times more powerful than the initial demonstrator. This next-generation satellite will feature NVIDIA's Blackwell architecture and incorporate a greater number of GPUs. A key advancement will be the inclusion of high-bandwidth optical terminals, ensuring 24/7 high-bandwidth connectivity with very low latency. While the grand vision of massive 5-gigawatt data centers in orbit may still be a decade or more away, these planned launches represent critical steps towards that future, with major tech players also beginning to explore similar orbital computing concepts.

StarCloud's Space Data Center Strategy

Practical takeaways from this episode

Do This

Leverage constant solar energy for AI compute in orbit.
Utilize deep space for heat radiation, eliminating fresh water use.
Scale data centers indefinitely by removing terrestrial limitations.
Focus on difficult, high-impact projects over easy ones.
Build deployable radiators with large surface areas for heat dissipation.

Avoid This

Don't rely on terrestrial energy grids or water supplies.
Don't dismiss ambitious, seemingly 'wacky' visions.
Don't underestimate the technical risks of operating in space.
Don't lose energy transmitting power from space to Earth (as with space-based solar).

Launch Cost Break-even Points for Space-Based Business Models

Data extracted from this episode

| Business Model | Break-even Launch Cost per Kilogram | Current Feasibility |
| --- | --- | --- |
| Space-Based Solar | $50 | Long way away |
| Orbital Data Centers | $500 | Much closer |

Common Questions

Why is StarCloud building data centers in space?

StarCloud is building data centers in space to provide GPU compute power to other satellites and eventually compete with terrestrial data centers on energy costs. The goal is to leverage constant solar energy and deep space cooling for more efficient and less resource-intensive AI computing.
