
Starcloud Achieves First-Ever LLM Training and Inference from Space, Pioneering Orbital AI Compute

US-based startup Starcloud has made history by successfully training and running large language models (LLMs) from space, marking a major breakthrough in orbital compute technology. Its Starcloud-1 satellite, equipped with an NVIDIA H100 GPU, trained Andrej Karpathy’s nanoGPT on the works of Shakespeare and ran inference on Google DeepMind’s Gemma model. Founder Philip Johnston described it as “the first LLM in space,” highlighting the potential to leverage the “near limitless energy of our Sun” for AI workloads.
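For readers curious what the inference side of such a workload looks like, here is a minimal sketch of running a Gemma checkpoint with Hugging Face Transformers on an NVIDIA GPU. The model ID, prompt, and settings are illustrative assumptions, not details of Starcloud’s flight software, and the gated Gemma weights require accepting Google’s licence on Hugging Face first.

```python
# Minimal Gemma inference sketch (illustrative only; not Starcloud's actual software).
# Assumes: transformers + torch installed, a CUDA GPU, and access to the gated
# "google/gemma-2b-it" checkpoint on Hugging Face.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"  # assumed variant; the article does not say which Gemma model flew
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16).to("cuda")

prompt = "Describe Earth as seen from low Earth orbit."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```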

Backed by NVIDIA and an alumnus of both Y Combinator and Google for Startups, Starcloud aims to reduce the environmental footprint of terrestrial data centres by shifting compute-intensive AI operations to orbit. The company plans a 5-gigawatt orbital data centre with solar arrays spanning roughly four kilometres, which it pitches as a more cost-effective alternative to Earth-based infrastructure. CTO Adi Oltean emphasized the technical challenges the team had to overcome to operate the H100 GPU in space.
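As a rough sanity check on those figures, a back-of-envelope calculation using the mean solar irradiance in Earth orbit (about 1.36 kW/m²) and an assumed ~30% panel efficiency (an assumption for illustration, not a Starcloud number) shows why a 5-gigawatt array ends up kilometres across:

```python
# Back-of-envelope check: collecting area needed for 5 GW of solar power in orbit.
# The panel efficiency is an assumed figure; the irradiance is the mean solar constant.
SOLAR_IRRADIANCE_W_PER_M2 = 1361   # mean solar constant above the atmosphere
PANEL_EFFICIENCY = 0.30            # assumed high-end cell efficiency
TARGET_POWER_W = 5e9               # 5 gigawatts

area_m2 = TARGET_POWER_W / (SOLAR_IRRADIANCE_W_PER_M2 * PANEL_EFFICIENCY)
side_km = area_m2 ** 0.5 / 1000    # side length of an equivalent square array

print(f"Required area: {area_m2 / 1e6:.1f} km^2")  # ~12 km^2
print(f"Square side:   {side_km:.1f} km")          # ~3.5 km, in line with the ~4 km figure
```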

The race for orbital AI compute is intensifying. Google’s Project Suncatcher and SpaceX’s Starlink V3 satellites both aim to deliver high-capacity AI compute in orbit. Elon Musk envisions Starship deploying up to 500 GW of solar-powered AI satellites per year, a scale he suggests could, within two years, devote more power to AI processing than the entire US economy consumes in electricity.