An industry analyst revealed that China has made a major AI breakthrough: the country has successfully trained a single generative AI (GAI) model across multiple data centers. This is no small feat, considering the complexity of integrating different GPUs in a single location, let alone coordinating servers spread across various regions.
Patrick Moorhead, Chief Analyst at Moor Insights & Strategy, announced on X (formerly Twitter) that China is the first to accomplish this. He shared that he learned about this advancement during an unrelated NDA meeting.
This multi-location GAI training technique is critical for China’s AI ambitions. American sanctions have limited its access to the most advanced chips, slowing research and development efforts. To stay competitive, Nvidia created the H20 AI chips, designed to meet Washington’s restrictions while still serving the Chinese market. However, rumors suggest these chips may also face future bans, increasing the uncertainty for Chinese tech companies navigating the evolving political landscape.
Overcoming GPU Shortages with Multi-Brand Clusters
Chinese researchers have been developing a method to combine GPUs from different brands into a unified training cluster. This strategy allows institutions to maximize their limited supply of high-end, sanctioned chips, like the Nvidia A100, by pairing them with more accessible GPUs, such as Huawei’s Ascend 910B and Nvidia’s H20. While this approach has historically led to efficiency drops, it seems China has made significant progress in overcoming those challenges.
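One core challenge in such mixed clusters is that the devices run at very different speeds, so naive equal splits leave fast chips idle while slow ones straggle. A common mitigation is throughput-proportional work assignment. The sketch below illustrates that idea in plain Python; the device names and throughput figures are hypothetical placeholders, not details reported about China's actual system.

```python
# Illustrative sketch: heterogeneity-aware batch sharding for a mixed GPU cluster.
# Device names and relative-throughput numbers below are hypothetical examples,
# not figures from the deployment described in the article.

def shard_batch(batch_size, throughputs):
    """Split batch_size across devices in proportion to each device's
    relative throughput, so faster chips receive more samples per step.
    Rounding remainders go to the devices with the largest fractional share."""
    total = sum(throughputs.values())
    # ideal fractional share per device
    shares = {d: batch_size * t / total for d, t in throughputs.items()}
    shards = {d: int(s) for d, s in shares.items()}
    remainder = batch_size - sum(shards.values())
    # hand leftover samples to devices closest to deserving one more
    for d in sorted(shares, key=lambda d: shares[d] - shards[d], reverse=True):
        if remainder == 0:
            break
        shards[d] += 1
        remainder -= 1
    return shards

# Hypothetical cluster mixing a sanctioned high-end part with more
# accessible chips (relative throughput units are made up for illustration):
cluster = {"nvidia_a100": 100.0, "huawei_ascend_910b": 55.0, "nvidia_h20": 40.0}
print(shard_batch(1024, cluster))
```

In this toy setup the A100 receives roughly half of each global batch, which is the intuition behind pairing scarce high-end chips with slower, more available ones without letting the slow devices bottleneck every training step.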
The recent news of a single GAI model trained across multiple data centers highlights this achievement. Although details about the model remain scarce, it demonstrates China's commitment to pushing forward its AI ambitions. As Huawei has stated, China will continue to find innovative ways to advance its AI development, even under the constraints of American sanctions. Necessity continues to drive these inventive solutions.