Huawei OBS
Huawei Cloud's Object Storage Service — S3-compatible, tightly co-engineered with Huawei's domestic AI accelerator (Ascend 910B/910C) and the MindSpore framework. The de facto storage tier for foundation-model training that targets domestic-silicon clusters; notably the storage substrate for Zhipu AI's GLM-5 (744B parameters trained on 100,000 Ascend 910B chips).
Summary
The third pillar of the China S3 trio. Where Aliyun OSS is the broad-market default and Tencent COS is the Tencent-ecosystem default, Huawei OBS is the **vertical-stack** choice — chosen specifically when the training run targets Huawei silicon, integrating with MindSpore and the company's data-fabric stack.
- Huawei OBS is the storage half of a vertically-integrated NVIDIA-substitute stack; using it without Ascend silicon and MindSpore loses most of the integration value.
- Like the other Chinese clouds, the architectural assumption is in-PRC regions for in-PRC users; international footprint is limited (Singapore, Bangkok, Johannesburg, Mexico City, São Paulo).
- Huawei is on US Entity List restrictions — Western cloud-portability planning around Huawei OBS must account for sanctions exposure separate from the data-localization story.
- 11 nines durability vs Aliyun OSS / Tencent COS at 12 nines. Marginal in practice but worth noting for compliance-team comparison sheets.
- The MLPS Level 3+ certification advantage is real for Chinese government / SOE / regulated industry tenders — Huawei is structurally preferred there. But that advantage doesn't transfer to commercial / consumer-facing workloads, where Tencent or Alibaba ecosystem reach matters more.
- WORM retention is available but is configured per bucket — make the compliance-vs-cost tradeoff at provisioning time, not retroactively.
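The durability gap in the bullets above can be made concrete. Treating the advertised figure as an annual per-object durability target (illustrative arithmetic only, not a provider guarantee), the 11-vs-12-nines difference is an order of magnitude in expected annual object loss:

```python
# Expected objects lost per year at a given annual durability level.
# Illustrative arithmetic only; published durability is a design target,
# not a contractual per-year loss rate.

def expected_annual_losses(durability: float, num_objects: int) -> float:
    """Expected number of objects lost per year given annual durability."""
    return (1.0 - durability) * num_objects

corpus = 10_000_000_000  # 10B objects, e.g. a sharded training corpus

obs_11_nines = expected_annual_losses(0.99999999999, corpus)   # Huawei OBS
oss_12_nines = expected_annual_losses(0.999999999999, corpus)  # Aliyun OSS / Tencent COS

print(f"11 nines: ~{obs_11_nines:.2f} objects/year")
print(f"12 nines: ~{oss_12_nines:.2f} objects/year")
```

At 10B objects the gap is roughly 0.1 vs 0.01 expected losses per year — which is why the note above calls it marginal in practice.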
Pricing posture (April 2026): Standard ~$0.017–0.018/GB-mo → IA ~$0.009 → Archive ~$0.0045 → Deep Archive ~$0.002. Outbound traffic ~$0.08–0.12/GB tiered. Free tier of 5 GB Standard.
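A quick way to compare those tiers is a per-month cost sketch. The rates below are the approximate figures from the line above (midpoints taken where a range is given); the flat egress rate is a simplifying assumption, and real OBS billing adds request, retrieval, and tiered-egress charges:

```python
# Rough monthly cost model for the OBS storage classes listed above.
# Prices are the approximate USD/GB-month figures from this note;
# request and retrieval fees are deliberately ignored.

STORAGE_USD_PER_GB_MONTH = {
    "standard": 0.0175,      # midpoint of ~$0.017-0.018
    "ia": 0.009,
    "archive": 0.0045,
    "deep_archive": 0.002,
}
EGRESS_USD_PER_GB = 0.10     # midpoint of the ~$0.08-0.12 tiered range
FREE_TIER_GB = 5             # Standard-class free tier

def monthly_cost(gb: float, storage_class: str, egress_gb: float = 0.0) -> float:
    billable = gb - FREE_TIER_GB if storage_class == "standard" else gb
    billable = max(billable, 0.0)
    return billable * STORAGE_USD_PER_GB_MONTH[storage_class] + egress_gb * EGRESS_USD_PER_GB

# 500 TB training corpus: kept hot vs archived after the run
print(f"Standard:     ${monthly_cost(500_000, 'standard'):,.2f}/mo")
print(f"Deep Archive: ${monthly_cost(500_000, 'deep_archive'):,.2f}/mo")
```

For a 500 TB corpus the Standard-to-Deep-Archive delta is roughly $8,750 vs $1,000 per month — the scale of the tradeoff the WORM/provisioning bullet above is pointing at.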
Why it's the China-AI choice when sovereignty is the constraint: ModelArts (AI platform), HiLens (edge AI), and IoT Platform integrations ship natively with OBS. Zhipu AI's GLM-5 (744B parameters, 100,000 Ascend 910B chips) trained with OBS as the storage tier — the operational proof point for the Ascend + MindSpore + OBS vertical stack as an NVIDIA substitute. A 48.8 TB single-object cap via parallel multipart upload keeps it competitive with S3 at training-corpus scale.
Relations
- implements: S3 API
- enables: East Data West Computing
- enables: Training Data Streaming from Object Storage (Ascend + MindSpore + OBS as a stack)
- solves: China Data Localization
- scoped_to: Sovereign Storage
Definition
Huawei Cloud's Object Storage Service — an S3-compatible object store tightly co-engineered with Huawei's domestic AI accelerator stack (**Ascend 910B/910C**) and the **MindSpore** training framework. The de facto storage tier for foundation-model training that uses Huawei silicon as a substitute for restricted NVIDIA hardware, including **Zhipu AI's GLM-5** (744B parameters trained on 100,000 Ascend 910B chips). Provides standard S3 API compatibility plus deep integration with Huawei's data-fabric stack (DLI, MRS, GaussDB). **99.999999999% (11 nines) per-object durability**.
US semiconductor export controls cut Chinese labs off from H100/H200/Blackwell GPUs. Huawei's response — Ascend silicon + MindSpore + OBS — is the vertically integrated Chinese alternative to the NVIDIA + S3 + PyTorch stack. OBS is the storage half of that bet, and the strategic destination for any training run that cannot use US-headquartered cloud or hardware.
Training storage for foundation models running on Ascend silicon, S3-compatible object storage inside Huawei Cloud regions, sovereign-cloud deployments for regulated industries (Chinese banks, telcos, government), GLM-5 / Pangu / DeepSeek-class training corpora that must stay on domestic hardware.
Connections (9): outbound 8 (incl. scoped_to 3, implements 1), inbound 1 (depends_on 1)
Resources (3)
- Huawei Cloud Object Storage Service product page covering storage classes, regional deployment, and integration with Huawei's data fabric (DLI, MRS, ModelArts).
- OBS technical documentation and API reference — establishes the S3-compatibility scope and the integration path with Ascend/MindSpore for foundation-model training.
- Documentation of GLM-5's training architecture using 100,000 Huawei Ascend 910B chips with OBS as the storage tier — the canonical case study of the OBS + Ascend + MindSpore vertical stack at frontier-model scale.