About LLMS3.com
What This Is
LLMS3.com is a structured index of the S3 and object storage ecosystem, built for engineers running local-first AI infrastructure. It maps 219 concepts — technologies, standards, architectures, pain points, and the ML model classes that operate on S3-stored data — and the 948 relationships between them.
The index tracks which technologies solve which operational problems, which emerging tools serve as alternatives to established ones, and how the architecture patterns connect storage to compute to retrieval. If you're building data pipelines for LLM workloads on self-hosted or hybrid S3 infrastructure, this is the reference map.
Why Relationships Matter
A list of technologies is a catalog. A graph of how they relate is an index. Every node in LLMS3 carries typed, directed edges:
- `solves` — which technologies and architectures address which pain points
- `alternative_to` — where newer tools are emerging alongside established ones
- `competes_with` — direct competitive pressure between projects
- `enables`, `depends_on`, `implements` — the dependency and capability graph
These edges are what make the index navigable. When a new storage engine appears, the interesting question isn't what it does but what it solves, what it replaces, and what it depends on.
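The typed, directed edges described above can be sketched as a small graph structure. This is an illustrative model only, not the site's actual schema; the sample edges and the `outgoing` helper are invented for the example.

```typescript
// Illustrative sketch of a typed, directed edge graph (not the LLMS3 schema).
type EdgeType =
  | "solves"
  | "alternative_to"
  | "competes_with"
  | "enables"
  | "depends_on"
  | "implements";

interface Edge {
  from: string;
  type: EdgeType;
  to: string;
}

// Hypothetical sample edges for illustration.
const edges: Edge[] = [
  { from: "SeaweedFS", type: "alternative_to", to: "MinIO" },
  { from: "Apache Iceberg", type: "solves", to: "small-files problem" },
  { from: "Apache Iceberg", type: "depends_on", to: "Parquet" },
];

// Answer the "interesting question" for a node: what does it solve,
// replace, or depend on?
function outgoing(node: string, type: EdgeType): string[] {
  return edges
    .filter((e) => e.from === node && e.type === type)
    .map((e) => e.to);
}

console.log(outgoing("Apache Iceberg", "solves"));
```

Because every edge carries a type, a query like `outgoing(node, "solves")` answers a specific engineering question rather than just listing neighbors.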
What's In the Index
Nodes span 7 categories: Topics (navigational entry points), Technologies (concrete tools and platforms), Standards (specifications and protocols), Architectures (repeatable design patterns), Pain Points (known operational problems), Model Classes (categories of ML models by their role in S3 systems), and LLM Capabilities (specific functions models perform on S3-stored data).
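The seven categories map naturally onto a discriminated union. The identifiers and field names below are assumptions for illustration, not the index's actual data model.

```typescript
// Assumed category identifiers mirroring the seven categories above.
type NodeCategory =
  | "topic"
  | "technology"
  | "standard"
  | "architecture"
  | "pain_point"
  | "model_class"
  | "llm_capability";

// Hypothetical node shape.
interface IndexNode {
  id: string;
  name: string;
  category: NodeCategory;
  description: string;
}

const example: IndexNode = {
  id: "minio",
  name: "MinIO",
  category: "technology",
  description: "Self-hosted S3-compatible object store.",
};
```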
The 36 guides walk through real engineering decisions — choosing a table format, evaluating self-hosted S3 alternatives, understanding vector indexing tradeoffs. The 10 thematic clusters organize the index by problem space rather than by type.
Scope
Every node passes a simple test: does this concept exist because of S3 and object storage, or does it merely touch it? The index covers the ecosystem from the storage layer through the compute and retrieval layers that sit on top of it, with particular attention to the architectures and tooling relevant to self-hosted, hybrid, and edge deployments handling AI workloads.
For LLMs
The index is published in machine-readable form via the llms.txt standard:
- `/llms.txt` — concise index with one-line descriptions
- `/llms-full.txt` — full content, relationship graph, and guide text
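The llms.txt convention is plain markdown: H2 sections containing `- [Name](url): description` link entries. A minimal reader might look like the sketch below; the sample content and parser are illustrative, not the actual file.

```typescript
// Minimal parser for llms.txt-style link lists
// (H2 sections with "- [Name](url): description" entries).
interface LlmsEntry {
  section: string;
  name: string;
  url: string;
  description: string;
}

function parseLlmsTxt(text: string): LlmsEntry[] {
  const entries: LlmsEntry[] = [];
  let section = "";
  for (const line of text.split("\n")) {
    const h2 = line.match(/^## (.+)/);
    if (h2) {
      section = h2[1];
      continue;
    }
    const link = line.match(/^- \[([^\]]+)\]\(([^)]+)\):?\s*(.*)/);
    if (link) {
      entries.push({ section, name: link[1], url: link[2], description: link[3] });
    }
  }
  return entries;
}

// Invented sample content for illustration.
const sample = `# LLMS3
> Structured index of the S3 ecosystem.

## Technologies
- [MinIO](https://llms3.com/minio): self-hosted S3-compatible object store
`;

console.log(parseLlmsTxt(sample));
```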
How It's Built
Content lives in structured markdown (`INDEX.md`, `SUMMARIES.md`, `RESOURCES.md`, `GUIDES.md`). The site is built with Astro, which parses these files at build time into typed data. All pages, stats, graph data, and machine-readable exports derive from the same source files.
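The single-source idea can be sketched as: parse the structured markdown once, then derive every output from the same parsed data. The entry format, field names, and derivations below are assumptions, not the site's actual build code.

```typescript
// Sketch of "one source, many outputs": parse once, derive everything.
interface Entry {
  name: string;
  description: string;
}

// Assume one "- Name: description" entry per line of the source file.
function parseIndex(md: string): Entry[] {
  return md.split("\n").flatMap((line) => {
    const m = line.match(/^- ([^:]+): (.+)/);
    return m ? [{ name: m[1], description: m[2] }] : [];
  });
}

// Invented sample source.
const source =
  "- MinIO: self-hosted S3-compatible object store\n" +
  "- Apache Iceberg: open table format";

const entries = parseIndex(source);

// Stats and the machine-readable export both derive from the same data,
// so they can never disagree with each other.
const stats = { nodeCount: entries.length };
const llmsTxt = entries.map((e) => `- ${e.name}: ${e.description}`).join("\n");
```

Deriving every artifact from one parsed representation is what keeps the page content, the stats, and the `/llms.txt` export consistent with each other.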