LLM-Ready Training Dataset for Apple's Foundation Models (iOS 26)

I've been working with iOS 26 and Apple's new Foundation Models framework, and I realized every major LLM (GPT-4, Claude, Gemini) currently knows nothing about it: their training cutoffs predate its release.

So I created a structured training dataset specifically for LLM ingestion. It covers:
- Core API usage (SystemLanguageModel, LanguageModelSession, Tool protocol), sketched in the first example below
- Advanced implementation (@Generable/@Guide macros, constrained decoding, tool chaining), sketched in the second example below
- Strategic features (adapter training, multi-step workflows, Apple’s AI vision)
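For a sense of the core flow, here's a minimal sketch based on the iOS 26 beta API; the function name and prompt text are illustrative, and exact signatures may shift between beta seeds:

```swift
import FoundationModels

// Minimal core-API sketch (beta API; details may change between seeds).
func askOnDeviceModel() async throws {
    // The on-device model can be unavailable (unsupported hardware,
    // Apple Intelligence disabled, or model assets still downloading).
    guard case .available = SystemLanguageModel.default.availability else {
        print("On-device model unavailable")
        return
    }

    // A session keeps multi-turn context; instructions steer its behavior.
    let session = LanguageModelSession(
        instructions: "You are a concise travel assistant."
    )

    // Prompt the model and await a plain-text response.
    let response = try await session.respond(to: "Suggest three day trips from Kyoto.")
    print(response.content)
}
```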

All written in Markdown, with production-tested Swift code from actual iOS 26 beta usage.
Tokenized structure, progressive complexity, no fluff: just what a model needs to learn this framework from scratch.
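As a taste of what the Swift samples cover, here's a hedged sketch of guided generation and tool calling; TripPlan, WeatherTool, and the canned weather string are made up for illustration, and the Tool/ToolOutput shapes reflect the early betas:

```swift
import FoundationModels

// Guided generation: @Generable lets the model fill a typed Swift value
// via constrained decoding, so output always parses into these fields.
@Generable
struct TripPlan {
    @Guide(description: "A short, catchy title for the trip")
    var title: String
    @Guide(description: "One suggested activity per day", .count(3))
    var activities: [String]
}

// Tool calling: conform to Tool and the model can invoke your code
// mid-response; the framework generates Arguments from your @Generable type.
struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Returns the current weather for a city"

    @Generable
    struct Arguments {
        @Guide(description: "The city to look up")
        var city: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // A real implementation would query WeatherKit or a web API.
        ToolOutput("Sunny and 24°C in \(arguments.city)")
    }
}

func planTrip() async throws {
    // Tools are registered at session creation; the model decides
    // when (and whether) to call them while responding.
    let session = LanguageModelSession(tools: [WeatherTool()])
    let response = try await session.respond(
        to: "Plan a weekend in Lisbon",
        generating: TripPlan.self
    )
    print(response.content.title, response.content.activities)
}
```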

Details + preview here: https://rileyhealth.gumroad.com/l/bwoqe

Happy to answer questions or provide a sample if helpful.


Discussion

rileygersh
Updated URL: https://llmbridge.gumroad.com/l/bwoqe
jameshart
Wait, is technical publishing back? Do we need to be commissioning authors to write programming books with a view to feeding them into LLMs as training data?
rileygersh
Exactly! We're in a transition period where technical knowledge exists but AI can't access it. This bridges that gap until models retrain. The methodology works for any new framework - Foundation Models just happens to be the current example.

I used AI research tools to systematically extract and organize Foundation Models knowledge from over 100 sites, material that wasn't in any model's training data. The value is in the methodology and validation, not just the raw output.

nickthegreek
404 at 5:45pm ET.
rileygersh
Updated URL - https://llmbridge.gumroad.com/l/bwoqe