Guides
RAG Context Injection
How SLM augments answers with the latest Solana and Anchor documentation.
How it works
For knowledge-type questions (as opposed to code generation), SLM queries a Qdrant vector index built from the latest Solana docs, Anchor source, SPL program docs, and community tech docs (Firedancer, ZK Compression, Token-2022). Retrieved snippets with a relevance score above 0.80 are injected into the system prompt as context.
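The retrieval-and-inject step described above can be sketched as follows. This is a minimal illustration, not SLM's actual implementation: the function names (`retrieve`, `build_system_prompt`) and the stubbed search results are hypothetical, and a real pipeline would embed the query and call the Qdrant search API instead of returning fixed data.

```python
RELEVANCE_THRESHOLD = 0.80  # from the guide: only hits scoring above 0.80 are injected

def retrieve(query: str) -> list[tuple[float, str]]:
    """Stand-in for a Qdrant vector search returning (score, snippet) pairs.

    In the real pipeline this would embed `query` and search the Qdrant
    index of Solana/Anchor/SPL docs. Fixed results here for illustration.
    """
    return [
        (0.91, "Anchor account constraints are declared with #[account(...)]."),
        (0.74, "Unrelated snippet about validator hardware requirements."),
    ]

def build_system_prompt(base_prompt: str, query: str) -> str:
    """Append sufficiently relevant snippets to the system prompt."""
    hits = [text for score, text in retrieve(query)
            if score > RELEVANCE_THRESHOLD]
    if not hits:
        return base_prompt  # nothing relevant enough; prompt is unchanged
    context = "\n\n".join(hits)
    return f"{base_prompt}\n\nRelevant documentation:\n{context}"
```

With the stubbed results above, only the 0.91-scored snippet survives the threshold, so the lower-scored snippet never reaches the prompt.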
Code-gen bypass
RAG is skipped for requests matching `write|create|build|implement|show|code|program|function|instruction`. This prevents the model from feeling constrained by the retrieved snippets when composing full programs.
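The bypass check can be sketched like this. The keyword list is taken verbatim from the guide; compiling it case-insensitively with word boundaries is an assumption about the exact matching rule, not a confirmed detail of SLM's code.

```python
import re

# Keywords quoted in the guide; case-insensitive word-boundary matching
# is an assumption about how the check is implemented.
CODE_GEN_PATTERN = re.compile(
    r"\b(write|create|build|implement|show|code|program|function|instruction)\b",
    re.IGNORECASE,
)

def should_skip_rag(prompt: str) -> bool:
    """Return True when the request looks like code generation."""
    return bool(CODE_GEN_PATTERN.search(prompt))
```

A request like "Write an Anchor escrow program" would skip retrieval, while a knowledge question like "How does rent exemption work?" would still go through RAG.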
Sources
- Solana docs (core, RPC, advanced)
- Anchor docs + changelog + source
- Solana Cookbook
- SPL program READMEs
- Metaplex (Token Metadata, Core, Bubblegum)
- SIMDs, Firedancer, ZK Compression, Jito, Marinade
- Solana Whitepaper