# Blazen

> Rust-first LLM orchestration framework with Python, Node, and WASM bindings. Composable workflows, typed events, durable pause/resume, distributed sub-workflows, and a unified provider layer.

Companion files for LLMs:

- [Full documentation](https://blazen.dev/llms-full.txt) — every page concatenated
- [Python slice](https://blazen.dev/llms-python.txt) — Python-specific docs plus cross-cutting guides
- [Node slice](https://blazen.dev/llms-node.txt) — Node-specific docs plus cross-cutting guides
- [Rust slice](https://blazen.dev/llms-rust.txt) — Rust-specific docs plus cross-cutting guides
- [WASM slice](https://blazen.dev/llms-wasm.txt) — WASM-specific docs plus cross-cutting guides

## Getting Started

- [Introduction](https://blazen.dev/docs/getting-started/introduction): What is Blazen and why use it for AI workflows
- [Installation](https://blazen.dev/docs/getting-started/installation): Install Blazen for Rust, Python, Node.js, or WebAssembly
- [Core Concepts](https://blazen.dev/docs/getting-started/concepts): Events, Steps, Workflows, Context, and Streaming

## Guides

- [Quickstart (node)](https://blazen.dev/docs/guides/node/quickstart): Build your first Blazen workflow in Node.js
- [Quickstart (python)](https://blazen.dev/docs/guides/python/quickstart): Build your first Blazen workflow in Python
- [Quickstart (rust)](https://blazen.dev/docs/guides/rust/quickstart): Build your first Blazen workflow in Rust
- [WASM Quickstart (wasm)](https://blazen.dev/docs/guides/wasm/quickstart): Use Blazen in the browser or Node.js via WebAssembly
- [Events (node)](https://blazen.dev/docs/guides/node/events): Create and route custom events in Node.js
- [Events (python)](https://blazen.dev/docs/guides/python/events): Create and route custom events in Python
- [Events (rust)](https://blazen.dev/docs/guides/rust/events): Define and use custom events in Rust
- [WASM Workflows (wasm)](https://blazen.dev/docs/guides/wasm/workflows): Run Blazen workflows entirely in WebAssembly
- [Streaming (node)](https://blazen.dev/docs/guides/node/streaming): Stream events from running workflows in Node.js
- [Streaming (python)](https://blazen.dev/docs/guides/python/streaming): Stream events from running workflows in Python
- [Streaming (rust)](https://blazen.dev/docs/guides/rust/streaming): Stream events from running workflows in Rust
- [WASM Agent (wasm)](https://blazen.dev/docs/guides/wasm/agent): Run AI agents with tool calling in WebAssembly
- [Context (node)](https://blazen.dev/docs/guides/node/context): Share state between workflow steps in Node.js
- [Context (python)](https://blazen.dev/docs/guides/python/context): Share state between workflow steps in Python
- [Context (rust)](https://blazen.dev/docs/guides/rust/context): Share state between workflow steps in Rust
- [Edge Deployment (wasm)](https://blazen.dev/docs/guides/wasm/deployment): Deploy Blazen as a WASM component on ZLayer or edge platforms
- [Branching (node)](https://blazen.dev/docs/guides/node/branching): Conditional routing and fan-out in Node.js
- [Branching (python)](https://blazen.dev/docs/guides/python/branching): Conditional routing and fan-out in Python
- [Branching (rust)](https://blazen.dev/docs/guides/rust/branching): Conditional routing and fan-out in Rust
- [Local Inference (wasm)](https://blazen.dev/docs/guides/wasm/local_inference): Run embeddings and LLMs entirely in the browser with the Blazen WASM SDK
- [Human-in-the-Loop (node)](https://blazen.dev/docs/guides/node/human-in-the-loop): Build workflows that pause for human input in Node.js
- [Human-in-the-Loop (python)](https://blazen.dev/docs/guides/python/human-in-the-loop): Build workflows that pause for human input in Python
- [Human-in-the-Loop (rust)](https://blazen.dev/docs/guides/rust/human-in-the-loop): Build workflows that pause for human input in Rust
- [Multimodal Content (wasm)](https://blazen.dev/docs/guides/wasm/multimodal): Pass images, audio, video, files, 3D models, and CAD files through Blazen in the browser, edge workers, and embedded runtimes via @blazen/sdk
- [Middleware & Composition (node)](https://blazen.dev/docs/guides/node/middleware): Compose retry, caching, fallback, and custom middleware in Node.js
- [Middleware & Composition (python)](https://blazen.dev/docs/guides/python/middleware): Compose retry, caching, fallback, and custom middleware in Python
- [Multimodal Content (rust)](https://blazen.dev/docs/guides/rust/multimodal): Pass images, audio, video, files, 3D models, and CAD files through Blazen — and let tools accept them via content handles
- [Middleware & Composition (wasm)](https://blazen.dev/docs/guides/wasm/middleware): Compose retry, caching, fallback, and custom middleware in the WASM SDK
- [Embeddings (node)](https://blazen.dev/docs/guides/node/embeddings): Generate vector embeddings with Blazen in Node.js
- [Embeddings (python)](https://blazen.dev/docs/guides/python/embeddings): Generate vector embeddings with Blazen in Python
- [Multimodal Content (node)](https://blazen.dev/docs/guides/node/multimodal): Pass images, audio, video, files, 3D models, and CAD files through Blazen — and let tools accept them via content handles
- [Multimodal Content (python)](https://blazen.dev/docs/guides/python/multimodal): Pass images, audio, video, files, 3D models, and CAD files through Blazen — and let tools accept them via content handles
- [Distributed Workflows](https://blazen.dev/docs/guides/distributed): Run sub-workflows across machines with blazen-peer
- [Media Generation](https://blazen.dev/docs/guides/media-generation): Generate images, video, audio, and 3D models with fal.ai
- [Audio Transcription](https://blazen.dev/docs/guides/transcription): Convert speech to text with fal.ai or local whisper.cpp
- [Memory & Semantic Search](https://blazen.dev/docs/guides/memory): Store and retrieve documents with embedding- and SimHash-based search
- [Prompt Templates](https://blazen.dev/docs/guides/prompts): Reusable prompt templates with variable interpolation and a registry
- [Batch Completions](https://blazen.dev/docs/guides/batch-processing): Run many completion requests concurrently with bounded parallelism
- [Custom Providers (Subclassing)](https://blazen.dev/docs/guides/custom-providers): Bring your own LLM, embedding, TTS, image, or memory backend to Blazen
- [Local Inference](https://blazen.dev/docs/guides/local-inference): Run LLMs, embeddings, TTS, and transcription on-device
- [Chat Window (Token-Limited Conversations)](https://blazen.dev/docs/guides/chat-window): Maintain conversation history within a fixed token budget
- [Telemetry & Observability](https://blazen.dev/docs/guides/telemetry): Export traces, spans, and metrics with OTLP, Langfuse, or Prometheus
- [Multimodal Tools: Inputs and Results](https://blazen.dev/docs/guides/tool-multimodal): Pass images, audio, video, and files into tools as content handles, and return multimodal payloads from tools across every LLM provider

## API Reference

- [Rust API Reference (rust)](https://blazen.dev/docs/api/rust): Complete API reference for blazen-llm in Rust
- [Python API Reference (python)](https://blazen.dev/docs/api/python): Complete API reference for blazen in Python
- [Node.js API Reference (node)](https://blazen.dev/docs/api/node): Complete API reference for blazen in Node.js
- [WASM API Reference (wasm)](https://blazen.dev/docs/api/wasm): Complete API reference for the Blazen WebAssembly SDK

## Examples

- [Rust Examples (rust)](https://blazen.dev/docs/examples/rust): Complete runnable Rust examples for Blazen
- [Python Examples (python)](https://blazen.dev/docs/examples/python): Complete runnable Python examples for Blazen
- [Node.js Examples (node)](https://blazen.dev/docs/examples/node): Complete runnable Node.js examples for Blazen
- [WASM Examples (wasm)](https://blazen.dev/docs/examples/wasm): Example applications using the Blazen WebAssembly SDK