fluffbuzz infer is the canonical headless surface for provider-backed inference workflows. It intentionally exposes capability families rather than raw gateway RPC names or raw agent tool IDs.

Turn infer into a skill

Copy and paste this to an agent:
Read https://docs.fluffbuzz.com/cli/infer, then create a skill that routes my common workflows to `fluffbuzz infer`.
Focus on model runs, image generation, video generation, audio transcription, TTS, web search, and embeddings.
A good infer-based skill should:
  • map common user intents to the correct infer subcommand
  • include a few canonical infer examples for the workflows it covers
  • prefer fluffbuzz infer ... in examples and suggestions
  • avoid re-documenting the entire infer surface inside the skill body
Typical infer-focused skill coverage:
  • fluffbuzz infer model run
  • fluffbuzz infer image generate
  • fluffbuzz infer audio transcribe
  • fluffbuzz infer tts convert
  • fluffbuzz infer web search
  • fluffbuzz infer embedding create

Why use infer

fluffbuzz infer provides one consistent CLI for provider-backed inference tasks inside FluffBuzz. Benefits:
  • Use the providers and models already configured in FluffBuzz instead of wiring up one-off wrappers for each backend.
  • Keep model, image, audio transcription, TTS, video, web, and embedding workflows under one command tree.
  • Use a stable --json output shape for scripts, automation, and agent-driven workflows.
  • Prefer a first-party FluffBuzz surface when the task is fundamentally “run inference.”
  • Use the normal local path without requiring the gateway for most infer commands.

Command tree

 fluffbuzz infer
  list
  inspect

  model
    run
    list
    inspect
    providers
    auth login
    auth logout
    auth status

  image
    generate
    edit
    describe
    describe-many
    providers

  audio
    transcribe
    providers

  tts
    convert
    voices
    providers
    status
    enable
    disable
    set-provider

  video
    generate
    describe
    providers

  web
    search
    fetch
    providers

  embedding
    create
    providers

Common tasks

This table maps common inference tasks to the corresponding infer command.
Task | Command | Notes
Run a text/model prompt | fluffbuzz infer model run --prompt "..." --json | Uses the normal local path by default
Generate an image | fluffbuzz infer image generate --prompt "..." --json | Use image edit when starting from an existing file
Describe an image file | fluffbuzz infer image describe --file ./image.png --json | --model must be an image-capable <provider/model>
Transcribe audio | fluffbuzz infer audio transcribe --file ./memo.m4a --json | --model must be <provider/model>
Synthesize speech | fluffbuzz infer tts convert --text "..." --output ./speech.mp3 --json | tts status is gateway-oriented
Generate a video | fluffbuzz infer video generate --prompt "..." --json
Describe a video file | fluffbuzz infer video describe --file ./clip.mp4 --json | --model must be <provider/model>
Search the web | fluffbuzz infer web search --query "..." --json
Fetch a web page | fluffbuzz infer web fetch --url https://example.com --json
Create embeddings | fluffbuzz infer embedding create --text "..." --json

Behavior

  • fluffbuzz infer ... is the primary CLI surface for these workflows.
  • Use --json when the output will be consumed by another command or script.
  • Use --provider or --model provider/model when a specific backend is required.
  • For image describe, audio transcribe, and video describe, --model must use the form <provider/model>.
  • For image describe, an explicit --model runs that provider/model directly. The model must be image-capable in the model catalog or provider config. codex/<model> runs a bounded Codex app-server image-understanding turn; openai-codex/<model> uses the OpenAI Codex OAuth provider path.
  • Stateless execution commands default to local.
  • Gateway-managed state commands default to gateway.
  • The normal local path does not require the gateway to be running.
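
As a sketch of how the backend-selection rules above combine, the commands below use only the documented flags; openai/gpt-5.5 is a placeholder provider/model pair.
# Let FluffBuzz pick the configured default backend
fluffbuzz infer model run --prompt "Summarize this changelog entry" --json
# Pin a provider
fluffbuzz infer model run --prompt "Summarize this changelog entry" --provider openai --json
# Pin an exact backend with a provider-qualified model id (placeholder id)
fluffbuzz infer model run --prompt "Summarize this changelog entry" --model openai/gpt-5.5 --json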

Model

Use model for provider-backed text inference and model/provider inspection.
fluffbuzz infer model run --prompt "Reply with exactly: smoke-ok" --json
fluffbuzz infer model run --prompt "Summarize this changelog entry" --provider openai --json
fluffbuzz infer model providers --json
fluffbuzz infer model inspect --name gpt-5.5 --json
Notes:
  • model run reuses the agent runtime so provider/model overrides behave like normal agent execution.
  • model auth login, model auth logout, and model auth status manage saved provider auth state.
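
A minimal smoke-test sketch built from the examples above, assuming jq is available to read the documented ok field:
# Fail the script if the provider round trip does not succeed
if fluffbuzz infer model run --prompt "Reply with exactly: smoke-ok" --json | jq -e '.ok' > /dev/null; then
  echo "model backend reachable"
else
  echo "model backend check failed" >&2
  exit 1
fi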

Image

Use image for generation, editing, and description.
fluffbuzz infer image generate --prompt "friendly puppy illustration" --json
fluffbuzz infer image generate --prompt "cinematic product photo of headphones" --json
fluffbuzz infer image describe --file ./photo.jpg --json
fluffbuzz infer image describe --file ./ui-screenshot.png --model openai/gpt-4.1-mini --json
fluffbuzz infer image describe --file ./photo.jpg --model ollama/qwen2.5vl:7b --json
Notes:
  • Use image edit when starting from existing input files.
  • For image describe, --model must be an image-capable <provider/model>.
  • For local Ollama vision models, pull the model first and set OLLAMA_API_KEY to any placeholder value, for example ollama-local. See Ollama.
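
For a handful of files, a plain shell loop over image describe is enough; describe-many exists in the command tree for batch use, but its flags are not covered here, so this sketch sticks to the documented single-file form.
# Describe every screenshot in a directory, one JSON envelope per file
for f in ./screenshots/*.png; do
  fluffbuzz infer image describe --file "$f" --model openai/gpt-4.1-mini --json > "${f%.png}.describe.json"
done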

Audio

Use audio for file transcription.
fluffbuzz infer audio transcribe --file ./memo.m4a --json
fluffbuzz infer audio transcribe --file ./team-sync.m4a --language en --prompt "Focus on names and action items" --json
fluffbuzz infer audio transcribe --file ./memo.m4a --model openai/whisper-1 --json
Notes:
  • audio transcribe is for file transcription, not realtime session management.
  • --model must be <provider/model>.
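
A batch-transcription sketch using only the documented flags; it writes one JSON envelope per recording.
# Transcribe every .m4a in the current directory
for f in ./*.m4a; do
  fluffbuzz infer audio transcribe --file "$f" --model openai/whisper-1 --json > "${f%.m4a}.transcript.json"
done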

TTS

Use tts for speech synthesis and TTS provider state.
fluffbuzz infer tts convert --text "hello from fluffbuzz" --output ./hello.mp3 --json
fluffbuzz infer tts convert --text "Your build is complete" --output ./build-complete.mp3 --json
fluffbuzz infer tts providers --json
fluffbuzz infer tts status --json
Notes:
  • tts status defaults to gateway because it reflects gateway-managed TTS state.
  • Use tts providers, tts voices, and tts set-provider to inspect and configure TTS behavior.
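
A small sketch of the notification use case from the examples above; it relies only on the documented --text and --output flags, and make build stands in for any long-running command.
# Announce the result of a long-running command
if make build; then
  fluffbuzz infer tts convert --text "Your build is complete" --output ./build-complete.mp3 --json
else
  fluffbuzz infer tts convert --text "Your build failed" --output ./build-failed.mp3 --json
fi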

Video

Use video for generation and description.
fluffbuzz infer video generate --prompt "cinematic sunset over the ocean" --json
fluffbuzz infer video generate --prompt "slow drone shot over a forest lake" --json
fluffbuzz infer video describe --file ./clip.mp4 --json
fluffbuzz infer video describe --file ./clip.mp4 --model openai/gpt-4.1-mini --json
Notes:
  • --model must be <provider/model> for video describe.

Web

Use web for search and fetch workflows.
fluffbuzz infer web search --query "FluffBuzz docs" --json
fluffbuzz infer web search --query "FluffBuzz infer web providers" --json
fluffbuzz infer web fetch --url https://docs.fluffbuzz.com/cli/infer --json
fluffbuzz infer web providers --json
Notes:
  • Use web providers to inspect available, configured, and selected providers.
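
A hypothetical search-then-fetch chain; the shape of search results inside outputs is not documented here, so the jq path below is a placeholder to adjust to the real envelope.
# Search, then fetch the first result (placeholder jq path)
fluffbuzz infer web search --query "FluffBuzz infer web providers" --json > results.json
url=$(jq -r '.outputs[0].url // empty' results.json)  # assumption: adjust to the actual output shape
[ -n "$url" ] && fluffbuzz infer web fetch --url "$url" --json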

Embedding

Use embedding for vector creation and embedding provider inspection.
fluffbuzz infer embedding create --text "friendly puppy" --json
fluffbuzz infer embedding create --text "customer support ticket: delayed shipment" --model openai/text-embedding-3-large --json
fluffbuzz infer embedding providers --json
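
A sketch for embedding several strings in one pass, using only the documented --text, --model, and --json flags:
# Create one embedding envelope per input string
i=0
for text in "friendly puppy" "grumpy cat" "sleepy hedgehog"; do
  fluffbuzz infer embedding create --text "$text" --model openai/text-embedding-3-large --json > "embedding-$i.json"
  i=$((i + 1))
done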

JSON output

Infer commands normalize JSON output under a shared envelope:
{
  "ok": true,
  "capability": "image.generate",
  "transport": "local",
  "provider": "openai",
  "model": "gpt-image-2",
  "attempts": [],
  "outputs": []
}
Top-level fields are stable:
  • ok
  • capability
  • transport
  • provider
  • model
  • attempts
  • outputs
  • error
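
A sketch of consuming the envelope from a script, assuming jq is installed; it only touches the stable top-level fields listed above.
result=$(fluffbuzz infer image generate --prompt "friendly puppy illustration" --json)
if [ "$(echo "$result" | jq -r '.ok')" = "true" ]; then
  echo "served by $(echo "$result" | jq -r '.provider')/$(echo "$result" | jq -r '.model')"
  echo "$result" | jq '.outputs'
else
  echo "$result" | jq '.error' >&2
fi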

Common pitfalls

# Bad: there is no media group; image lives directly under infer
fluffbuzz infer media image generate --prompt "friendly puppy"

# Good
fluffbuzz infer image generate --prompt "friendly puppy"

# Bad: --model must be provider-qualified as <provider/model>
fluffbuzz infer audio transcribe --file ./memo.m4a --model whisper-1 --json

# Good
fluffbuzz infer audio transcribe --file ./memo.m4a --model openai/whisper-1 --json

Notes

  • fluffbuzz capability ... is an alias for fluffbuzz infer ....
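
Because capability is an alias, these two invocations are equivalent:
fluffbuzz infer web search --query "FluffBuzz docs" --json
fluffbuzz capability web search --query "FluffBuzz docs" --json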