The `codex` plugin lets FluffBuzz run embedded agent turns through the
Codex app-server instead of the built-in PI harness.
Use this when you want Codex to own the low-level agent session: model
discovery, native thread resume, native compaction, and app-server execution.
FluffBuzz still owns chat channels, session files, model selection, tools,
approvals, media delivery, and the visible transcript mirror.
Native Codex turns also respect the shared plugin hooks so prompt shims,
compaction-aware automation, tool middleware, and lifecycle observers stay
aligned with the PI harness:
- `before_prompt_build`
- `before_compaction`, `after_compaction`
- `llm_input`, `llm_output`
- `tool_result`, `after_tool_call`
- `before_message_write`
- `agent_end`
- `tool_result` middleware
The harness is off by default. New configs should keep OpenAI model refs
canonical as `openai/gpt-*` and explicitly set
`embeddedHarness.runtime: "codex"` or `FLUFFBUZZ_AGENT_RUNTIME=codex` when they
want native app-server execution. Legacy `codex/*` model refs still auto-select
the harness for compatibility.
## Pick the right model prefix
OpenAI-family routes are prefix-specific. Use `openai-codex/*` when you want
Codex OAuth through PI; use `openai/*` when you want direct OpenAI API access or
when you are forcing the native Codex app-server harness:
| Model ref | Runtime path | Use when |
|---|---|---|
| `openai/gpt-5.4` | OpenAI provider through FluffBuzz/PI plumbing | You want current direct OpenAI Platform API access with `OPENAI_API_KEY`. |
| `openai-codex/gpt-5.5` | OpenAI Codex OAuth through FluffBuzz/PI | You want ChatGPT/Codex subscription auth with the default PI runner. |
| `openai/gpt-5.5` + `embeddedHarness.runtime: "codex"` | Codex app-server harness | You want native Codex app-server execution for the embedded agent turn. |
For GPT-5.5, that means `openai-codex/gpt-5.5` for PI OAuth, or
`openai/gpt-5.5` with the Codex app-server harness. Direct API-key access for
`openai/gpt-5.5` becomes available once OpenAI enables GPT-5.5 on the public API.
Legacy `codex/gpt-*` refs remain accepted as compatibility aliases. New PI
Codex OAuth configs should use `openai-codex/gpt-*`; new native app-server
harness configs should use `openai/gpt-*` plus `embeddedHarness.runtime: "codex"`.
`agents.defaults.imageModel` follows the same prefix split. Use
`openai-codex/gpt-*` when image understanding should run through the OpenAI
Codex OAuth provider path. Use `codex/gpt-*` when image understanding should run
through a bounded Codex app-server turn. The Codex app-server model must
advertise image input support; text-only Codex models fail before the media turn
starts.
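As a sketch, assuming a JSON5 config file (`agents.defaults.imageModel` is the documented key; the surrounding nesting is illustrative):

```json5
{
  agents: {
    defaults: {
      // Image understanding via the OpenAI Codex OAuth provider path.
      imageModel: "openai-codex/gpt-5.5",
      // Or route it through a bounded Codex app-server turn instead;
      // the app-server model must advertise image input support:
      // imageModel: "codex/gpt-5.5",
    },
  },
}
```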
Use `/status` to confirm the effective harness for the current session. If the
selection is surprising, enable debug logging for the agents/harness subsystem
and inspect the gateway’s structured `agent harness selected` record. It
includes the selected harness id, selection reason, runtime/fallback policy, and,
in auto mode, each plugin candidate’s support result.
Harness selection is not a live session control. When an embedded turn runs,
FluffBuzz records the selected harness id on that session and keeps using it for
later turns in the same session id. Change `embeddedHarness` config or
`FLUFFBUZZ_AGENT_RUNTIME` when you want future sessions to use another harness;
use `/new` or `/reset` to start a fresh session before switching an existing
conversation between PI and Codex. This avoids replaying one transcript through
two incompatible native session systems.
Legacy sessions created before harness pins are treated as PI-pinned once they
have transcript history. Use `/new` or `/reset` to opt that conversation into
Codex after changing config.
`/status` shows the effective non-PI harness next to `Fast`, for example
`Fast · codex`. The default PI harness remains `Runner: pi (embedded)` and does
not add a separate harness badge.
## Requirements
- FluffBuzz with the bundled `codex` plugin available.
- Codex app-server 0.118.0 or newer.
- Codex auth available to the app-server process: `OPENAI_API_KEY`, plus
  optional Codex CLI files such as `~/.codex/auth.json` and
  `~/.codex/config.toml`. Use the same auth material your local Codex app-server
  uses.
## Minimal config
Use `openai/gpt-5.5`, enable the bundled plugin, and force the `codex` harness:
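A minimal sketch, assuming a JSON5 config file and that `embeddedHarness` nests under `agents.defaults` (adjust both to your actual schema):

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      // Force the native Codex app-server harness for embedded turns.
      embeddedHarness: { runtime: "codex" },
    },
  },
  plugins: {
    entries: {
      // Enable the bundled codex plugin.
      codex: { enabled: true },
    },
  },
}
```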
If you restrict plugins with `plugins.allow`, include `codex` there too:
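For example (the second entry is a placeholder):

```json5
{
  plugins: {
    // When an allow list is present, codex must be on it.
    allow: ["codex", "some-other-plugin"],
  },
}
```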
Configs that set `agents.defaults.model` or an agent model to
`codex/<model>` still auto-enable the bundled `codex` plugin. New configs should
prefer `openai/<model>` plus the explicit `embeddedHarness` entry above.
## Add Codex without replacing other models
Keep `runtime: "auto"` when you want legacy `codex/*` refs to select Codex and
PI for everything else. For new configs, prefer explicit `runtime: "codex"` on
the agents that should use the harness.
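A sketch of the explicit form, under the same assumed `agents.defaults.embeddedHarness` placement as in the minimal config:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      // "codex" pins embedded turns on this agent to the Codex harness.
      // With "auto", only legacy codex/* model refs would select Codex.
      embeddedHarness: { runtime: "codex" },
    },
  },
}
```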
- `/model gpt` or `/model openai/gpt-5.5` uses the Codex app-server harness for this config.
- `/model opus` uses the Anthropic provider path.
- If a non-Codex model is selected, PI remains the compatibility harness.
## Codex-only deployments
Disable PI fallback when you need to prove that every embedded agent turn uses the Codex harness:
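A sketch under the same assumed placement, using the documented `fallback: "none"` setting:

```json5
{
  agents: {
    defaults: {
      embeddedHarness: {
        runtime: "codex",
        // Fail the turn instead of silently falling back to PI.
        fallback: "none",
      },
    },
  },
}
```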
## Per-agent Codex

You can make one agent Codex-only while the default agent keeps normal auto-selection:
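A sketch only: the `reviewer` agent key and the per-agent nesting are hypothetical, so match them to your agents schema:

```json5
{
  agents: {
    defaults: {
      // The default agent keeps normal auto-selection.
      embeddedHarness: { runtime: "auto" },
    },
    // Hypothetical per-agent override block.
    reviewer: {
      model: "openai/gpt-5.5",
      embeddedHarness: { runtime: "codex", fallback: "none" },
    },
  },
}
```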
`/new` creates a fresh FluffBuzz session and the Codex harness creates or
resumes its sidecar app-server thread as needed. `/reset` clears the FluffBuzz
session binding for that thread and lets the next turn resolve the harness from
current config again.
## Model discovery
By default, the Codex plugin asks the app-server for available models. If
discovery fails or times out, it uses a bundled fallback catalog for:

- GPT-5.5
- GPT-5.4 mini
- GPT-5.2
Tune discovery with `plugins.entries.codex.config.discovery`:
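A sketch showing the one knob this page documents (`timeoutMs`, referenced again under Troubleshooting); the exact switch for disabling discovery is not named here:

```json5
{
  plugins: {
    entries: {
      codex: {
        config: {
          discovery: {
            // Lower this if live model discovery is slow.
            timeoutMs: 5000,
          },
        },
      },
    },
  },
}
```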
## App-server connection and policy
By default, the plugin starts Codex locally with `approvalPolicy: "never"`,
`approvalsReviewer: "user"`, and `sandbox: "danger-full-access"`. This is the
trusted local operator posture used
for autonomous heartbeats: Codex can use shell and network tools without
stopping on native approval prompts that nobody is around to answer.
To opt in to Codex guardian-reviewed approvals, set `appServer.mode: "guardian"`:
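For example, using the documented `plugins.entries.codex.config.appServer.mode` path:

```json5
{
  plugins: {
    entries: {
      codex: {
        config: {
          appServer: {
            // Expands to approvalPolicy: "on-request",
            // approvalsReviewer: "guardian_subagent",
            // and sandbox: "workspace-write".
            mode: "guardian",
          },
        },
      },
    },
  },
}
```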
The `guardian` preset expands to `approvalPolicy: "on-request"`, `approvalsReviewer: "guardian_subagent"`, and `sandbox: "workspace-write"`. Individual policy fields still override `mode`, so advanced deployments can mix the preset with explicit choices.
For an already-running app-server, use WebSocket transport:
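A sketch built from the `appServer` fields in the table below; the URL and token values are placeholders:

```json5
{
  plugins: {
    entries: {
      codex: {
        config: {
          appServer: {
            transport: "websocket",
            url: "ws://127.0.0.1:4500", // placeholder app-server URL
            authToken: "replace-me",    // optional bearer token
            headers: {},                // extra WebSocket headers if needed
          },
        },
      },
    },
  },
}
```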
`appServer` fields:
| Field | Default | Meaning |
|---|---|---|
| `transport` | `"stdio"` | `"stdio"` spawns Codex; `"websocket"` connects to `url`. |
| `command` | `"codex"` | Executable for stdio transport. |
| `args` | `["app-server", "--listen", "stdio://"]` | Arguments for stdio transport. |
| `url` | unset | WebSocket app-server URL. |
| `authToken` | unset | Bearer token for WebSocket transport. |
| `headers` | `{}` | Extra WebSocket headers. |
| `requestTimeoutMs` | `60000` | Timeout for app-server control-plane calls. |
| `mode` | `"yolo"` | Preset for YOLO or guardian-reviewed execution. |
| `approvalPolicy` | `"never"` | Native Codex approval policy sent to thread start/resume/turn. |
| `sandbox` | `"danger-full-access"` | Native Codex sandbox mode sent to thread start/resume. |
| `approvalsReviewer` | `"user"` | Use `"guardian_subagent"` to let Codex Guardian review prompts. |
| `serviceTier` | unset | Optional Codex app-server service tier: `"fast"`, `"flex"`, or `null`. Invalid legacy values are ignored. |
These settings can also be overridden with environment variables:

- `FLUFFBUZZ_CODEX_APP_SERVER_BIN`
- `FLUFFBUZZ_CODEX_APP_SERVER_ARGS`
- `FLUFFBUZZ_CODEX_APP_SERVER_MODE=yolo|guardian`
- `FLUFFBUZZ_CODEX_APP_SERVER_APPROVAL_POLICY`
- `FLUFFBUZZ_CODEX_APP_SERVER_SANDBOX`
`FLUFFBUZZ_CODEX_APP_SERVER_GUARDIAN=1` was removed. Use
`plugins.entries.codex.config.appServer.mode: "guardian"` instead, or
`FLUFFBUZZ_CODEX_APP_SERVER_MODE=guardian` for one-off local testing. Config is
preferred for repeatable deployments because it keeps the plugin behavior in the
same reviewed file as the rest of the Codex harness setup.
## Common recipes
Local Codex with default stdio transport:
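A sketch that spells out the stdio defaults from the `appServer` table alongside the minimal-config assumptions above:

```json5
{
  agents: {
    defaults: {
      model: "openai/gpt-5.5",
      embeddedHarness: { runtime: "codex" },
    },
  },
  plugins: {
    entries: {
      codex: {
        enabled: true,
        config: {
          appServer: {
            // These are the defaults, shown explicitly for clarity.
            transport: "stdio",
            command: "codex",
            args: ["app-server", "--listen", "stdio://"],
          },
        },
      },
    },
  },
}
```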
Switching `/model` from `openai/gpt-5.5` to `openai/gpt-5.2` keeps the
thread binding but asks Codex to continue with the newly selected model.
## Codex command
The bundled plugin registers `/codex` as an authorized slash command. It is
generic and works on any channel that supports FluffBuzz text commands.
Common forms:
- `/codex status` shows live app-server connectivity, models, account, rate limits, MCP servers, and skills.
- `/codex models` lists live Codex app-server models.
- `/codex threads [filter]` lists recent Codex threads.
- `/codex resume <thread-id>` attaches the current FluffBuzz session to an existing Codex thread.
- `/codex compact` asks Codex app-server to compact the attached thread.
- `/codex review` starts Codex native review for the attached thread.
- `/codex account` shows account and rate-limit status.
- `/codex mcp` lists Codex app-server MCP server status.
- `/codex skills` lists Codex app-server skills.
`/codex resume` writes the same sidecar binding file that the harness uses for
normal turns. On the next message, FluffBuzz resumes that Codex thread, passes the
currently selected FluffBuzz model into app-server, and keeps extended history
enabled.
The command surface requires Codex app-server 0.118.0 or newer. Individual
control methods are reported as `unsupported by this Codex app-server` if a
future or custom app-server does not expose that JSON-RPC method.
## Tools, media, and compaction
The Codex harness changes the low-level embedded agent executor only. FluffBuzz
still builds the tool list and receives dynamic tool results from the harness.
Text, images, video, music, TTS, approvals, and messaging-tool output continue
through the normal FluffBuzz delivery path.

Codex MCP tool approval elicitations are routed through FluffBuzz’s plugin
approval flow when Codex marks `_meta.codex_approval_kind` as
`"mcp_tool_call"`; other elicitation and free-form input requests still fail
closed.
When the selected model uses the Codex harness, native thread compaction is
delegated to Codex app-server. FluffBuzz keeps a transcript mirror for channel
history, search, `/new`, `/reset`, and future model or harness switching. The
mirror includes the user prompt, final assistant text, and lightweight Codex
reasoning or plan records when the app-server emits them. Today, FluffBuzz only
records native compaction start and completion signals. It does not yet expose a
human-readable compaction summary or an auditable list of which entries Codex
kept after compaction.
Media generation does not require PI. Image, video, music, PDF, TTS, and media
understanding continue to use the matching provider/model settings such as
`agents.defaults.imageGenerationModel`, `videoGenerationModel`, `pdfModel`, and
`messages.tts`.
## Troubleshooting
**Codex does not appear in `/model`:** enable `plugins.entries.codex.enabled`,
select an `openai/gpt-*` model with `embeddedHarness.runtime: "codex"` (or a
legacy `codex/*` ref), and check whether `plugins.allow` excludes `codex`.

**FluffBuzz uses PI instead of Codex:** if no Codex harness claims the run,
FluffBuzz may use PI as the compatibility backend. Set
`embeddedHarness.runtime: "codex"` to force Codex selection while testing, or
`embeddedHarness.fallback: "none"` to fail when no plugin harness matches. Once
Codex app-server is selected, its failures surface directly without extra
fallback config.

**The app-server is rejected:** upgrade Codex so the app-server handshake
reports version 0.118.0 or newer.

**Model discovery is slow:** lower `plugins.entries.codex.config.discovery.timeoutMs`
or disable discovery.

**WebSocket transport fails immediately:** check `appServer.url`, `authToken`,
and that the remote app-server speaks the same Codex app-server protocol version.

**A non-Codex model uses PI:** that is expected unless you forced
`embeddedHarness.runtime: "codex"` (or selected a legacy `codex/*` ref). Plain
`openai/gpt-*` and other provider refs stay on their normal provider path.