Problem
You want to let the AI use tools — browse, write files, send messages — but only after you see what it intends to do and can stop or scope the action.

Control what the AI can do before it does it

Every agent vendor tells you "full autonomy!" PebbleFlow's contract is the opposite: the agent describes what it's about to do, and you decide whether to let it — per action, per conversation, or forever. Here is how that looks end to end.

Before you start

Open Settings > Tools to see what tools are currently enabled for your mode. Disable anything you don't want the agent reaching for in the first place. Approval only applies to tools that are enabled — disabled tools are never offered to the model.
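The rule above can be sketched in a few lines: only enabled tools are ever offered to the model, so approval never comes up for a disabled one. This is an illustrative sketch with made-up tool names, not PebbleFlow's actual internals.

```python
# Hypothetical tool catalog; names are illustrative.
CATALOG = ["webSearch", "fileSystem.writeFile", "googleCalendar.listEvents"]

def tools_offered(catalog: list[str], enabled: set[str]) -> list[str]:
    """Filter the catalog down to the tools the model is allowed to see."""
    return [tool for tool in catalog if tool in enabled]

# A disabled tool never reaches the model, so it can never be proposed,
# and therefore never needs approval.
tools_offered(CATALOG, {"webSearch"})  # -> ["webSearch"]
```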

When the agent wants to use a tool

Ask for something that exercises a tool. "Search the web for recent changes to the EU AI Act," "list my calendar for tomorrow," "save this note to a file" — any of these will cause the agent to propose a tool call.

The agent does not proceed on its own. A modal titled Approval Required blocks the run and shows:

  • Intent — one line describing what the agent is trying to do, in its own words. This is the first thing your eye lands on.
  • Tool name and action — e.g., googleCalendar.listEvents, fileSystem.writeFile.
  • Risk level — color-coded shield: green for low-risk reads, amber for writes, red for destructive or send-money-style operations. High-risk tools also render a warning banner: "This is a high-risk operation. Only approve if you understand what it will do."
  • Details — an expandable section with the sanitized arguments. Secrets (API keys, tokens, passwords) are stripped before display. A Show raw JSON toggle inside reveals the untouched payload if you want to see exactly what goes on the wire.
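Putting the pieces together, the modal's contents and the argument sanitization might look something like the sketch below. Field names, the secret-key list, and the sanitizer logic are all assumptions for illustration; they are not PebbleFlow's API.

```python
# Keys treated as secrets in this sketch; a real implementation would
# match patterns, not just exact key names.
SECRET_KEYS = {"api_key", "token", "password", "secret"}

def sanitize_args(args: dict) -> dict:
    """Strip secret-looking values before they reach the Details section.
    The raw payload would be kept separately for the Show raw JSON toggle."""
    return {
        key: "[redacted]" if key.lower() in SECRET_KEYS else value
        for key, value in args.items()
    }

approval_request = {
    "intent": "Look up tomorrow's calendar events",   # shown first, in the agent's words
    "tool": "googleCalendar.listEvents",              # tool name and action
    "risk": "low",                                    # low | medium | high
    "args": sanitize_args({"calendarId": "primary", "token": "dummy-secret"}),
}
```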

Pick the scope that fits the risk

The footer offers four buttons. Each maps to a different trust decision:

  • Deny — Block this single action. The agent receives the denial and can try something else.
  • Approve Once — Allow this specific call, then ask again on the next invocation. Right for one-off tasks.
  • Approve for Conversation — Allow all uses of this tool inside the current thread. Resets when you start a new conversation. Right for the common case: "I'm doing research for the next hour, let the agent search freely."
  • Always Approve — Grant permanent permission across all conversations. Shown in red for high-risk tools so you can't click through by habit. Right for low-stakes tools you use constantly (like the calculator).
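The four buttons reduce to a grant lookup: Always Approve and Approve for Conversation create standing grants, while Deny and Approve Once do not. The sketch below is an assumption about how such a lookup could work, not PebbleFlow's internals.

```python
def resolve_approval(tool: str, grants: dict, conversation_id: str) -> str:
    """Return 'allow' if a standing grant covers this call, else 'ask'."""
    scope = grants.get(tool)
    if scope == "always":
        return "allow"                     # Always Approve: global, permanent
    if scope == ("conversation", conversation_id):
        return "allow"                     # Approve for Conversation: this thread only
    return "ask"  # Deny and Approve Once never create a standing grant

# Hypothetical grants table.
grants = {"calculator": "always", "webSearch": ("conversation", "thread-42")}

resolve_approval("calculator", grants, "thread-99")  # -> "allow" (global)
resolve_approval("webSearch", grants, "thread-42")   # -> "allow" (same thread)
resolve_approval("webSearch", grants, "thread-43")   # -> "ask" (new conversation resets)
```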

Watch it happen

Once approved, the tool call executes and appears in the sidepanel next to the conversation with its live status — pending → executing → succeeded or failed — alongside the arguments it actually ran with and the result it got back. Nothing runs that you didn't see; nothing runs silently in the background.
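The status lifecycle above can be written down as a small transition table. The state names come from the text; treating the two end states as terminal is an assumption.

```python
# pending -> executing -> succeeded | failed
TRANSITIONS = {
    "pending": {"executing"},
    "executing": {"succeeded", "failed"},
    "succeeded": set(),   # terminal
    "failed": set(),      # terminal
}

def advance(status: str, next_status: str) -> str:
    """Move a tool call to its next status, rejecting illegal jumps."""
    if next_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {next_status}")
    return next_status
```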

Stop at any time

If a tool you approved is misbehaving or the conversation is going somewhere you didn't intend, hit the stop button on the composer. The agent halts immediately, pending approvals are cleared, and any in-flight browser session closes cleanly. No orphaned tool calls, no runaway loops.

When the agent pauses to ask you a question

Approval is the agent asking for permission. Sometimes the agent needs input — it's not sure which of two paths you want, or it wants you to sign off on a plan before it spends a dozen tool calls executing it. For that it uses a separate human-in-the-loop panel that pauses the turn and asks you directly.

The panel shows a title, a markdown-rendered proposal (the agent's plan, its draft, its question), and four response buttons:

  • Approve — proceed as proposed.
  • Approve But… — proceed with the adjustments you type in the notes field that appears. The agent continues the same turn with your guidance attached.
  • Reject But… — don't do this, and here's why (or here's what I'd rather). Again, continues the same turn with your rationale.
  • Reject — abandon this direction entirely.

If the agent's proposal offers a set of concrete options, they show up as quick-pick chips above the buttons so you can click an answer instead of typing. High-risk proposals get a red warning banner above the body. Your response resolves inline — the agent doesn't start over, it just picks up the thread with your answer in context.

Make it the default: tell PebbleFlow to create a cautious mode

The workflow above is the per-action version. If you want this as your default posture for an entire class of tasks, the cleanest answer is a mode.

Just ask. In the composer, type something like:

"Create a new mode called Cautious that always presents a plan before executing any tools, asks me to approve the plan, and does not proceed until I confirm."

PebbleFlow will use its built-in configuration tool to spin up the mode for you — system prompt, name, defaults. Switch to that mode from the header mode picker and every agent run starts with a plan, pauses for your approval, and only then touches tools. Edit the mode the same way later: "Update the Cautious mode to also summarize results after each step." The agent edits itself.
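Conceptually, the generated mode boils down to a small bundle of configuration. The shape below is a guess for illustration only — the actual fields are whatever Modes & Personalities documents, and the tool list here is hypothetical.

```python
# Illustrative sketch of what the configuration tool might produce.
cautious_mode = {
    "name": "Cautious",
    "system_prompt": (
        "Before executing any tools, present a plan and ask the user to "
        "approve it. Do not proceed until the user confirms."
    ),
    "enabled_tools": ["webSearch", "fileSystem"],  # hypothetical subset
}
```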

Creating or updating custom modes is a Pro-tier capability. See Modes & Personalities for the full picture of what modes can configure — system prompt, enabled tools, variables, and more.

See also

  • Tools & Integrations — What's in the tool catalog and how to enable or disable each one
  • Modes & Personalities — What a mode can configure, and how to switch between them
  • Privacy & Data — Why credentials used by tools stay in Keychain (or equivalent) rather than in a shared config file
  • Why this design — The blog post that unpacks the security contrast with "autonomous" agents