Using BTT's MCP server from a separate user sandbox — a few questions!
Running BTT v6.413 on an Apple Silicon (M3) MacBook. I enabled the external MCP server (`BTTExternalMCPServerEnabled`) under a dedicated macOS Standard user, so BTT runs sandboxed away from my primary user, and reverse-tunnel the SSE endpoint to a Linux VM where Claude Code (the MCP client) lives. Cross-user, FUS-backgrounded (fast-user-switching) screenshots come back clean; really pleased with it. I run a fairly complex scaffold built on top of Claude Code (Daniel Miessler's PAI) and am excited about how BTT might extend its capabilities. As a starting point, the integration should let my PAI effectively emulate Claude Cowork's tools, but with the brains, skills, and memory system it's missing.
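For reference, the reverse-tunnel side is nothing exotic. A minimal sketch, run on the Mac as the sandbox user; the port (44444) and VM host are placeholders from my setup, not verified BTT defaults:

```shell
# Placeholders: BTT's loopback-only MCP port and the VM's SSH target.
PORT=44444
VM="agent@linux-vm"
# -N: no remote command; -R: expose the Mac's loopback port on the VM's loopback,
# so Claude Code on the VM reaches BTT at http://127.0.0.1:${PORT}/sse
TUNNEL="ssh -N -R 127.0.0.1:${PORT}:127.0.0.1:${PORT} ${VM}"
echo "$TUNNEL"
```

In practice I keep this alive with autossh under a LaunchAgent rather than running it by hand.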
A few questions:
1. **Dedicated docs?** `/docs/ai-assistants/mcp` covers BTT *consuming* MCP servers (shell, sequential-thinking, etc.). Is there a page for BTT *exposing itself* as an MCP server? I found the feature via the UI and prefs, but there's no docs page for it.
2. **Transport roadmap:** the MCP server speaks SSE at `/sse` and JSON-RPC at `/messages`. Is `/mcp` (streamable HTTP, the newer MCP transport) planned, or is SSE the canonical long-term transport?
3. **Auth:** I don't see a token/bearer mechanism on the external MCP server; loopback binding is the entire trust boundary. Is that by design, and is optional bearer auth on the roadmap?
4. **Assistant auto-activation:** external MCP clients hit "tool X is known but not available" until they call `activate_btt_assistant` each session. Is there a setting to default-activate an assistant for new external MCP connections, or should clients just always call activate first?
5. **Screenshot save path:** screenshots go to `NSTemporaryDirectory()` under the MCP-hosting user, which is TCC-protected, so other users on the same Mac can't read them directly. I currently bounce via `ai.run_shell_script` + `/Users/Shared/`. Is there a pref or defaults key to redirect the output path? If not, is it worth considering?
6. **Sandboxing pattern:** any guidance for running BTT in an agent-controlled sandbox (dedicated Standard user, loopback + tunnel, TCC grants, FUS-backgrounded)? Happy to contribute my setup scripts if useful.
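For context on the transport and auto-activation points, this is roughly how my client drives the current setup. The port is a placeholder from my tunnel, the JSON-RPC envelope is just the standard MCP `tools/call` shape, and only the tool name `activate_btt_assistant` comes from BTT; I haven't verified whether it takes arguments, so the empty `arguments` object is an assumption:

```shell
# Placeholder endpoint (my tunnel port, not a BTT default).
BASE="http://127.0.0.1:44444"
# 1) Open the long-lived event stream (separate terminal / background):
#      curl -N "${BASE}/sse"
# 2) Once per session, activate an assistant before calling other tools,
#    via a standard MCP tools/call message POSTed to /messages:
ACTIVATE='{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"activate_btt_assistant","arguments":{}}}'
#      curl -X POST "${BASE}/messages" -H 'Content-Type: application/json' -d "$ACTIVATE"
echo "$ACTIVATE"
```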
Thanks for the brilliant feature @Andreas_Hegenberg; the MCP surface is delightful to work against!
Ah, very interesting use case! The MCP server is very new and currently mostly a nice playground for experiments (I think you are the first one to discover it). I'll think about all of your points.
For now, just a few answers:
(1) I'm currently redoing all the AI docs and will make sure to include the MCP stuff.
(2) Streamable HTTP will come with one of the next builds.
(3) I haven't thought about this yet, but in scenarios like yours it would probably make sense. Need to think about it.
(4) Good idea; adding an auto-activation option should be straightforward.
(5) There is no real "screenshot save path". Do you know which tool is currently being called by your assistant? It should be easy to create a custom skill that takes screenshots and saves them to a specific folder. I'll give it a try later.
(5) Which tool I'm calling for screenshots: the built-in screenshot tool exposed via external MCP. The call I'm hitting is `{ target: 2 }` (main screen); occasionally `screenshot_window` with a known window id. Output lands at `NSTemporaryDirectory()/BTTScreenshots/screenshot_screen_2_.png`.
That path is per-user (`pai-playground` in my setup) and TCC-protected, so my primary user (and, via scp, the VM) can't read it directly; that's why I currently bounce through `ai.run_shell_script` → `/Users/Shared/` in a follow-up `run_javascript` call. A custom skill with a configurable save path, or simply a world-readable default location when exposed over MCP, would be ideal.
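Concretely, the bounce step I run through `ai.run_shell_script` looks like this sketch. The real paths are `$TMPDIR/BTTScreenshots` and `/Users/Shared`; the sketch substitutes a temp sandbox so it runs anywhere, and the filename is a stand-in for the real timestamped one:

```shell
# Demo sandbox; in production SRC is $TMPDIR/BTTScreenshots and
# DEST is /Users/Shared/btt-screenshots.
BASE_DIR="$(mktemp -d)"
SRC="$BASE_DIR/BTTScreenshots"
DEST="$BASE_DIR/shared"
mkdir -p "$SRC" "$DEST"
: > "$SRC/screenshot_screen_2_demo.png"   # stand-in for the real screenshot
# Copy out of the TCC-protected per-user temp dir and make it world-readable:
cp "$SRC"/screenshot_screen_2_*.png "$DEST"/
chmod 644 "$DEST"/*.png
ls "$DEST"
```

From there, scp on the VM side picks the file up from the shared location.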
(2) Streamable HTTP: glad that's coming. If you have a pre-release build you'd like validated against the reverse-tunnel + cross-user setup before shipping, I'm happy to run it.
(6) Sandbox pattern: if it'd be useful, I can polish the Standard-user sandbox setup scripts into a public gist and write up the pattern (user creation, TCC grants, loopback binding, autossh LaunchAgent, FUS-background caveats) as a forum post or guide. You could link it from the eventual docs refresh if it fits.
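For a taste of what those scripts contain, the autossh LaunchAgent piece is a standard launchd job. A sketch, with the label, autossh path, port, and host all placeholders from my setup (nothing here ships with BTT):

```shell
# Write a minimal launchd agent that keeps the reverse tunnel alive.
# In real use this lives in ~/Library/LaunchAgents/; a temp dir here for the demo.
PLIST="$(mktemp -d)/com.example.btt-tunnel.plist"
cat > "$PLIST" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.btt-tunnel</string>
    <key>ProgramArguments</key>
    <array>
        <string>/opt/homebrew/bin/autossh</string>
        <string>-M</string><string>0</string>
        <string>-N</string>
        <string>-R</string><string>127.0.0.1:44444:127.0.0.1:44444</string>
        <string>agent@linux-vm</string>
    </array>
    <key>RunAtLoad</key><true/>
    <key>KeepAlive</key><true/>
</dict>
</plist>
EOF
# Load with: launchctl bootstrap gui/$(id -u) "$PLIST"
echo "wrote $PLIST"
```

`KeepAlive` makes launchd restart autossh if it dies, which covers the FUS-background case where the session comes and goes.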
One minor doc correction: `send_shortcut` takes `key` + `mods` as two params, not a combined string. This tripped me up when calling it from inside `run_javascript`. The `/ai-assistants/tools` docs are correct; it's just easy to misread.