Feature Request: (Settings) Option to Disable All Actions Using ChatGPT / LLMs

I think the inclusion of functions to use ChatGPT and other generative AI in BTT is great. However, some of us have data, work or personal, that must not be sent to an LLM that might use it for training.

Can you consider adding a settings option to disable all functions that send data to an external/offboard service? (More advanced would be to disable these in specific apps...)

At the very least, I'd like to know which actions these are, as it's not entirely obvious.
For instance, I'm fairly sure these actions are off-board:

  • anything that mentions "ChatGPT"
  • Google search

But what about OCR? Does it use an on-board service or an external one? What about "Find/search text on screen" and "Find/search image on screen"? Are there any other off-board services that might leak data?

Thanks for considering!

I might not be understanding your message correctly – couldn't you simply not use those Actions?

Indeed, all ChatGPT-related actions make an HTTP request to OpenAI's API with a payload containing data that you provide, but the contents of that payload are under your control.
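For illustration, the request body for the public OpenAI Chat Completions API looks roughly like the sketch below. (This is the API's documented shape; the exact fields BTT includes are an assumption here.) The point is that only the text you put into the prompt leaves your machine:

```python
import json

# Sketch of an OpenAI-style chat payload. Field names follow the
# public Chat Completions API; what BTT actually sends may differ.
def build_chat_payload(prompt, model="gpt-4o-mini"):
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })

# Only the text you place in the prompt is transmitted:
payload = build_chat_payload("Summarize this selection")
```

So if you never route sensitive text into those actions, nothing sensitive is in the payload.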

I recommend that you read BTT's privacy policy and check out this thread:

You could also install a proxy to sniff all incoming and outgoing HTTP requests being made on your Mac and filter for BetterTouchTool (I recommend Proxyman). You'll be able to check if there's any data leaking.


These are all on-board, running locally.

So simply not using the ChatGPT actions should solve your issue. Or you can provide your own URLs for those actions if you have a compatible LLM running locally.

Right, you can use LM Studio (https://lmstudio.ai), which runs an OpenAI-compatible API locally, so no data leaks. I just got off a 12-hour flight, and having an LLM available to ask questions without internet while working on the plane was awesome...
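To make the "provide your own URL" idea concrete, here's a minimal sketch of aiming the same OpenAI-style request at a local server instead of api.openai.com. The endpoint assumes LM Studio's default local server address (check your LM Studio settings; the port and the model name are assumptions):

```python
import json
import urllib.request

# LM Studio's local OpenAI-compatible endpoint (default port assumed).
LOCAL_API = "http://localhost:1234/v1/chat/completions"

def build_local_request(prompt, model="local-model"):
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        LOCAL_API,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_local_request("Hello from the plane")
# Nothing is sent until you call urllib.request.urlopen(req),
# and even then it only goes to localhost.
```

Since the request shape is the same, any action that lets you override the API URL can be redirected this way without other changes.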


I appreciate the responses, especially the confirmation from Andreas that those other actions are on-board.

I'll take a look at the suggested on-board LLM, that's an interesting idea.