I have a question and a suggestion for the "Transform & Replace Selection With ChatGPT" action. Would it be possible and feasible to use other LLMs in addition to the default ChatGPT from OpenAI?
The idea would be to connect to other LLM providers like Groq or OpenRouter through their OpenAI-compatible APIs:
customize the API URL and key, and specify the desired language model.
You can achieve this using Transform & Replace Selection With JavaScript:
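Something along these lines should work (a minimal sketch: it assumes BTT passes the selected text to the arrow function as clipboardContentString and that fetch is available in BTT's JavaScript context; the URL, key, and model name are placeholders to adjust for your setup):

```javascript
// Sketch for "Transform & Replace Selection With JavaScript".
// Assumptions: BTT hands the selection in as clipboardContentString and
// fetch works in BTT's JS context. Works with any OpenAI-compatible
// server (LM Studio, GPT4All, OpenRouter, ...).
async (clipboardContentString) => {
  const API_URL = "http://localhost:4891/v1/chat/completions"; // your endpoint
  const API_KEY = "not-needed-for-local"; // use a real key for hosted providers
  const MODEL = "lmstudio-community/Phi-3-mini-4k-instruct-BPE-fix-GGUF";
  const BEHAVIOR_DESCRIPTION =
    "You are an AI assistant that helps improve the clarity and grammar of written English texts. Please improve the following text: ";

  const response = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${API_KEY}`
    },
    body: JSON.stringify({
      model: MODEL,
      messages: [
        { role: "system", content: BEHAVIOR_DESCRIPTION },
        { role: "user", content: clipboardContentString }
      ]
    })
  });
  const result = await response.json();
  // Return the model's reply; BTT replaces the selection with it.
  return result.choices[0].message.content.trim();
}
```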
I already run local models and also openrouter.ai (it wraps many models behind a single API address). I use a custom endpoint and can tweak other things like the system prompt. Allowing other OpenAI-compatible APIs in the ChatGPT action would be straightforward and great.
As a next step, I think expanding support in "Transform & Replace Selection With ChatGPT" from OpenAI to other providers would be another feature request. Of course OpenRouter is neat, but it is still a single provider. Unfortunately, for native support each API is different (Anthropic vs. Google vs. OpenAI), and the APIs change relatively quickly. There are other differences, like the model names and parameters each API supports, so it will need some thought about how configurable this should be. It is an additional burden, so the development cost of maintaining this feature would be higher than for other actions...
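To illustrate how much the request shapes differ, here is a rough side-by-side sketch based on my reading of the public docs (field names and model IDs are examples and drift over time):

```javascript
// The "same" request against two providers (illustrative only).

// OpenAI-compatible chat/completions: Bearer auth, system prompt
// is just another message role.
const openaiRequest = {
  url: "https://api.openai.com/v1/chat/completions",
  headers: { "Authorization": "Bearer <key>" },
  body: {
    model: "gpt-4o",
    messages: [
      { role: "system", content: "You are an editor." },
      { role: "user", content: "Fix this text." }
    ]
  }
};

// Anthropic Messages API: different endpoint and auth header, a
// top-level system field, and a required max_tokens parameter.
const anthropicRequest = {
  url: "https://api.anthropic.com/v1/messages",
  headers: { "x-api-key": "<key>", "anthropic-version": "2023-06-01" },
  body: {
    model: "claude-3-5-sonnet-20240620",
    max_tokens: 1024,
    system: "You are an editor.",
    messages: [{ role: "user", content: "Fix this text." }]
  }
};
```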
Here you can see the settings, and then an example where I select some text and use hyperkey+1 to improve the text's clarity. I am using LM Studio (API address http://localhost:4891/v1/chat/completions) as a local server with the Phi-3 model loaded.
I also tried using Clipboard Manager actions as a way to store the JS more easily, but I don't use this actively (it's included just for show).
For the editor and the assistant, the system prompts are:

// Editor:
const BEHAVIOR_DESCRIPTION = "You are an AI assistant that helps improve the clarity and grammar of written English texts. Please improve the following text: ";

// Assistant:
const BEHAVIOR_DESCRIPTION = "You are a helpful AI assistant who gives precise and detailed step-by-step answers.";
You will need to tweak the address and probably the model names for OpenRouter etc. Local LLMs are awesome enough that I rarely use online models anymore...
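For OpenRouter, for example, the changes amount to roughly this (the model ID is just an example; check their catalogue and use a real key):

```javascript
// Illustrative OpenRouter settings; OpenRouter uses "provider/model"
// identifiers and, unlike a local server, requires a real API key.
const API_URL = "https://openrouter.ai/api/v1/chat/completions";
const API_KEY = "sk-or-..."; // your OpenRouter key
const MODEL = "meta-llama/llama-3-70b-instruct";
```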
I'll soon add some configuration options to set custom URLs for the ChatGPT actions in BTT - then at least it should work with GPT4All and the like. For now, @iandol's solution is great!
Thank you anyway for sharing. I managed to use a Groq model with the ChatGPT script provided on the blog (https://folivora.ai/blog). However, when I trigger the action, it doesn't replace the selection; it keeps the original text and appends the output after it. How can I make it replace the selected text?
Thank you for the tip. Also, when I trigger the action, it does not show the HUD indicating that the action is in progress or completed, as seen in the demonstration video on the blog.
I tested the new feature with a custom URL, but when I trigger the action, nothing happens. I filled in the model name and API key, and set the URL to https://api.groq.com/openai/v1/chat/completions.
I also noticed that once I filled in the custom URL, my other actions that use the native ChatGPT function (without a custom URL) broke and no longer work.
Ah, unfortunately Groq is not yet API-compatible with the OpenAI chat/completions endpoint. They seem to be working on it. In that case you'll need to stick with the JavaScript approach for now.
For local LLMs no API key is used or needed, but the URL is. I can enter a dummy key as a workaround, but I'll just mention that this coupling is not ideal for local AI tools...
I can confirm that with these settings, a local model running in LM Studio works well with this action.
One small issue: while the action itself works, for some reason "Test Execute" does not; it sends no request (I can check the LM Studio server log directly to confirm this)...
[2024-06-21 08:20:04.696] [INFO] Received POST request to /v1/chat/completions with body: {
  "model": "lmstudio-community/Phi-3-mini-4k-instruct-BPE-fix-GGUF",
  "messages": [
    {
      "content": "You are a helpful assistant who interprets every input as raw text unless instructed otherwise. Your answers do not include a description unless prompted to do so.",
      "role": "system"
    },
    {
      "content": "You are an English editor who will correct the following text for clarity and precision of language use: : What is it goona a be huh?",
      "role": "user"
    }
  ]
}
Note that there is a system prompt and a user prompt. It would be good if we could ALSO edit the system prompt, as it is important for defining the behaviour, which may depend on the model...
OR: only allow the system prompt to be set, and pass the selected text as the user prompt unchanged. That way you can instruct the model via the system prompt while the user prompt stays raw text.
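In other words, the request body would conceptually look like this (a sketch of the suggestion with hypothetical variable names, not BTT's actual implementation):

```javascript
// Proposed behaviour: the user configures only the system prompt;
// the user message is always the raw selected text.
const messages = [
  { role: "system", content: userConfiguredSystemPrompt }, // editable in BTT
  { role: "user", content: selectedText }                  // raw selection, untouched
];
```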