Support GPT4All action?

There is a ChatGPT API transform action, but it requires web access and raises potential privacy concerns etc.

GPT4All (nomic.ai) offers a free local app with multiple open-source LLM model options optimised to run on a laptop. It has an API server that runs locally, so BTT could use that API much like the existing ChatGPT action, without any privacy concerns. That would be really great!

I haven't looked at GPT4All yet, but it's probably easy to integrate using a slightly adapted JavaScript snippet as described here: folivora.ai - Great Tools for your Mac! (the inline & JavaScript section)

If you know some JavaScript, have a look! I'll also try to install it, but probably won't get to it until the end of next week.

It apparently supports the OpenAI API with a single change of the URL:

GPT4All Chat comes with a built-in server mode allowing you to programmatically interact with any supported local LLM through a very familiar HTTP API. You can find the API documentation here.

Enabling server mode in the chat client will spin up an HTTP server running on localhost port 4891 (the reverse of 1984). You can enable the web server via GPT4All Chat > Settings > Enable web server.

Begin using local LLMs in your AI powered apps by changing a single line of code: the base path for requests. GPT4All Chat Client - GPT4All Documentation
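In other words, only the base URL changes; the request path and JSON body stay the same as the hosted OpenAI API. A minimal sketch, assuming the chat completions path is supported as the docs describe:

// Hosted OpenAI endpoint:
// const BASE_URL = "https://api.openai.com";
// Local GPT4All endpoint (server mode enabled, port 4891):
const BASE_URL = "http://localhost:4891";
// Requests then go to `${BASE_URL}/v1/chat/completions` with the usual OpenAI-style JSON body.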

It also supports curl like this (I assume this is OpenAI-compliant, though I have never used their API):

curl -X POST http://localhost:4891/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer NO_API_KEY" \
  -d "{\"model\": \"mistral-7b-instruct-v0.1.Q4_0.gguf\", \"prompt\": \"How can I use curl to POST to a local API?\", \"temperature\": 0.7, \"max_tokens\": 200, \"top_p\": 0.95, \"n\": 1, \"stream\": false}"

You specify the model you downloaded and the prompt, and it returns JSON. This seems very promising: the action you already developed for OpenAI could easily support a local inference option for anyone who installs this free and open-source tool.
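For example, here is roughly what that curl request looks like as a small JavaScript helper. This is just a sketch based on the curl command above, and it assumes the response follows the OpenAI completions format (generated text in choices[0].text):

// Minimal helper mirroring the curl example above (local GPT4All server on port 4891).
async function completeLocally(prompt) {
    const response = await fetch("http://localhost:4891/v1/completions", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            Authorization: "Bearer NO_API_KEY", // the local server does not check the key
        },
        body: JSON.stringify({
            model: "mistral-7b-instruct-v0.1.Q4_0.gguf",
            prompt: prompt,
            temperature: 0.7,
            max_tokens: 200,
            top_p: 0.95,
        }),
    });
    const data = await response.json();
    // For /v1/completions the generated text is in choices[0].text
    // (the chat endpoint used later in this thread returns choices[0].message.content instead).
    return data.choices[0].text;
}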

I'll have a look at the JavaScript solution in the meantime. Thank you as always for such an amazing tool!


Great, thanks for the details.
Then this should probably work with the "Run Real JavaScript" action:

async (clipboardContentString) => {
    // ------------------------------------------------
    // CONFIGURATION:
    // ------------------------------------------------

    // here you can pre-configure the AI with some configuration prompt:
    const BEHAVIOR_DESCRIPTION = "You are a helpful AI assistant.";

    // ------------------------------------------------
    // CODE starts here (no change required)
    // ------------------------------------------------

    let previousChat = [];
    showHUD("Transforming...", "brain", 100);
    try {

        if(BEHAVIOR_DESCRIPTION) {
            previousChat.push({"role": "system", "content": BEHAVIOR_DESCRIPTION});
        }

        previousChat.push({"role": "user", "content": clipboardContentString ? clipboardContentString : ''});

        const response = await fetch("http://localhost:4891/v1/chat/completions", {
            method: "POST",
            headers: {
                "Content-Type": "application/json",
                Authorization: `Bearer NO_API_KEY`,
            },
            body: JSON.stringify({
                model: "mistral-7b-instruct-v0.1.Q4_0.gguf",
                messages: previousChat
            }),
        });

        showHUD("Success", "checkmark.circle", 1);

        const data = await response.json();
        const text = data.choices[0].message.content;
        return `${clipboardContentString} ${text}`;
    } catch (error) {
        showHUD("Error", "x.circle", 3);
        return "Error";
    }

    function showHUD(text, icon, duration) {
        const hudConfig = {
            BTTActionHUDHideWhenOtherHUDAppears: true,
            BTTActionHUDDetail: text,
            BTTIconConfigSFSymbolName: icon,
            BTTActionHUDDuration: duration,
            BTTActionHUDSlideDirection: 0,
            BTTIconConfigImageHeight: 50,
            BTTActionHUDTitle: "",
            BTTIconConfigIconType: 2,
        };

        const actionDefinition = {
            BTTWaitForReply: false,
            BTTPredefinedActionType: 254,
            BTTHUDActionConfiguration: JSON.stringify(hudConfig),
        };

        callBTT("trigger_action", { json: JSON.stringify(actionDefinition) });
    }
};

Great, the JavaScript is basically working, brilliant, thank you!

It seems GPT4All needs those parameters; without them I get very truncated answers back:

Q: How can I use curl to POST to a local API?

A: To make a POST request using cURL on the command line, you will

Exactly like this curl request:

curl http://localhost:4891/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer NO_API_KEY" \
  -d "{\"model\": \"mistral-7b-instruct-v0.1.Q4_0.gguf\", \"prompt\": \"How can I use curl to POST to a local API?\"}"
{"choices":[{"finish_reason":"length","index":0,"logprobs":null,"references":[],"text":"To make a POST request using cURL on the command line, you will"}],"created":1708596439,"id":"foobarbaz","model":"Mistral Instruct","object":"text_completion","usage":{"completion_tokens":16,"prompt_tokens":22,"total_tokens":38}}

The finish_reason of "length" shows the answer was cut off by the default max_tokens. As it is all local, we don't need to worry about token limits or costs, so we can just set a large max_tokens. :sunglasses:

So this is just a tweak of the JavaScript that generates the body, which is totally cool!

Here is the JavaScript with the tweaks that add the parameters and give complete answers:

async (clipboardContentString) => {
    // ------------------------------------------------
    // CONFIGURATION:
    // ------------------------------------------------

    // select the model (see the GPT4All application support folder)
    const MODEL = "mistral-7b-instruct-v0.1.Q4_0.gguf";
    // here you can pre-configure the AI with some configuration prompt:
    const BEHAVIOR_DESCRIPTION = "You are a helpful AI assistant.";

    // ------------------------------------------------
    // CODE starts here (no change required)
    // ------------------------------------------------

    let previousChat = [];
    showHUD("Transforming...", "brain", 100);
    try {

        if(BEHAVIOR_DESCRIPTION) {
                previousChat.push({"role": "system", "content": BEHAVIOR_DESCRIPTION});	
        }
    
        previousChat.push({"role": "user", "content": clipboardContentString ? clipboardContentString : ''});
        
        const response = await fetch("http://localhost:4891/v1/chat/completions", {
            method: "POST",
            headers: {
                "Content-Type": "application/json",
                Authorization: "Bearer NO_API_KEY",
            },
            body: JSON.stringify({
                model: MODEL,
                temperature: 0.7,
                max_tokens: 2048,
                top_p: 0.4,
                messages: previousChat
            }),
        });

        showHUD("Success", "checkmark.circle", 1);

        const data = await response.json();
        const text = data.choices[0].message.content;
        return `${clipboardContentString} ${text}`;
    } catch (error) {
        showHUD("Error", "x.circle", 3);
        return "Error";
    }

    function showHUD(text, icon, duration) {
        const hudConfig = {
            BTTActionHUDHideWhenOtherHUDAppears: true,
            BTTActionHUDDetail: text,
            BTTIconConfigSFSymbolName: icon,
            BTTActionHUDDuration: duration,
            BTTActionHUDSlideDirection: 0,
            BTTIconConfigImageHeight: 50,
            BTTActionHUDTitle: "",
            BTTIconConfigIconType: 2,
        };

        const actionDefinition = {
            BTTWaitForReply: false,
            BTTPredefinedActionType: 254,
            BTTHUDActionConfiguration: JSON.stringify(hudConfig),
        };

        callBTT("trigger_action", { json: JSON.stringify(actionDefinition) });
    }
};

Great, I'll also try it soon :slight_smile:


I think there are some typos in the JavaScript from that article: there are backticks (`) in a couple of places that I think should be double quotes ("), on the lines starting with Authorization and return.

The backtick (`) is used in JavaScript for template strings :slight_smile:

Ah, I've never used JavaScript; I thought it was some markdown error :laughing:

There is a second local app, LM Studio, that uses the same model formats as GPT4All and also supports the same API (you need to configure LM Studio to use port 4891 so the script above works unchanged):

You can swap between LM Studio and GPT4All, and BTT's JavaScript action works well with both tools. LM Studio has a slightly more powerful GUI and, for example, exposes the local server logs, so you can watch the model working as you run the BTT action. It uses the model selected in its GUI and ignores the model field in the JSON request that GPT4All uses. As both tools use the same model files, you can symlink the models and only download a model once. I've tested with Google's latest model Gemma, Microsoft's Phi and Mistral Instruct. This is the strong advantage of a local LLM tool: freedom, privacy and flexibility at no financial cost.
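If you want to switch between the two tools without editing the fetch call each time, the server address and model can be pulled out into constants at the top of the script. A small sketch (the port just has to match whatever the local server is set to):

// Point this at whichever local server is running; GPT4All defaults to port 4891,
// and LM Studio can be configured to listen on the same port.
const BASE_URL = "http://localhost:4891";
// GPT4All uses this to pick the model file; LM Studio ignores it and uses the model loaded in its GUI.
const MODEL = "mistral-7b-instruct-v0.1.Q4_0.gguf";
// ...then in the script above, replace the hard-coded URL with:
// const response = await fetch(`${BASE_URL}/v1/chat/completions`, { ...same headers and body... });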

For BTT, by changing the BEHAVIOR_DESCRIPTION prompt you can set up different actions. I have a Technical Support action that answers technical questions, and another that acts as an AI Writing Editor to clarify English text. Bind them to hyper+1 and hyper+2 and both AIs are a keypress away in any app. You could also create different "personas" (William Shakespeare, Friedrich Wilhelm Nietzsche etc.) and use BTT to manage each.
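For example, the only thing that changes between these actions is the system prompt; the prompts below are just illustrative:

// Action bound to hyper+1: technical support assistant
const BEHAVIOR_DESCRIPTION = "You are a helpful technical support assistant. Answer technical questions concisely and accurately.";

// Action bound to hyper+2: writing editor (put this in a second copy of the action)
// const BEHAVIOR_DESCRIPTION = "You are an AI writing editor. Rewrite the text you are given in clear, correct English without changing its meaning.";

// Persona example:
// const BEHAVIOR_DESCRIPTION = "You are William Shakespeare. Answer in Early Modern English.";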