h@llo.ai - Upcoming AI Features

I confirm that screenshot_window and ocr_window work now in v5.634, thanks!

I found a bug. When I select (enable) the "retrieve_app_menubar_items - Retrieve menubar items of an app" in the "UI Automation" section, I get an error:

When I disable that action, I don't get an error from the AI.

From Proxyman, I see there's this JSON response:

{
  "error": {
    "message": "Provider returned error",
    "code": 400,
    "metadata": {
      "raw": "{\n  \"error\": {\n    \"code\": 400,\n    \"message\": \"Unable to submit request because required fields ['menubar_path'] are not defined in the schema properties.\",\n    \"status\": \"INVALID_ARGUMENT\"\n  }\n}\n",
      "provider_name": "Google"
    }
  },
  "user_id": "redacted"
}

It looks like something's wrong with menubar_path:

"Unable to submit request because required fields ['menubar_path'] are not defined in the schema properties."
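In JSON Schema terms, that error means a name listed in `required` does not appear under `properties`. A minimal sketch of the check (the schemas below are invented for illustration, not BTT's actual tool definitions):

```python
# Illustrative check for the Gemini error above: every field listed in
# "required" must also be declared under "properties". These schemas are
# made-up examples, not BTT's actual tool definitions.

def missing_required_fields(schema: dict) -> list[str]:
    """Return names in "required" that are absent from "properties"."""
    properties = schema.get("properties", {})
    return [f for f in schema.get("required", []) if f not in properties]

# A broken schema of the kind the error message suggests:
broken = {
    "type": "object",
    "properties": {"app_name": {"type": "string"}},
    "required": ["app_name", "menubar_path"],  # menubar_path is undeclared
}

print(missing_required_fields(broken))  # → ['menubar_path']
```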

interesting, I can see the issue (should be fixed with 5.635, uploading now). Was this assistant configured without the execute JavaScript tool?

The assistant was configured without the execute javascript tool. Here's how I configured it:




I confirm that v5.635 fixed the "retrieve_app_menubar_items - Retrieve menubar items of an app" issue, thanks!

thank you! (in that case the error makes sense, if the execute JS tool is not added it will fall back to direct tool usage)

Btw. when copying & pasting or exporting an assistant, it should automatically exclude the API keys from the JSON so these assistant configurations can be shared. After import, a UI would then pop up asking for the API keys.
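The suggested export behavior could look roughly like this. A hedged sketch: the key names and the config layout here are assumptions, not BTT's actual export format.

```python
# Sketch: blank out API-key fields before sharing an assistant config.
# SENSITIVE_KEYS and the config shape are assumptions for illustration,
# not BTT's real export format.
import json

SENSITIVE_KEYS = {"apiKey", "api_key", "apiSecret"}

def strip_api_keys(obj):
    """Recursively blank sensitive fields in a JSON-like structure."""
    if isinstance(obj, dict):
        return {k: ("" if k in SENSITIVE_KEYS else strip_api_keys(v))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [strip_api_keys(v) for v in obj]
    return obj

config = {"name": "My Assistant", "provider": {"apiKey": "sk-secret"}}
shareable = json.dumps(strip_api_keys(config))
```

On import, any blanked field would then trigger the "ask for API key" prompt.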

great! I've been really careful trying to not accidentally expose sensitive data :sweat_smile:

yep, I should make these API key fields secure input fields so users won't accidentally share screenshots of their keys.

1. Please consider moving the "Copy" button from the right side of the reply content to the bottom of the reply.

2. Please add an edit button to the right side of the question. When clicked, the previously asked question would be loaded into the input box, making it easy to revise and resend.

I am really excited to test it and share results, but I tried very hard to make it work with any free API and couldn't. Maybe it's regional restrictions or my lack of API experience, but even with the information in this forum I still get error 0 with a Gemini API key.

@andreypetr Ah, I had the wrong API URL for Gemini OpenAI compatibility. The correct URL is:

https://generativelanguage.googleapis.com/v1beta/openai/v1/chat/completions

I'll soon add support for the native Gemini API format so we don't need to go through the OpenAI compatibility layer.
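For anyone setting this up, a request against that compatibility endpoint is just the standard OpenAI chat-completions shape. A minimal sketch (the model name and the GEMINI_API_KEY placeholder are illustrative; nothing is actually sent here):

```python
# Sketch of an OpenAI-style chat-completions request against the Gemini
# compatibility endpoint from the post above. The model name and the
# GEMINI_API_KEY placeholder are assumptions for illustration.
import json

URL = "https://generativelanguage.googleapis.com/v1beta/openai/v1/chat/completions"

def build_request(prompt: str, model: str = "gemini-2.0-flash") -> dict:
    """Assemble headers and JSON body for a chat-completions call."""
    return {
        "url": URL,
        "headers": {
            "Authorization": "Bearer GEMINI_API_KEY",  # placeholder key
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_request("Hello!")
```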

Have you added it so far?

I got a full BTT crash when trying to use H@llo to make an automation:

It just hung after initial contact with my API provider, with the beach ball of death, and since it doesn't seem to be threaded, it stopped BTT itself from working. A simple question works, so something about the tool use or other features may be to blame? v5.698 on Tahoe.

It seems that LMStudio always receives the POST messages from BTT in the OpenAI Responses Compatible format, even when the OpenAI Completions or Compatible API type is selected. I tried using a local model but I get an error about the computer_use_preview tool not being supported (I think):

[Server Error] {
  "error": {
    "message": "Invalid discriminator value. Expected 'function' | 'mcp'",
    "type": "invalid_request_error",
    "param": "tools.0.type",
    "code": "invalid_union_discriminator"
  }
}

trying with local model 'openai/gpt-oss-20b'

@poof that's weird. I'll try with that model. Which local URL are you using?

Unrelated to that a quick status update:
Work on h@llo.ai had paused for a while but will continue very soon. There have been some interesting developments recently that make this project much more viable again :slight_smile:

Hi @Andreas_Hegenberg, the documentation says to connect with http://127.0.0.1:1234/v1/chat/completions, so I used the "Open AI Completions or Compatible" setting, but I got the LM Studio log error "'messages' field is required". The POST request looked more like it was meant for the v1/responses endpoint, so I switched to "Open AI Responses Compatible" and changed the URL to http://127.0.0.1:1234/v1/responses. That gave me the LM Studio log error I posted before.

I'm thinking of using a proxy to transform the request, or do you have a solution I could use? The tools[0].type seems to be the problem, but I don't see an option to change it.

The proxy filtering out the 'computer_use_preview' tool helped, but now I get the reasoning and the response duplicated in the chat: "Need greeting.Need greeting.Hello! How can I help you today?Hello! How can I help you today?"

I was able to fix the duplication in the proxy by filtering out the .done events from the stream. I also added some markdown formatting to the reasoning stream ("Need greeting") and some new lines to make it look like this (though this is something you will change in BTT of course, haha).
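The two proxy fixes described above can be sketched roughly like this, assuming the OpenAI Responses wire format (the tool and event shapes are simplified assumptions, not LM Studio's or BTT's exact payloads):

```python
# Sketch of the two proxy transformations: drop tool types the local
# server rejects from the request, and drop ".done" SSE events so text
# isn't emitted twice. Event/tool shapes are simplified assumptions.
UNSUPPORTED_TOOL_TYPES = {"computer_use_preview"}

def filter_tools(request: dict) -> dict:
    """Remove tool entries the local server rejects."""
    request = dict(request)
    request["tools"] = [t for t in request.get("tools", [])
                        if t.get("type") not in UNSUPPORTED_TOOL_TYPES]
    return request

def filter_stream(events):
    """Drop *.done events so deltas aren't duplicated in the chat."""
    for event in events:
        if not event.get("type", "").endswith(".done"):
            yield event

req = {"tools": [{"type": "computer_use_preview"},
                 {"type": "function", "name": "run_javascript"}]}
events = [{"type": "response.output_text.delta", "delta": "Hi"},
          {"type": "response.output_text.done", "text": "Hi"}]
```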

Let me check what's wrong, I'll get to it later today or tomorrow!

Just letting you know that I noticed the system prompt isn't being sent to LM Studio: { "role": "system", "content": "You are..." }. Because the '{BTT_TOOLS_AND_CONTEXT}' placeholder needs to be filled in by BTT, I can't resolve this myself, right?

If I paste the system prompt in lmstudio I get responses with this:
<|channel|>commentary to=run_javascript <|constrain|>json<|message|>{ "script": "console.log('Hello');" }

I understand this is still very much in development; I just wanted to give you some user feedback.

It seems that the endpoint to OpenRouter ( Model Not Found | OpenRouter ) cannot be used. I have tested with OpenAI (https://api.openai.com/v1/chat/completions) and it ran smoothly. Is there anything we can do on our end to get this running? I used "Open AI Completions Or Compatible".

edit: for some reason, I can't put the URL here without it turning into "Model Not Found | OpenRouter". Please inspect it to see the URL, sorry for the trouble.

@poof there was indeed a bug in the api type selection that caused it to use the wrong api style in some situations. This should be resolved in 5.785 (uploading)

This kind of stuff is apparently LMStudio's tool-calling syntax. I haven't added support for that yet, but I should be able to add it very soon; not sure why they don't support OpenAI-style tool calling though.

<|channel|>commentary to=run_javascript <|constrain|>json<|message|>{ "script": "console.log('Hello');" }
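For anyone curious, a line like that could be parsed into a regular tool call with something along these lines (the token layout is inferred from this single example, so real output may vary):

```python
# Hedged sketch: parse the harmony-style tool-call line above into a
# tool name and JSON arguments. The pattern is inferred from one
# example and may not cover every variant the model emits.
import json
import re

PATTERN = re.compile(
    r"<\|channel\|>commentary to=(?P<tool>\S+)\s*"
    r"<\|constrain\|>json<\|message\|>(?P<args>\{.*\})",
    re.DOTALL,
)

def parse_tool_call(text: str):
    """Extract (tool_name, args_dict) from a harmony-style tool call."""
    m = PATTERN.search(text)
    if not m:
        return None
    return m.group("tool"), json.loads(m.group("args"))

sample = ('<|channel|>commentary to=run_javascript '
          '<|constrain|>json<|message|>{ "script": "console.log(\'Hello\');" }')
```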

AFAIK Ollama supports the standard OpenAI tool calling.