
feat: add LMStudio #5810

Open
sohaieb wants to merge 7 commits into FlowiseAI:main from sohaieb:feat/add-lmstudio-and-mise-support

Conversation


@sohaieb sohaieb commented Feb 20, 2026

Changes

  • Add LMStudio nodes support
  • Fix an issue when navigating between multiple lines in the chat textarea
  • Fix adding user input to chat history

Important note (Issue)

[BACKGROUND]
All LMStudio nodes work well; however, the LMStudio embeddings are unstable: the vector column values are always stored as "0".
I'm not sure, but it seems Flowise relies on these values?
I tried to simulate the same embedding process with Ollama, and it returns correct values.
By the way, I opened an issue with LMStudio about this topic, please check it out HERE

[UPDATE]
Regarding the previously mentioned issue, and based on @0xi4o's comment:

this is a known issue with @langchain/openai sdk and LM Studio: langchain-ai/langchainjs#8221

The issue should be fixed now with this Pull Request, based on the following Thread Comment.

@gemini-code-assist

Summary of Changes

Hello @sohaieb, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the platform's capabilities by integrating LMStudio for local large language model operations, providing users with more flexibility in managing their LLM workflows. Additionally, it incorporates Mise for streamlined development environment setup, ensuring consistent project dependencies across different environments.

Highlights

  • LMStudio Integration: Introduced comprehensive support for LMStudio, enabling the use of local large language models for both chat and embedding functionalities within the application. This includes a new credential type for LM Studio API keys and dedicated nodes for chat models, LLMs, and embeddings.
  • Mise Support: Added configuration for Mise, a tool version manager, specifying Node.js version 20 to ensure consistent development environment setup.
Changelog
  • mise.toml
    • Added configuration for Node.js version 20.
  • packages/components/credentials/LmStudioApi.credential.ts
    • Added a new credential class for LM Studio API keys.
  • packages/components/nodes/chatmodels/ChatLmStudio/ChatLmStudio.ts
    • Implemented a new chat model node for LM Studio, allowing configuration of various parameters like temperature, model name, and streaming.
  • packages/components/nodes/chatmodels/ChatLmStudio/FlowiseChatLmStudio.ts
    • Created a custom wrapper class, FlowiseChatLmStudio, extending Langchain's ChatOpenAI to integrate LM Studio chat models with Flowise-specific functionalities like multi-modal options.
  • packages/components/nodes/embeddings/LMStudioAIEmbedding/LmStudioEmbedding.ts
    • Introduced a new embedding node for LM Studio, enabling the generation of local embeddings with configurable base URL and model name.
  • packages/components/nodes/llms/LmStudio/LmStudio.ts
    • Added a new LLM node for LM Studio, providing a wrapper for LM Studio large language models with extensive parameter customization.
Activity
  • The author noted that LMStudio embeddings are currently unstable, with vector column values consistently showing "0".
  • The instability is attributed to LMStudio's API response returning prompt_tokens and total_tokens as 0, which might be critical for how Flowise processes embeddings.
  • The author requested confirmation from reviewers regarding the dependency on these token usage values for embedding functionality.
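The symptom described in the bullets above can be checked mechanically. Below is a minimal sketch, assuming the standard OpenAI-compatible /v1/embeddings response shape; the interface and helper name are assumptions for illustration, not code from the PR:

```typescript
// Assumed shape of an OpenAI-compatible /v1/embeddings response.
interface EmbeddingsResponse {
    data: { embedding: number[]; index: number }[]
    usage: { prompt_tokens: number; total_tokens: number }
}

// Detects the failure mode reported above: every returned vector is all zeros.
function hasAllZeroEmbeddings(res: EmbeddingsResponse): boolean {
    return res.data.length > 0 && res.data.every((d) => d.embedding.every((v) => v === 0))
}
```

A guard like this, run once after the first embedding call, would surface the problem before zero vectors ever reach the vector store.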
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Supported commands:

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for Mise and LMStudio, which is a great addition. The implementation is solid, introducing new credentials and nodes for chat, LLM, and embeddings. I've made a few suggestions to improve the robustness and consistency of the new LMStudio nodes. Specifically, I've pointed out a couple of places where unsafe parsing of numeric inputs could lead to runtime issues and suggested a more robust way to handle optional parameters. I also recommended making the credential for the LMStudio LLM node optional to align with the other LMStudio nodes and common usage patterns. Overall, great work on expanding Flowise's capabilities.
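The review's point about unsafe parsing of numeric inputs can be illustrated with a small sketch. Flowise node inputs typically arrive as strings, and a bare parseFloat on an empty or malformed value yields NaN, which then propagates into model params. The helper below is a hypothetical example of the safer pattern, not code from the PR diff:

```typescript
// Hypothetical helper illustrating robust handling of optional numeric node inputs.
function parseOptionalFloat(value: unknown): number | undefined {
    // Treat missing/empty inputs as "not provided" rather than NaN.
    if (value === undefined || value === null || value === '') return undefined
    const parsed = typeof value === 'number' ? value : parseFloat(String(value))
    return Number.isNaN(parsed) ? undefined : parsed
}
```

With this, an omitted Temperature field simply leaves the SDK default in place instead of passing NaN to the model constructor.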

Comment thread packages/components/nodes/chatmodels/ChatLmStudio/ChatLmStudio.ts
Comment thread packages/components/nodes/llms/LmStudio/LmStudio.ts
Comment thread packages/components/nodes/llms/LmStudio/LmStudio.ts
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch from 56780fb to 29a91da Compare February 21, 2026 16:18
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch from 3872592 to 5f3c1a1 Compare February 24, 2026 12:34
@sohaieb sohaieb changed the title feat: add LMStudio & Mise support feat: add LMStudio, Mise support, fix chat input multiline cursor navigation issue Feb 24, 2026
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch 3 times, most recently from 391d0c9 to 8770d19 Compare February 24, 2026 22:29
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch from 8770d19 to 14993e3 Compare February 25, 2026 15:49
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch 2 times, most recently from 923b7c6 to 983f38d Compare February 26, 2026 14:05
@HenryHengZJ

what is Mise? why is it needed here?


sohaieb commented Feb 27, 2026

Hi @HenryHengZJ,

what is Mise? why is it needed here?

Mise (Official Docs) helps manage tools like Node and Python per project. It installs and configures them locally and prevents version conflicts.

Please check the docs and let me know if we should include it in Flowise, so I can keep or remove it from the PR.

@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch 4 times, most recently from 27d3fab to a7edb16 Compare March 3, 2026 10:20
@HenryHengZJ

it would be great if we can just keep this PR for LMStudio integration


sohaieb commented Mar 16, 2026

it would be great if we can just keep this PR for LMStudio integration

@HenryHengZJ sure, I will remove the mise integration accordingly and update the branch as it is outdated

[UPDATE]
I updated the PR to remove Mise support ✅

@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch from a7edb16 to 0d41fb8 Compare March 16, 2026 12:49
@sohaieb sohaieb changed the title feat: add LMStudio, Mise support, fix chat input multiline cursor navigation issue feat: add LMStudio Mar 16, 2026
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch 2 times, most recently from f791ec7 to a33392f Compare March 23, 2026 23:14
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch from 367d231 to e2e7216 Compare April 1, 2026 08:08
Comment thread packages/components/credentials/LmStudioApi.credential.ts
Comment thread packages/components/nodes/chatmodels/ChatLmStudio/ChatLmStudio.ts Outdated

0xi4o commented Apr 1, 2026

@HenryHengZJ Tested this PR:

[Six screenshots from 2026-04-01, 8:36–8:44 PM, showing the tested flows]

As for the 0 usage data: afaik, we only measure usage for llm or chat models, not for embeddings. In my testing, using LMStudio Embeddings worked well as shown in the screenshots above.

Comment thread packages/ui/src/views/chatmessage/ChatMessage.jsx
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch from e2e7216 to 1150b8b Compare April 2, 2026 07:59
@sohaieb sohaieb requested a review from 0xi4o April 2, 2026 09:31

sohaieb commented Apr 2, 2026

@HenryHengZJ Tested this PR:

[screenshots]
...
As for the 0 usage data: afaik, we only measure usage for llm or chat models, not for embeddings. In my testing, using LMStudio Embeddings worked well as shown in the screenshots above.

Hi @0xi4o

I'm not 100% sure this is the issue, but could you double-check something for me? Look at the embeddings column in your database; you might find that the values are all 0,0,0,.... Please confirm whether that's the case.

Another hint: try asking the AI a few different questions. If you notice that the document store keeps returning results from the same (first) embedding entry, even when the question changes, it could be related.

From what I understand, if all embedding values are 0,0,0,..., the similarity search won’t work properly and will just fall back to the first entries. (But feel free to correct me if I’m wrong!)

If that's the case, I'm not sure whether the problem occurs on the Flowise side or the LM Studio side, but for reference I opened the following thread if you need more details: (Embedding response usage (prompt_tokens, total_tokens) stats are always 0)
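The fallback behavior described above follows directly from the similarity math. A minimal sketch of cosine similarity (illustrative only, not Flowise code) shows why all-zero vectors degenerate:

```typescript
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
    const dot = a.reduce((sum, x, i) => sum + x * b[i], 0)
    const normA = Math.sqrt(a.reduce((sum, x) => sum + x * x, 0))
    const normB = Math.sqrt(b.reduce((sum, x) => sum + x * x, 0))
    // A zero vector has no direction: every comparison collapses to the same score.
    if (normA === 0 || normB === 0) return 0
    return dot / (normA * normB)
}
```

When every stored embedding is 0,0,0,..., every document scores identically against any query, so ranking is meaningless and the store effectively returns rows in insertion order, i.e. the first entries.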


0xi4o commented Apr 3, 2026

@sohaieb You're right! I was able to verify that the values are all zeros. And the chat model kept returning the same chunks in the responses for different questions.

On further investigation, I found that this is a known issue with @langchain/openai sdk and LM Studio: langchain-ai/langchainjs#8221

The fix is to set encodingFormat explicitly in model params in LmStudioEmbedding.ts.
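A sketch of what that fix might look like. The field names below follow @langchain/openai's OpenAIEmbeddings params as I understand them, and the base URL and model name are placeholders; the exact surface may differ by SDK version, so treat this as an assumption rather than the PR's actual code:

```typescript
// Hypothetical param builder for the LM Studio embedding node (names assumed).
interface LmStudioEmbeddingParams {
    modelName: string
    encodingFormat: 'float' | 'base64'
    configuration: { baseURL: string }
}

function buildLmStudioEmbeddingParams(baseURL: string, modelName: string): LmStudioEmbeddingParams {
    return {
        modelName,
        // Per langchain-ai/langchainjs#8221, leaving the encoding format implicit
        // makes LM Studio responses surface as all-zero vectors; requesting
        // plain floats explicitly avoids the mismatch.
        encodingFormat: 'float',
        configuration: { baseURL }
    }
}
```

An object like this would then be spread into the new OpenAIEmbeddings(...) call inside LmStudioEmbedding.ts.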

@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch 2 times, most recently from 23b9941 to 3e9704c Compare April 3, 2026 17:41
Comment on lines +1032 to +1035
if (input.trim()) {
inputHistory.addToHistory(input)
}


@0xi4o 0xi4o Apr 7, 2026


This works as expected when selectedInput is an object and gets added to input history. However, the only place where selectedInput will be an object is when setting input type to form in Agentflow Start Node. The purpose of the form input type is for flow builders to collect data from their users. So it's better to not save object input to input history.

Let's revert this change.

@sohaieb sohaieb Apr 7, 2026


@0xi4o, thank you for your comment.

Just to clarify why I moved this line after the condition check: in both cases, we still convert input to a string, even when selectedInput is an object, because of this block:

} else if (typeof selectedInput === 'object') {
    input = Object.entries(selectedInput)
        .map(([key, value]) => `${key}: ${value}`)
        .join('\n')
}

Why this mattered in my case: once the Allow Image Uploads option is enabled and an image is uploaded together with a message, the written message is lost from the history, because it is never added at all: uploading an image converts both the image and the text message into a single object.

For that reason, and after checking both cases, I moved it afterward. This should fix the issue.

So, do we still need to revert the change in this case, or can we leave it? Please let me know which option you prefer.
Thanks 🙏
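The reordering described above can be sketched as a pure function. The names and shapes below are assumed from the quoted snippet, not the actual ChatMessage.jsx code:

```typescript
// selectedInput is a plain string, or an object when an image upload accompanies the text.
type SelectedInput = string | Record<string, string>

function toHistoryEntry(selectedInput: SelectedInput): string {
    let input: string
    if (typeof selectedInput === 'object') {
        // Flatten { image + text message } to "key: value" lines first...
        input = Object.entries(selectedInput)
            .map(([key, value]) => `${key}: ${value}`)
            .join('\n')
    } else {
        input = selectedInput
    }
    // ...so the history check runs on the flattened string and the message isn't dropped.
    return input.trim() ? input : ''
}
```

With the check moved after the conversion, a message sent alongside an image still lands in the input history; the open question in the thread is whether object-derived entries (such as Agentflow form input) should be excluded.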

@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch 3 times, most recently from 3614153 to 310e658 Compare April 11, 2026 21:15
@sohaieb sohaieb force-pushed the feat/add-lmstudio-and-mise-support branch from 310e658 to 21622d4 Compare April 19, 2026 10:11