Conversation
Summary of Changes

Hello @sohaieb, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the platform's capabilities by integrating LMStudio for local large language model operations, giving users more flexibility in managing their LLM workflows. Additionally, it incorporates Mise for streamlined development environment setup, ensuring consistent project dependencies across different environments.

Highlights
Changelog
Activity
Code Review
This pull request adds support for Mise and LMStudio, which is a great addition. The implementation is solid, introducing new credentials and nodes for chat, LLM, and embeddings. I've made a few suggestions to improve the robustness and consistency of the new LMStudio nodes. Specifically, I've pointed out a couple of places where unsafe parsing of numeric inputs could lead to runtime issues and suggested a more robust way to handle optional parameters. I also recommended making the credential for the LMStudio LLM node optional to align with the other LMStudio nodes and common usage patterns. Overall, great work on expanding Flowise's capabilities.
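To illustrate the numeric-parsing concern (an illustrative sketch, not the actual Flowise node code; the helper name is hypothetical): calling `parseFloat` on an empty or malformed optional field yields `NaN` (and `Number('')` silently yields `0`), which can then leak into the request payload. A guarded helper avoids both pitfalls:

```javascript
// Hypothetical helper: parse an optional numeric node parameter safely.
// Returns undefined for empty or malformed input instead of propagating NaN,
// so the parameter can simply be omitted from the request payload.
function parseOptionalNumber(raw) {
    if (raw === undefined || raw === null || String(raw).trim() === '') {
        return undefined // note: Number('') would have returned 0 here
    }
    const value = Number(raw)
    return Number.isFinite(value) ? value : undefined
}

console.log(parseOptionalNumber('0.7')) // 0.7
console.log(parseOptionalNumber(''))    // undefined
console.log(parseOptionalNumber('abc')) // undefined
```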
What is Mise? Why is it needed here?
Hi @HenryHengZJ,
Mise (Official Docs) helps manage tools like Node and Python per project. It installs and configures them locally and prevents version conflicts. Please check the docs and let me know whether we should include it in Flowise, so I can keep it in or remove it from the PR.
It would be great if we could just keep this PR for the LMStudio integration.

@HenryHengZJ Sure, I will remove the Mise integration accordingly and update the branch, as it is outdated. [UPDATE]
@HenryHengZJ Tested this PR:
As for the 0 usage data: as far as I know, we only measure usage for LLM or chat models, not for embeddings. In my testing, LMStudio Embeddings worked well, as shown in the screenshots above.
Hi @0xi4o, I'm not 100% sure this is the issue, but could you double-check something for me: can you look at the stored vector column values? Another hint: try asking the AI a few different questions. If you notice that the document store keeps returning results from the same (first) embedding entry, even when the question changes, it could be related. From what I understand, if all embedding values are 0, retrieval cannot distinguish between entries. If that's the case, I'm not sure whether the problem occurs on the Flowise side or on the LMStudio side, but as a reference I opened the following thread if you need more details: Embedding response usage (prompt_tokens, total_tokens) stats are always 0.
@sohaieb You're right! I was able to verify that the values are all zeros, and the chat model kept returning the same chunks in the responses for different questions. On further investigation, I found that this is a known issue with The fix is to set
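For context (an illustrative sketch, not code from this PR): when every stored embedding is an all-zeros vector, cosine similarity degenerates to the same score for every chunk, so the vector store's ranking is meaningless and the same (first) entries keep coming back regardless of the question:

```javascript
// Cosine similarity between two vectors. A zero vector has zero norm,
// so similarity is undefined; here we treat it as 0 for every query.
function cosineSimilarity(a, b) {
    let dot = 0, normA = 0, normB = 0
    for (let i = 0; i < a.length; i++) {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    if (normA === 0 || normB === 0) return 0
    return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

// All-zero "embeddings" (the buggy LMStudio output) score identically
// against any query, so ranking falls back to insertion order.
const storedEmbeddings = [
    [0, 0, 0], // chunk 0
    [0, 0, 0], // chunk 1
]
const query = [0.12, -0.7, 0.33]
const scores = storedEmbeddings.map((e) => cosineSimilarity(e, query))
console.log(scores) // [0, 0] — no way to distinguish chunks
```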
```javascript
if (input.trim()) {
    inputHistory.addToHistory(input)
}
```
This works as expected when selectedInput is an object and gets added to input history. However, the only place where selectedInput will be an object is when setting input type to form in Agentflow Start Node. The purpose of the form input type is for flow builders to collect data from their users. So it's better to not save object input to input history.
Let's revert this change.
@0xi4o, thank you for your comment.
Just to clarify why I moved this line after the condition check: in both cases, we still convert input to a string, even when selectedInput is an object, because of this block:
```javascript
} else if (typeof selectedInput === 'object') {
    input = Object.entries(selectedInput)
        .map(([key, value]) => `${key}: ${value}`)
        .join('\n')
}
```

Why this mattered in my case: once the Allow Image Uploads option is enabled and an image is uploaded together with a message, the written message is lost from the history. It is never added to the history at all, because uploading an image converts both the image and the text message into a single object.
For that reason, and after checking both cases, I moved the call afterward. This should fix the issue.
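A small runnable sketch of the flattening described above (the object shape is illustrative, not the exact Flowise state):

```javascript
// When Allow Image Uploads is on, selectedInput arrives as an object
// combining the uploaded image and the typed message (illustrative shape).
const selectedInput = { image: 'photo.png', message: 'describe this image' }

let input
if (typeof selectedInput === 'string') {
    input = selectedInput
} else if (typeof selectedInput === 'object') {
    // Flatten "key: value" pairs into one string, as in the snippet above
    input = Object.entries(selectedInput)
        .map(([key, value]) => `${key}: ${value}`)
        .join('\n')
}

// After the conversion, input is always a string, so the subsequent
// input.trim() check in the history code works for both cases.
console.log(input)
// image: photo.png
// message: describe this image
```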
So, do we still need to revert the changes in this case, or can we leave them? Please let me know which option you prefer.
Thanks 🙏







Changes
Important note (Issue)
[BACKGROUND]
All LMStudio nodes work well; however, the LMStudio embeddings are unstable: the vector column values are always stored as "0". I'm not sure, but it seems like Flowise relies on these values. I tried to simulate the same embedding process with Ollama, and it returns correct values. BTW, I opened an issue with LMStudio about this, please check it out HERE.

[UPDATE]
Regarding the previously mentioned issue, and based on @0xi4o's comment:
The issue should be fixed now with this Pull Request, based on the following Thread Comment.