Problem
Currently, the platform is limited to a few specific LLM providers. I would love to see an option to add custom LLM providers.
Since the vast majority of emerging AI platforms and local inference servers (such as Groq, Together AI, OpenRouter, LM Studio, and vLLM) have adopted the standard OpenAI API schema, adding a generic "Custom Provider" option would be incredibly powerful.
Proposed Solution
Add a "Custom OpenAI-Compatible Provider" option in the LLM settings where users can simply define:
Base URL (e.g., https://api.groq.com/openai/v1)
API Key
Model Name (e.g., llama3-70b-8192)
This would instantly unlock dozens of external providers and local models without the maintainers needing to build and maintain native integrations for each one.
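To illustrate the idea, here is a minimal sketch of what such a provider setting could look like. Everything here is hypothetical (the `CustomProvider` name, fields, and `build_request` helper are illustrative, not the platform's actual API); it only shows that the three fields above are enough to target any OpenAI-compatible `/chat/completions` endpoint:

```python
from dataclasses import dataclass
import json
import urllib.request


@dataclass
class CustomProvider:
    """Hypothetical user-defined settings for an OpenAI-compatible provider."""
    base_url: str  # e.g. https://api.groq.com/openai/v1
    api_key: str
    model: str     # e.g. llama3-70b-8192

    def build_request(self, messages: list[dict]) -> urllib.request.Request:
        """Build a request against the standard chat-completions endpoint."""
        url = f"{self.base_url.rstrip('/')}/chat/completions"
        body = json.dumps({"model": self.model, "messages": messages}).encode()
        headers = {
            "Authorization": f"Bearer {self.api_key}",
            "Content-Type": "application/json",
        }
        return urllib.request.Request(url, data=body, headers=headers)


# The same code path works for Groq, OpenRouter, LM Studio, vLLM, etc. --
# only the three configuration values change.
provider = CustomProvider(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_API_KEY",
    model="llama3-70b-8192",
)
req = provider.build_request([{"role": "user", "content": "Hello"}])
```

Because every listed provider accepts this same request shape, one generic integration replaces N provider-specific ones.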
Alternatives Considered
No response
Additional Context
No response
Willing to Help Implement?