### Current Situation/Problem

* The `llm` script has no abstraction layer for model backends.
* All requests assume an OpenAI-style API (prompt → completion).
* Users running self-hosted models (e.g., LLaMA via Ollama, vLLM, or LocalAI) cannot integrate easily.

### Proposed Feature

* **Backend Abstraction Layer:** Introduce a pluggable model provider interface (e.g., `provider: "openai" | "llama" | "custom"`).
* **Configurable Model Selection:** Allow setting the model provider and model name in config or via CLI flags, as sketched below.
### Alternatives (Optional)

_No response_

### Additional Information

_No response_