Conversation
…authored-by: openhands <openhands@all-hands.dev>
…sation startup

- ProfileManager manages ~/.openhands/llm-profiles/*.json (load/save/list/register)
- LocalConversation now calls ProfileManager.register_all to eagerly populate LLMRegistry

Co-authored-by: openhands <openhands@all-hands.dev>
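The commit above describes profiles as JSON files under ~/.openhands/llm-profiles that can be loaded, saved, listed, and registered. As a rough illustration only — `ProfileStore` and its method names here are invented for the sketch, not the SDK's actual API — such a manager might look like:

```python
import json
from pathlib import Path


class ProfileStore:
    """Illustrative stand-in for the commit's ProfileManager (names assumed)."""

    def __init__(self, profile_dir: str = "~/.openhands/llm-profiles"):
        self.profile_dir = Path(profile_dir).expanduser()

    def list(self) -> list[str]:
        # Profile ids are the *.json filenames without the extension
        return sorted(p.stem for p in self.profile_dir.glob("*.json"))

    def load(self, profile_id: str) -> dict:
        return json.loads((self.profile_dir / f"{profile_id}.json").read_text())

    def save(self, profile_id: str, config: dict) -> None:
        self.profile_dir.mkdir(parents=True, exist_ok=True)
        path = self.profile_dir / f"{profile_id}.json"
        path.write_text(json.dumps(config, indent=2))

    def register_all(self, registry: dict) -> None:
        # Eagerly populate a registry mapping, as LocalConversation is said
        # to do at conversation startup
        for pid in self.list():
            registry[pid] = self.load(pid)
```

A registry here is modeled as a plain dict; in the SDK it would be the LLMRegistry.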
- embed profile lifecycle APIs into the registry
- update persistence helpers, docs, and examples to use registry
- replace profile manager tests with registry profile coverage

Co-authored-by: openhands <openhands@all-hands.dev>

- note that LLMRegistry is the unified entry point for disk and runtime profiles
- mention how to override the profile directory when embedding the SDK

Co-authored-by: openhands <openhands@all-hands.dev>

- rename payload helpers to resolve_llm_profiles/compact_llm_profiles
- update conversation state to use clearer helper names
- drop the optional agent_settings convenience module and its tests

Co-authored-by: openhands <openhands@all-hands.dev>

- replace the _transform flag with dedicated _compact/_resolve helpers
- make compact_llm_profiles/resolve_llm_profiles easier to follow by delegating to the new helpers

Co-authored-by: openhands <openhands@all-hands.dev>
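The helper bodies aren't shown in this thread, but the idea — compact an inline LLM config down to its profile reference on save, and expand it back on load — can be sketched over plain dicts (a toy version; the real helpers operate on SDK payloads, and the field names are assumed):

```python
def _compact(llm: dict) -> dict:
    """Keep only the profile reference; the full config lives in the profile file."""
    return {"profile_id": llm["profile_id"]} if llm.get("profile_id") else llm


def _resolve(llm: dict, profiles: dict) -> dict:
    """Expand a bare profile reference back into the full config."""
    if set(llm) == {"profile_id"}:
        return {**profiles[llm["profile_id"]], "profile_id": llm["profile_id"]}
    return llm


def compact_llm_profiles(payload: dict) -> dict:
    # Delegates to the dedicated _compact helper, as the commit describes
    return {**payload, "llm": _compact(payload.get("llm", {}))}


def resolve_llm_profiles(payload: dict, profiles: dict) -> dict:
    return {**payload, "llm": _resolve(payload.get("llm", {}), profiles)}
```

The round-trip property is the point: `resolve_llm_profiles(compact_llm_profiles(p), profiles)` should restore the full config for any payload whose profile is on disk.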
Bring in new package layout and port LLM profile switching support.
Revert the in-progress switch_llm helpers and tests; agent-sdk-18 branch now only contains LLM profile persistence.
Example 25 now performs a read/write/delete workflow and verifies the persisted profile reference.
- move inline/profile compaction into LLM serializer/validator
- use model_dump_json context in ConversationState persistence
- add persistence settings module and cover profile reference tests
- document persistence comparison and recommendations
context={"expose_secrets": True},
)
llm_payload = agent_payload.get("llm")
if isinstance(llm_payload, dict) and llm_payload.get("profile_id"):
@OpenHands Why are we using profile_id, didn't we have a kind of profile_ref? Please investigate the entire diff, understand the code patterns, and fix.
I'm on it! enyst can track my progress at all-hands.dev
Final update:
Summary (new since last update):
- No additional changes since the previous summary; branch is up to date and already pushed.
Checklist:
- Confirmed the request is fully addressed.
- Verified there are no extra or unrelated changes (diff vs. remote is empty).
- Changes are concise and already pushed to the PR branch.
If you want any further adjustments or additional tests run, just let me know.
Co-authored-by: OpenHands Bot <contact@all-hands.dev>
Break the main function into script-level code to save a level of indentation. Also document this preference in AGENTS.md. Co-authored-by: openhands <openhands@all-hands.dev>
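The style preference this commit documents — examples as flat, script-level code rather than a `main()` wrapper — in a before/after nutshell (generic names, not from the repo):

```python
# Before: everything in the example is nested one level inside main()
def main() -> None:
    message = "profile saved"
    print(message)

if __name__ == "__main__":
    main()

# After: the same work at script level, saving an indentation level.
# In an example script, the guard and wrapper are dropped entirely.
message = "profile saved"
print(message)
```

For library code the wrapper still makes sense; the preference applies to standalone example scripts where the extra nesting only adds noise.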
Looks like there are a few issues preventing this PR from being merged!
If you'd like me to help, just leave a comment. Feel free to include any additional details that might help me get this PR into a better state. You can manage your notification settings.
[Automatic Post]: It has been a while since there was any activity on this PR. @enyst, are you still working on it? If so, please go ahead, if not then please request review, close it, or request that someone else follow up.

1 similar comment
This PR has been the oldest experiment, I think, with LLM profiles. Meanwhile, I've integrated profiles in a fun TypeScript version of the SDK: that is in a sense a full-featured version (switch on the fly, define on the fly, use multiple LLMs for tiny user-friendly functionality during the run), but also a somewhat simplified design (e.g. there is always a default profile, usage_ids are hardcoded, and some are unused). Maybe we should simplify here too, though maybe not so much; we'll see as we move forward. @VascoSch92 has added the first version of profiles, and we'll look at the next steps. Maybe this comment is something we'll get back to. We'll see!
@OpenHands Look at the llm profiles on Look also for the PR adding switch llm. Read the PRs. Make a plan of adding the behavior in those two PRs to main. Take your time to investigate and split into chunks that work all the time. Post your plan as a comment to this PR. Note that it will be rendered as markdown.
I'm on it! enyst can track my progress at all-hands.dev
Investigation notes (current
I investigated the current “LLM profiles” feature on

What I did:

Where the plan is posted:

No code changes were made in the repository for this request; I only performed investigation and posted the plan comment.
[Automatic Post]: It has been a while since there was any activity on this PR. @enyst, are you still working on it? If so, please go ahead, if not then please request review, close it, or request that someone else follow up.

3 similar comments
@OpenHands Understand this PR very well, then merge main into it carefully (fix conflicts, but also keep an eye out for redundancies that the commits on main may have introduced).
I'm on it! enyst can track my progress at all-hands.dev
Resolve the LLM profile branch conflicts against current main, keep registry profile-ref restore behavior intact, and preserve the current wsproto auth test compatibility setup. Co-authored-by: openhands <openhands@all-hands.dev>
Python API breakage checks — ✅ PASSED
Result: ✅ PASSED

REST API breakage checks (OpenAPI) — ✅ PASSED
Result: ✅ PASSED
Final summary of new work since the last summary:
Conciseness check:
HUMAN:
LLM Profiles behavior
- usage_id
- profile_id is set by the user, and corresponds to llm_profiles/profile_id.json in persistence_dir (if set)
- LLM_PROFILES_DIR

Summary

- LLMRegistry, exposing list/load/save/register/validate helpers with configurable profile directories

Testing
Related
Agent Server images for this PR
• GHCR package: https://github.com/OpenHands/agent-sdk/pkgs/container/agent-server
Variants & Base Images
- eclipse-temurin:17-jdk
- nikolaik/python-nodejs:python3.13-nodejs22-slim
- golang:1.21-bookworm

Pull (multi-arch manifest)
# Each variant is a multi-arch manifest supporting both amd64 and arm64
docker pull ghcr.io/openhands/agent-server:d1a4998-python

Run
All tags pushed for this build
About Multi-Architecture Support
- Each variant tag (e.g. d1a4998-python) is a multi-arch manifest supporting both amd64 and arm64
- Architecture-specific tags (e.g. d1a4998-python-amd64) are also available if needed