
AmI HMAS

AmI HMAS is a proof of concept for goal-driven interaction in smart environments using Hypermedia Multi-Agent Systems (HMAS), Web of Things Thing Descriptions, LLM-based agents, and signifiers.

The main idea is simple: instead of making a user control every device separately, the system should be able to understand a higher-level request such as "It's too dark in here", reason about the environment, propose a plan, ask for confirmation, and then execute the necessary actions.

This repository accompanies the demo paper AmI HMAS: goal-driven interaction support in Smart Environments, presented at EUMAS 2025, and contains the proof-of-concept implementation of the HMAS architecture described there.

A short demo video of the system is available here: Conference demo video

(Figure: AmI HMAS architecture)

Overview

This project is built around a simulated smart lab called lab308, running on top of the Yggdrasil Hypermedia MAS platform. The environment contains artifacts such as a smart lamp, motorized blinds, and environmental sensors. These artifacts are exposed through Web of Things Thing Descriptions, while a set of specialized agents handle planning, monitoring, semantic discovery, experience reuse, and execution.

The project is meant to show how an HMAS-style environment can support more flexible and context-aware interaction than a setup based only on fixed scripts or direct device commands.

Why HMAS

A central goal of this project is to treat the environment as a hypermedia space, not as a hardcoded list of devices.

In practice, this means that artifacts and workspaces are represented as web resources that can be discovered, described, and queried through standard web technologies. Agents do not need to rely only on fixed integrations. Instead, they can inspect what exists in the environment, understand what each artifact exposes, and adapt their behavior based on that information.

This proof of concept focuses on several HMAS ideas:

  • web-addressable artifacts and workspaces
  • semantic descriptions of capabilities
  • runtime discovery of available affordances
  • agent-to-agent interaction over web interfaces
  • reuse of previous successful interactions through signifiers
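As an illustration of the discovery idea, an agent can inspect an artifact's description at runtime and extract its action affordances without any hardcoded knowledge of the device type. The sketch below is hypothetical: the map-based description and URL stand in for a real Thing Description and parser.

```kotlin
// Hypothetical, simplified artifact description standing in for a parsed
// WoT Thing Description (names and URL are illustrative, not the real ones).
val lampDescription: Map<String, Any> = mapOf(
    "title" to "SmartLamp",
    "actions" to mapOf(
        "toggle" to mapOf(
            "method" to "POST",
            "target" to "http://localhost:8080/workspaces/lab308/artifacts/lamp/toggle"
        )
    )
)

// Runtime discovery: list the action affordances an artifact exposes,
// regardless of what kind of device it is.
fun actionAffordances(description: Map<String, Any>): Set<String> {
    val actions = description["actions"] as? Map<*, *> ?: return emptySet()
    return actions.keys.map { it.toString() }.toSet()
}
```

The same inspection works for any artifact that publishes a description, which is what lets agents adapt to whatever the environment currently contains.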

Because of this, the project is not just about controlling a lamp or some blinds. It is about exploring how a hypermedia multi-agent environment can support reasoning, adaptation, and coordination between agents.

Semantic representation of artifacts

A key part of the system is the way artifacts are semantically represented and understood.

Each artifact in lab308 is exposed as a resource with a machine-readable description based on Web of Things Thing Descriptions. These descriptions do more than just expose endpoints. They describe what an artifact is, what properties it has, what actions it supports, what input schema is required, and how each affordance can be invoked.

Because of this, the agents can reason over artifact descriptions in a structured way. They can inspect:

  • what type of artifact they are dealing with
  • what properties are available
  • what actions can be invoked
  • what parameters are required
  • what HTTP method and target are associated with each action
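For illustration, a Thing Description for the lamp might expose an action affordance along these lines (the identifiers and URLs below are invented for the example; the actual descriptions live in the Lab308 simulation):

```json
{
  "@context": "https://www.w3.org/2022/wot/td/v1.1",
  "title": "SmartLamp",
  "properties": {
    "state": {
      "type": "string",
      "forms": [{ "href": "http://localhost:8080/workspaces/lab308/artifacts/lamp/state" }]
    }
  },
  "actions": {
    "toggle": {
      "description": "Switch the lamp on or off",
      "input": {
        "type": "object",
        "properties": { "state": { "type": "string" } }
      },
      "forms": [{
        "href": "http://localhost:8080/workspaces/lab308/artifacts/lamp/toggle",
        "htv:methodName": "POST"
      }]
    }
  }
}
```

Everything an agent needs to invoke the affordance, including the input schema, the target URI, and the HTTP method, is part of the description itself.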

In the current system:

  • EnvExplorerAgent retrieves artifact descriptions and indexes action affordances
  • EnvMonitoringAgent keeps RDF-based snapshots of live artifact state

Together, these two services give the planner both a semantic understanding of what the environment can do and a live view of what the environment is doing at a given moment.

Signifiers and experience reuse

Another important part of the project is the use of signifiers.

In this system, signifiers are used as reusable, context-rich records of successful past interactions. After a plan is confirmed and executed, the system can store that experience in a form that connects:

  • the user intent
  • the artifact affordance that was useful
  • the context in which that affordance made sense

This allows the system to reuse previous experience instead of planning from scratch every time. When a similar request appears again, the InteractionSolverAgent first checks whether the EnvExplorerAgent already has a signifier that matches the current goal and context. If a matching signifier exists, the system can reuse it. If not, it can still fall back to first-principles planning from raw affordances and live state.

Because of this, signifiers act as a bridge between semantic affordances and accumulated experience. They help the system move from "what the environment can do" to "what has already worked before in a similar situation."
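A minimal sketch of what such a record might look like, assuming a simple exact-match policy on intent and context (the field names are invented for illustration; the real signifier model is richer):

```kotlin
// Hypothetical signifier record: a successful past interaction linking an
// intent to the affordance that satisfied it, plus the context it applied in.
data class Signifier(
    val intent: String,                  // e.g. "increase_brightness"
    val artifact: String,                // artifact that was acted on
    val action: String,                  // affordance that worked
    val context: Map<String, String>     // conditions under which it made sense
)

// Reuse check: a stored signifier matches when the intent is the same and
// every recorded context condition still holds in the current context.
fun matches(stored: Signifier, intent: String, current: Map<String, String>): Boolean =
    stored.intent == intent && stored.context.all { (k, v) -> current[k] == v }
```

In the actual system the matching is done by the EnvExplorerAgent over richer semantic context, but the principle is the same: try reuse first, and fall back to planning from raw affordances otherwise.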

What the system does

  • Understands natural-language requests and turns them into actionable intents.
  • Discovers semantic artifact capabilities and tracks live environment state.
  • Reuses previous successful interactions through signifiers when the context matches.
  • Generates, confirms, and executes plans when direct reuse is not possible.

System overview

The system is organized around four agent roles and one environment platform:

  • UserAssistantAgent: the conversational entry point. It receives the user request, summarizes the plan, asks for confirmation, and starts execution.
  • InteractionSolverAgent: the reasoning component. It builds a JSON plan from capabilities, signifiers, and live environment state.
  • EnvExplorerAgent: the capability and signifier service. It indexes artifact affordances and stores reusable experience.
  • EnvMonitoringAgent: the state service. It keeps an in-memory view of artifact state from WebSub updates.
  • yggdrasil-signifiers: the Lab308 simulation and frontend, built on Yggdrasil.

Main ports used by the system:

  • 8080: Yggdrasil / Lab308 backend
  • 5173: Yggdrasil frontend
  • 8081: EnvMonitoringAgent
  • 8082: EnvExplorerAgent
  • 8083: InteractionSolverAgent
  • 8084: UserAssistantAgent

Interaction flow

  1. A user sends a natural-language request to the UserAssistantAgent.
  2. The UserAssistantAgent decides whether the request is in scope and forwards a structured intent to the InteractionSolverAgent.
  3. The InteractionSolverAgent asks EnvExplorerAgent for signifiers and available affordances, and EnvMonitoringAgent for live state.
  4. If a matching signifier exists, the solver tries to reuse it. Otherwise, it plans from raw capabilities and current context.
  5. The resulting JSON plan is sent back to the UserAssistantAgent, which explains it to the user and asks for confirmation.
  6. After confirmation, the plan is stored as a signifier and executed against the Lab308 environment.
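The plan schema is defined by the InteractionSolverAgent; purely as an illustration, a plan produced for "It's too dark in here" could take a shape like the following (all names and URLs here are invented):

```json
{
  "goal": "increase brightness in lab308",
  "steps": [
    {
      "artifact": "lamp",
      "action": "toggle",
      "method": "POST",
      "target": "http://localhost:8080/workspaces/lab308/artifacts/lamp/toggle",
      "input": { "state": "on" }
    },
    {
      "artifact": "blinds",
      "action": "open",
      "method": "POST",
      "target": "http://localhost:8080/workspaces/lab308/artifacts/blinds/open",
      "input": {}
    }
  ]
}
```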

Repository structure

Each service also has its own local README with more specific details.

Main technologies

  • Java 21
  • Kotlin + Spring Boot + WebFlux
  • Eclipse LMOS / ARC agent tooling
  • LangChain4j
  • RDF4J and RDF/Turtle models
  • Web of Things Thing Descriptions
  • WebSub notifications
  • Yggdrasil Hypermedia MAS
  • CArtAgO-based artifact simulation
  • React + Vite frontend

Related projects

  • Yggdrasil - the Hypermedia MAS platform used to model and run the Lab308 environment.
  • Eclipse LMOS / ARC - the agent platform and framework used to build the four LLM-based services.
  • W3C Web of Things Thing Description 1.1 - the semantic description model used to represent artifacts, properties, actions, and invocation metadata.
  • WebSub - the notification mechanism used to propagate environment updates.
  • JaCaMo - the broader multi-agent ecosystem that includes CArtAgO, the artifact-oriented environment model behind the simulated setup.

Running the project

If you want to run the full setup, the easiest path is the following.

Prerequisites

  • Java 21+
  • Node.js + npm
  • One configured LLM backend
    • OPENAI_API_KEY is the simplest default for all agents
    • UserAssistantAgent also supports OLLAMA_URL and OPENROUTER_API_KEY
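For example, with the OpenAI backend, the agents can be pointed at it before startup roughly like this (the key is a placeholder; the Ollama URL shown is only an example value):

```shell
# One key shared by all four agents (placeholder value)
export OPENAI_API_KEY="sk-your-key-here"

# UserAssistantAgent alternatives (uncomment one if not using OpenAI)
# export OLLAMA_URL="http://localhost:11434"
# export OPENROUTER_API_KEY="your-openrouter-key"
```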

1. Start the Lab308 backend

From yggdrasil-signifiers/:

./gradlew.bat
java -jar build/libs/yggdrasil-0.0.0-SNAPSHOT-all.jar -conf conf/cartago_config_light308.json

2. Start the frontend

From yggdrasil-signifiers/frontend/:

npm install
npm run dev

3. Start the agents

In separate terminals:

cd EnvMonitoringAgent
./gradlew.bat bootRun
cd EnvExplorerAgent
./gradlew.bat bootRun
cd InteractionSolverAgent
./gradlew.bat bootRun
cd UserAssistantAgent
./gradlew.bat bootRun

4. Interact with the system

  • Open the Lab308 frontend at http://localhost:5173
  • Use the LMOS chat UI with the UserAssistantAgent:
https://eclipse.dev/lmos/chat/index.html?agentUrl=http://localhost:8084#/chat

Example request:

It's too dark in here.

Current status

The current implementation is still a research prototype rather than a fully hardened platform. It already demonstrates:

  • multi-agent separation of responsibilities
  • runtime discovery of device affordances
  • semantic interpretation of artifact descriptions
  • context-aware planning over a WoT-described environment
  • reuse of prior experience through signifiers

Some parts still need more work:

  • stronger plan validation before execution
  • better rollback and error recovery for multi-step plans
  • broader evaluation scenarios
  • richer user-guided teaching of new signifiers

Related documentation
