AI Agent Framework
How Eliza and LangGraph orchestrate the Metalos assistant across portfolio, governance, and automation tasks.
Metalos uses a multi-node architecture powered by Eliza and LangGraph, with specialized workflow nodes for different tasks. The Metalos agentic infrastructure handles portfolio analysis, research, proposal generation, and data retrieval, combining deterministic workflow graphs, carefully engineered prompts, and contextual data to deliver intelligent responses.
The AI layer sits between the user-facing apps and the underlying automation services. When a user opens the portfolio chat window, the client sends the current wallet address, a snapshot of their portfolio, and the conversation history to a Metalos API endpoint. The server passes that bundle to a LangGraph StateGraph workflow, which first verifies and enriches the portfolio data, then invokes an LLM with a custom system prompt that includes vault descriptions, risk characteristics, and the user's actual balances. The assistant's response is streamed back to the client and displayed in the chat UI.
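A minimal sketch of that two-node workflow, assuming the @langchain/langgraph and @langchain/openai packages; the state fields, placeholder enrichment, and model choice are illustrative, not the actual portfolio-agent.ts implementation:

```typescript
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import { BaseMessage, SystemMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";

// Workflow state: conversation history, wallet address, and portfolio snapshot.
const PortfolioState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (prev, next) => prev.concat(next), // append each turn to the history
    default: () => [],
  }),
  walletAddress: Annotation<string>(),
  portfolio: Annotation<Record<string, unknown> | null>(),
});

const llm = new ChatOpenAI({ model: "gpt-4o-mini" }); // model choice is illustrative

// Node 1: verify and enrich the client-provided snapshot. A real implementation
// would re-query positions server-side; this placeholder passes the snapshot through.
async function fetchPortfolio(state: typeof PortfolioState.State) {
  const portfolio = state.portfolio ?? { positions: [], totalUsd: 0 };
  return { portfolio };
}

// Node 2: call the LLM with a portfolio-aware system prompt built from the state.
async function chat(state: typeof PortfolioState.State) {
  const systemPrompt =
    `You are the Metalos portfolio assistant for wallet ${state.walletAddress}. ` +
    `Current portfolio snapshot: ${JSON.stringify(state.portfolio)}`;
  const reply = await llm.invoke([new SystemMessage(systemPrompt), ...state.messages]);
  return { messages: [reply] };
}

export const portfolioAgent = new StateGraph(PortfolioState)
  .addNode("fetch_portfolio", fetchPortfolio)
  .addNode("chat", chat)
  .addEdge(START, "fetch_portfolio")
  .addEdge("fetch_portfolio", "chat")
  .addEdge("chat", END)
  .compile();
```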
Phase 1 keeps data collection client-driven for simplicity. Phase 2 will move portfolio aggregation and risk analytics fully server-side so responses rely on Metalos-signed data rather than client-provided snapshots.
Metalos uses two separate LangGraph agents for different use cases:
Portfolio Agent
Location: Embedded in the Next.js app (lib/langgraph/portfolio-agent.ts)
Purpose: Portfolio-specific questions and analysis
Nodes:
- fetch_portfolio - Loads wallet positions, normalizes balances, calculates weighted APY (see the sketch after this list)
- chat - LLM with portfolio-aware system prompt
State: Conversation history, portfolio data, wallet address
Use Cases:
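The weighted APY mentioned above is a balance-weighted average of vault APYs. A minimal sketch of the calculation; the Position shape is an assumption, not the actual Metalos type:

```typescript
// Illustrative shape of a normalized position inside fetch_portfolio.
interface Position {
  balanceUsd: number; // normalized USD balance
  apy: number;        // vault APY as a fraction, e.g. 0.05 for 5%
}

// Each vault contributes to the portfolio APY in proportion to its share of the balance.
export function weightedApy(positions: Position[]): number {
  const totalBalance = positions.reduce((sum, p) => sum + p.balanceUsd, 0);
  if (totalBalance === 0) return 0;
  return positions.reduce((sum, p) => sum + p.apy * (p.balanceUsd / totalBalance), 0);
}
```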
Research Agent
Location: ROFL agent service (rofl-agent/src/research-agent/)
Purpose: DeFi research, proposal generation, documentation retrieval
Nodes:
- chat - Main conversational node with tool calling (see the routing sketch after this list)
- proposal_generation - Guided governance proposal creation (multi-step flow)
Tools Integrated:
State Management:
Use Cases:
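A simplified sketch of the tool-calling loop at the core of such a multi-node agent, using LangGraph's MessagesAnnotation and prebuilt ToolNode; the search_docs tool is a hypothetical stand-in for the RAG integration, and the proposal_generation branch is omitted for brevity:

```typescript
import { AIMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { StateGraph, MessagesAnnotation, START, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Hypothetical documentation-retrieval tool standing in for the RAG integration.
const searchDocs = tool(
  async ({ query }) => `Relevant Metalos documentation passages for: ${query}`,
  {
    name: "search_docs",
    description: "Retrieve relevant Metalos documentation passages",
    schema: z.object({ query: z.string() }),
  }
);

const llm = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools([searchDocs]);

// Main conversational node: the model may answer directly or request a tool call.
async function chat(state: typeof MessagesAnnotation.State) {
  const reply = await llm.invoke(state.messages);
  return { messages: [reply] };
}

// Route to the tool node when the last message contains tool calls; otherwise end the turn.
function route(state: typeof MessagesAnnotation.State) {
  const last = state.messages[state.messages.length - 1] as AIMessage;
  return last.tool_calls?.length ? "tools" : END;
}

export const researchAgent = new StateGraph(MessagesAnnotation)
  .addNode("chat", chat)
  .addNode("tools", new ToolNode([searchDocs]))
  .addEdge(START, "chat")
  .addConditionalEdges("chat", route, ["tools", END])
  .addEdge("tools", "chat")
  .compile();
```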
This is a two-agent architecture: a simple Portfolio Agent for basic questions and a comprehensive Research Agent for advanced analysis and proposal workflows.
The system prompt generated in system-prompts.ts gives the LLM a complete snapshot of the user's holdings.
Because the prompt is regenerated every turn, the assistant always references the latest balances and strategy settings without needing persistent memory.
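An illustrative shape for such a per-turn prompt builder; the VaultPosition fields and prompt wording are assumptions, not the actual contents of system-prompts.ts:

```typescript
// Hypothetical per-vault data used to build the portfolio-aware system prompt.
interface VaultPosition {
  vault: string;
  description: string;
  riskLevel: "low" | "medium" | "high";
  balanceUsd: number;
  apy: number; // fraction, e.g. 0.05 for 5%
}

// Rebuilt every turn so the prompt always reflects the latest balances.
export function buildPortfolioSystemPrompt(wallet: string, positions: VaultPosition[]): string {
  const lines = positions.map(
    (p) =>
      `- ${p.vault} (${p.riskLevel} risk): $${p.balanceUsd.toFixed(2)} at ` +
      `${(p.apy * 100).toFixed(2)}% APY; ${p.description}`
  );
  return [
    `You are the Metalos portfolio assistant for wallet ${wallet}.`,
    `Current holdings:`,
    ...lines,
    `Answer questions using only the balances above; they were refreshed this turn.`,
  ].join("\n");
}
```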
The Research Agent runs inside Oasis ROFL (Runtime Offchain Logic), a Trusted Execution Environment (TEE) that provides hardware-level security for sensitive AI workloads.
Code Integrity:
Runtime Privacy:
IP Protection (Metalos):
Unique Advantage: Metalos is one of the few DeFi platforms offering TEE-protected AI research. This enables confidential analysis that traditional AI services can't provide.
Metalos can be configured to run the Research Agent in TEE or standard mode via environment variables:
USE_TEE_AGENT=true → All queries use Oasis ROFL TEE
When TEE is enabled:
TEE produces cryptographic proof:
You receive:
TEE usage is currently an all-or-nothing configuration, not per-query routing. When enabled, all Research Agent queries run in the TEE. When disabled, all queries use standard execution.
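A sketch of what this all-or-nothing switch can look like server-side; the /chat endpoint path and the ROFL_AGENT_URL and AGENT_URL environment variables are assumptions, only USE_TEE_AGENT comes from the configuration described above:

```typescript
// Deployment-level flag: every Research Agent query follows the same path.
const useTee = process.env.USE_TEE_AGENT === "true";

export async function runResearchQuery(prompt: string): Promise<string> {
  // When TEE mode is on, all queries go to the ROFL-hosted agent;
  // otherwise all queries use the standard (non-TEE) execution path.
  const endpoint = useTee
    ? process.env.ROFL_AGENT_URL // TEE-hosted research agent (assumed env var)
    : process.env.AGENT_URL;     // standard research agent (assumed env var)

  const res = await fetch(`${endpoint}/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`Agent request failed: ${res.status}`);
  const { reply } = await res.json();
  return reply;
}
```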
In the UI: Look for the shield icon (🛡️) next to AI responses
Click the shield to see:
See TEE Attestations Guide for detailed verification instructions.
| Feature | TEE (ROFL) | Standard |
|---|---|---|
| Privacy | Maximum (hardware encrypted) | Standard HTTPS |
| Verifiability | Cryptographic attestations | None |
| Latency | +200-500 ms (encryption overhead) | Fast |
| Cost | Higher per query | Lower |
| Configuration | USE_TEE_AGENT=true | Default |
TEE is configured at deployment level (all queries or none), not per-query. When TEE is enabled, you get maximum privacy for all research with slightly higher latency.
The Eliza UI hosts the chat interface inside the portfolio page. It handles:
Because Eliza is highly customizable, future enhancements can add voice input, multimodal responses, or embedded action buttons driven by agent output.
Data Source Integration:
RAG Documentation Retrieval:
Proposal Generation:
Deep Agent Tools:
State Persistence:
Portfolio Analysis:
Conversational Support:
Portfolio Agent Enhancements:
Research Agent Enhancements:
System-Wide:
Current Status: Both Portfolio Chat (basic 2-node) and Research Agent (advanced multi-node with RAG, proposal generation, and Deep Agent tools) are live and operational.