## TL;DR
- ARP is not a walled garden. It’s a portability + governance layer at the seams.
- Existing protocols such as MCP, as well as remote A2A endpoints, can be treated as capability sources.
- Once imported, capabilities inherit the same ARP posture: bounded menus, enforceable constraints, and durable evidence.
## The interoperability stance
Most ecosystems optimize for connectivity: “can I call the thing?”
ARP optimizes for operating a system safely:
- Can I bound the action space?
- Can I enforce policy checkpoints?
- Can I replay and audit runs?
- Can I swap implementations without rewriting clients?
That means interoperability happens through composition, not replacement.
## The mapping: “external tool” → NodeType
In ARP, the integration surface is a NodeType contract. An adapter can wrap an external tool or remote agent endpoint behind a stable NodeType with schemas and constraints.
Conceptual example:
```json
{
  "node_type_id": "mcp.github.search_issues",
  "version": "0.1.0",
  "kind": "atomic",
  "description": "Search GitHub issues via an MCP-backed integration.",
  "input_schema": {
    "type": "object",
    "required": ["query"],
    "properties": { "query": { "type": "string" } }
  },
  "output_schema": {
    "type": "object",
    "required": ["results"],
    "properties": { "results": { "type": "array" } }
  },
  "constraints": {
    "gates": { "side_effect_class": "read", "require_approval": false },
    "budgets": { "max_wall_time_ms": 8000, "max_external_calls": 1 }
  }
}
```
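A contract like this can be enforced mechanically at the adapter boundary. The sketch below is illustrative, not the ARP reference implementation: it assumes a simplified contract dict, checks only required keys rather than running a full JSON Schema validator, and uses a stand-in `transport` callable where a real adapter would speak MCP or A2A.

```python
import time

# Simplified version of the NodeType contract above (illustrative only).
NODE_TYPE = {
    "node_type_id": "mcp.github.search_issues",
    "input_schema": {"required": ["query"]},
    "output_schema": {"required": ["results"]},
    "constraints": {
        "gates": {"side_effect_class": "read", "require_approval": False},
        "budgets": {"max_wall_time_ms": 8000, "max_external_calls": 1},
    },
}

def invoke(node_type, payload, transport):
    """One bounded invocation: validate input, enforce the wall-time
    budget, validate output. `transport` stands in for the remote call."""
    # 1. Input must satisfy the declared schema (required keys, minimally).
    missing = [k for k in node_type["input_schema"]["required"] if k not in payload]
    if missing:
        raise ValueError(f"input missing required fields: {missing}")

    # 2. Enforce the wall-time budget around the external call.
    budget_ms = node_type["constraints"]["budgets"]["max_wall_time_ms"]
    start = time.monotonic()
    result = transport(payload)
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > budget_ms:
        raise TimeoutError(f"exceeded budget: {elapsed_ms:.0f}ms > {budget_ms}ms")

    # 3. Output must satisfy the declared schema before callers see it.
    missing = [k for k in node_type["output_schema"]["required"] if k not in result]
    if missing:
        raise ValueError(f"output missing required fields: {missing}")
    return result

# Usage with a fake transport standing in for the remote tool:
fake_transport = lambda payload: {"results": [f"issue matching {payload['query']}"]}
out = invoke(NODE_TYPE, {"query": "flaky test"}, fake_transport)
```

The point is the shape, not the checks themselves: because every external capability passes through the same `invoke` seam, the schemas and budgets live in one place and the underlying transport can be swapped without touching callers.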
## The safety wedge: imported capabilities become bounded
Once capabilities exist as NodeTypes, Selection can emit a bounded CandidateSet at every subtask. That keeps MCP/A2A integrations from becoming “a giant unbounded tool universe”.
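One way to picture a bounded CandidateSet, as a sketch only (the catalog shape, field names, and filter policy here are illustrative assumptions, not the spec): Selection starts from the full NodeType catalog and emits only the entries whose gate metadata fits the current subtask, capped at a fixed menu size.

```python
# Hypothetical catalog: NodeType ids mapped to their gate metadata.
CATALOG = {
    "mcp.github.search_issues": {"side_effect_class": "read", "require_approval": False},
    "mcp.github.close_issue": {"side_effect_class": "write", "require_approval": True},
    "a2a.billing.refund": {"side_effect_class": "write", "require_approval": True},
}

def candidate_set(catalog, allowed_effects, max_size):
    """Emit a bounded CandidateSet: only NodeTypes whose side-effect
    class the current subtask permits, capped at max_size entries."""
    candidates = [
        node_id
        for node_id, gates in catalog.items()
        if gates["side_effect_class"] in allowed_effects
    ]
    return sorted(candidates)[:max_size]  # deterministic, bounded menu

# A read-only subtask sees a one-item menu, not the whole tool universe.
menu = candidate_set(CATALOG, allowed_effects={"read"}, max_size=5)
```

Importing a thousand MCP tools does not change the shape of any single decision: the planner still chooses from a small, policy-filtered menu.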
## When to use this approach
- You want to bring existing tool catalogs into an ARP system without rewriting everything.
- You need auditability and governance for remote actions.
- You want portability across multiple underlying tool protocols.
## Next
- Read the Standard: /spec
- See the ecosystem overview: /ecosystem
- Run JARVIS and inspect artifacts: /quickstart