Model API Reference
Authoritative reference for model integration interfaces and configuration.
Interfaces
In the MVP, model integration is internal to the Runtime (there is no separate “model service” API).
When running the Runtime in openai mode, the Planner, Tool Args Generator, and Chat roles call an LlmClient:
jarvis_runtime.llm.client.LlmClient.responses(system_prompt, user_prompt, output_schema?, model_override?) -> dict
The shipped implementation is OpenAIResponsesClient, which uses the OpenAI Python SDK to call the OpenAI Responses API and (optionally) request JSON Schema structured outputs.
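The documented `responses(...)` signature can be sketched as a Python protocol. The protocol shape below follows the signature given above; the `StubClient` class and its return keys (`output_text`, `model`) are illustrative assumptions, not the shipped implementation.

```python
from typing import Any, Optional, Protocol


class LlmClient(Protocol):
    """Interface sketch matching the documented signature; not the shipped code."""

    def responses(
        self,
        system_prompt: str,
        user_prompt: str,
        output_schema: Optional[dict] = None,
        model_override: Optional[str] = None,
    ) -> dict[str, Any]: ...


class StubClient:
    """Hypothetical minimal stand-in, useful for exercising the roles offline."""

    def responses(
        self,
        system_prompt: str,
        user_prompt: str,
        output_schema: Optional[dict] = None,
        model_override: Optional[str] = None,
    ) -> dict[str, Any]:
        # Echo the prompt so callers can verify wiring without a network call.
        return {"output_text": f"echo: {user_prompt}", "model": model_override or "stub"}
```

Because the roles depend only on this one method, any object with a conforming `responses(...)` satisfies the interface without inheritance.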
Configuration
OpenAI mode configuration:
- Required:
  OPENAI_API_KEY (or JARVIS_OPENAI_API_KEY)
- Optional:
  OPENAI_BASE_URL (or JARVIS_OPENAI_BASE_URL)
  JARVIS_MODEL_DEFAULT
  JARVIS_MODEL_PLANNER, JARVIS_MODEL_TOOL_ARGS, JARVIS_MODEL_CHAT
Runtime selection:
--mode openai (or JARVIS_RUNTIME_MODE=openai)
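The environment-variable resolution above can be sketched as a small helper. The variable names come from this section; the helper name `resolve_openai_config` and the precedence (the `OPENAI_`-prefixed variable wins over its `JARVIS_`-prefixed alternate) are assumptions for illustration.

```python
import os
from typing import Optional


def resolve_openai_config() -> dict[str, Optional[str]]:
    """Hypothetical helper: collect openai-mode settings from the environment.

    Assumes OPENAI_* takes precedence over the JARVIS_OPENAI_* alternate;
    the real runtime may resolve these differently.
    """
    api_key = os.environ.get("OPENAI_API_KEY") or os.environ.get("JARVIS_OPENAI_API_KEY")
    if not api_key:
        # The key is the only required setting in openai mode.
        raise RuntimeError("OPENAI_API_KEY (or JARVIS_OPENAI_API_KEY) is required")
    return {
        "api_key": api_key,
        "base_url": os.environ.get("OPENAI_BASE_URL") or os.environ.get("JARVIS_OPENAI_BASE_URL"),
        "model_default": os.environ.get("JARVIS_MODEL_DEFAULT"),
        "model_planner": os.environ.get("JARVIS_MODEL_PLANNER"),
        "model_tool_args": os.environ.get("JARVIS_MODEL_TOOL_ARGS"),
        "model_chat": os.environ.get("JARVIS_MODEL_CHAT"),
    }
```

Per-role model variables (planner, tool args, chat) would override the default for that role only; unset optional values stay `None` so the SDK defaults apply.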
Extensibility
To add a new provider in the MVP:
- Implement LlmClient.
- Add a new runtime mode that instantiates your client and wires it into LlmPlanner, LlmToolArgsGenerator, and LlmChat.
- Validate behavior via trace output (llm_call/llm_result events) and keep stub mode as a fallback.
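A new provider adapter only has to implement the single `responses(...)` method. In the sketch below, the class name `EchoProviderClient` and its return keys are hypothetical; only the method signature comes from the documented interface, and the comments note where a real adapter would call the provider's API.

```python
from typing import Any, Optional


class EchoProviderClient:
    """Hypothetical alternate-provider adapter implementing the LlmClient shape."""

    def responses(
        self,
        system_prompt: str,
        user_prompt: str,
        output_schema: Optional[dict] = None,
        model_override: Optional[str] = None,
    ) -> dict[str, Any]:
        # A real adapter would call the provider's API here; this stub
        # transforms the prompt deterministically so the wiring can be
        # checked against llm_call/llm_result trace events.
        result: dict[str, Any] = {"output_text": user_prompt.upper()}
        if output_schema is not None:
            # A real client would return JSON conforming to output_schema.
            result["output_parsed"] = {}
        return result
```

A new runtime mode would then construct this client once and pass it to LlmPlanner, LlmToolArgsGenerator, and LlmChat (their exact constructor signatures are not specified here), keeping stub mode available as a fallback.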