Operations
Configuration
Myria reads JSON configuration.
Key sections:
- `service`
- `mcp`
- `postgres`
- `llm`
- `builder`
Important v1 rules:
- `mcp.transport` must be `stdio`; network MCP is not supported
- `llm.request_template_path` must point to a JSON OpenRouter request template
- `service.log_file` should point to a local append-only log file
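A minimal sketch of the expected shape, tying the sections and the v1 rules together. The file paths are placeholders, and the `postgres` and `builder` sections take implementation-specific keys that are omitted here:

```json
{
  "service": { "log_file": "./myria.log" },
  "mcp": { "transport": "stdio" },
  "postgres": {},
  "llm": { "request_template_path": "./openrouter-request.json" },
  "builder": {}
}
```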
OpenRouter Request Template
The request template lives outside environment variables so model-call settings can be versioned and edited explicitly.
The current implementation injects:
- messages
- tool schemas
- tool choice
The template carries:
- model
- provider preferences
- temperature
- top-p
- headers such as the OpenRouter API key
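A sketch of what such a template might look like. The exact template schema is defined by the implementation; this only illustrates the kind of settings the template carries, and the specific values, the header placement, and the provider-preference key are assumptions. The runtime-injected fields (`messages`, `tools`, `tool_choice`) are deliberately absent:

```json
{
  "headers": {
    "Authorization": "Bearer YOUR_OPENROUTER_KEY"
  },
  "model": "openrouter/auto",
  "provider": { "allow_fallbacks": true },
  "temperature": 0.2,
  "top_p": 0.9
}
```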
Builder Operations
A build can be triggered in three ways:
- threshold trigger on append
- inactivity timeout in the background worker
- explicit manual trigger
Build and publish are serialized. The active snapshot changes only after validation and a successful publish.
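The serialization guarantee above can be sketched as a mutex around the build-validate-publish cycle. This is an illustrative model, not Myria's actual code; the `Snapshot` and `Builder` types and the `validate` stub are assumptions:

```go
package main

import (
	"fmt"
	"sync"
)

// Snapshot stands in for a built index snapshot.
type Snapshot struct{ ID int }

// Builder serializes build+publish with a mutex: concurrent
// triggers (threshold, inactivity timeout, manual) queue behind it.
type Builder struct {
	mu     sync.Mutex
	next   int
	active *Snapshot
}

// validate is a stand-in for snapshot validation.
func validate(s *Snapshot) error { return nil }

// Build runs one build+publish cycle. The active snapshot is
// swapped only after validation succeeds; on error it is untouched.
func (b *Builder) Build() error {
	b.mu.Lock()
	defer b.mu.Unlock()

	b.next++
	candidate := &Snapshot{ID: b.next}
	if err := validate(candidate); err != nil {
		return err // active snapshot unchanged on failure
	}
	b.active = candidate // publish
	return nil
}

func main() {
	b := &Builder{}
	_ = b.Build()
	_ = b.Build()
	fmt.Println(b.active.ID) // prints 2
}
```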
Schema Discipline
Myria now records a schema version in PostgreSQL during migration.
Operationally, that means:
- `migrate` initializes both tables and schema version metadata
- `serve` and `build` verify the schema version before doing real work
- startup fails early if the schema is missing or incompatible
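The startup gate amounts to a simple comparison between the version stored by `migrate` and the version the binary expects. A minimal sketch of that decision, assuming a convention where 0 means no version row was found; the constant and function names are illustrative, not Myria's actual identifiers:

```go
package main

import "fmt"

// requiredSchemaVersion is the version this build expects; the real
// constant lives inside the implementation.
const requiredSchemaVersion = 1

// checkSchemaVersion mirrors the startup gate that serve and build
// run before doing real work. got is the version read from the
// metadata migrate wrote into PostgreSQL (0 = no version found).
func checkSchemaVersion(got int) error {
	switch {
	case got == 0:
		return fmt.Errorf("schema version missing: run migrate first")
	case got != requiredSchemaVersion:
		return fmt.Errorf("schema version %d incompatible with required %d",
			got, requiredSchemaVersion)
	}
	return nil
}

func main() {
	fmt.Println(checkSchemaVersion(0)) // fails early: schema missing
	fmt.Println(checkSchemaVersion(requiredSchemaVersion))
}
```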
Commands
Run migrations:
go run ./cmd/myria -config ./myria.json -cmd migrate
Run one build:
go run ./cmd/myria -config ./myria.json -cmd build
Run the stdio MCP server:
go run ./cmd/myria -config ./myria.json -cmd serve
Testing
The current test suite includes:
- unit tests for core identity logic
- PostgreSQL-backed service integration tests
- OpenRouter-backed LLM integration tests with explicit timeouts
Recommended test command:
go test -timeout 5m ./...
Observability
Use the MCP admin tools for operational visibility:
- `myria.get_active_snapshot`
- `myria.list_snapshots`
- `myria.get_snapshot_status`
- `myria.get_topic`
- `myria.get_event`
- `myria.trigger_build`
`myria.get_snapshot_status` is the main live status surface for:
- unindexed backlog
- build readiness
- current builder activity
- last build result
Internal runtime logs are written to the file configured by `service.log_file`.
The current implementation writes append-only JSON lines and flushes the file after each write.
Practical Limits
The internal LLM workflows are intentionally bounded. Operationally, that means:
- every model turn is participant-masked
- every inspection tool is depth/breadth/byte-limited
- long logs are handled through incremental inspection rather than giant prompts