Generate structured onboarding context for Large Language Models (LLMs).
The onboard command outputs a curated JSON document describing RESERVE’s commands, semantics, workflows, and known gotchas. It is designed to give LLMs the minimum context needed to use RESERVE correctly and efficiently.
Examples:
- analyze — statistical analysis semantics
- search — discovery workflows
- pipeline — JSONL and operator model
USAGE
reserve onboard [command]
All commands and subcommands support LLM onboarding. There is also a full onboarding package for LLMs with large context windows, or for inclusion in project documentation.
Onboarding usage ranges from something as simple as pasting the output into an agentic chatbot to storing structured onboarding files in an LLM’s workspace. When RESERVE is exposed to a smart terminal (e.g. Visual Studio Code, Claude, Warp Terminal), the onboard command is discoverable.
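A minimal sketch of the file-based workflow above. The `.llm/` directory name and file names are illustrative assumptions, not paths prescribed by RESERVE; any location your agent scans will do.

```shell
# Assumption: store per-command onboarding JSON where an LLM agent can read it.
mkdir -p .llm
reserve onboard obs > .llm/reserve-obs.json
reserve onboard pipeline > .llm/reserve-pipeline.json
```

Regenerating these files after upgrading RESERVE keeps the stored context in sync with the installed command surface.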
# Command-specific onboarding for `obs`
reserve onboard obs
{
  "command": {
    "common_user_intents": [
      "Fetch a date-bounded observation range.",
      "Get the latest reading for one or more known series IDs."
    ],
    "description": "`obs` is the canonical observation retrieval command family for both live API reads and local cached reads.",
    "examples": [
      "reserve obs get CPIAUCSL --start 2020-01-01 --format jsonl",
      "reserve obs latest FEDFUNDS UNRATE"
    ],
    "flags": {
      "get": "--from --start --end --freq --units --agg --limit",
      "latest": "no command-specific flags"
    },
    "gotchas": [
      "`obs get` defaults to table format even when piped. Always add `--format jsonl` before `| reserve transform ...`.",
      "If you fetch multiple series at once and pipe them, downstream pipeline operators treat the interleaved rows as one stream."
    ],