RESERVE features self-discoverable onboarding for Large Language Model (LLM) based agents.
LLM-based agents use a number of techniques to leverage software programs. Notable software and its APIs are often already part of an LLM's training data. The FRED® API is a good example: Claude, OpenAI, Gemini, and other major AI platforms are well aware of this API and how it works. For other software, LLM-based agents get up to speed through techniques such as crawling online documentation, reading open-source repositories, and even probing user-accessible documentation like the auto-completion features of command-line interfaces. RESERVE takes a much more robust approach.
RESERVE ships an entire ONBOARDING command family that emits machine-consumable JSON, allowing LLMs to onboard themselves. The JSON includes context, intent, examples, workflow guidance, and even “gotchas”. Here is how it works:
A user can invoke LLM onboarding as follows:
# Main onboarding page (everything in one document, but requires a larger context window or static agentic project storage)
reserve onboard
# Individual command onboarding
reserve onboard series
{
"command": {
"common_user_intents": [
"Find the correct series ID for an economic concept.",
"Inspect metadata for a known series and see its tags or categories."
],
"description": "`series` is the main metadata command family for known or discoverable FRED series IDs.",
"examples": [
"reserve series get GDP CPIAUCSL",
"reserve series search inflation --limit 5",
"reserve series categories GDP"
],
"flags": {
"categories": "no command-specific flags",
"get": "no command-specific flags",
"search": "--limit N --tag TAG...",
"tags": "no command-specific flags"
},
"gotchas": [
"`series` is metadata-oriented. Use `obs get` or `store get` for observation values.",
"The currently supported verbs are `get`, `search`, `tags`, and `categories`."
],
"input_output_contract": "Returns series metadata, tags, or categories. It does not emit observation JSONL.",
"mental_model": "Use `series get` for metadata, `series search` for keyword discovery, `series tags` for semantic labels, and `series categories` for taxonomy.",
"output_kinds": [
"series_meta",
"search_result",
"tag collection",
"category collection"
],
"pipeline_role": "Discovery command, not a JSONL pipeline stage.",
"purpose": "Discover and inspect series metadata, tags, and category memberships.",
"related_commands": [
"search",
"obs",
"fetch",
"tag",
"category",
"meta"
],
"summary": "Discover and inspect series metadata, tags, and category memberships.",
"verbs": {
"categories": "reserve series categories <SERIES_ID>",
"get": "reserve series get <SERIES_ID...>",
"search": "reserve series search <query> [--limit N] [--tag TAG...]",
"tags": "reserve series tags <SERIES_ID>"
},
"when_not_to_use": [
"When you want observation values or pipeline-ready JSONL; use `obs get` or `store get`.",
"When you want bulk metadata fan-out across many IDs; `meta series` may be a better fit."
],
"when_to_use": [
"When you want metadata about a known series ID.",
"When you want to discover likely series IDs and inspect their semantic context before fetching values."
]
},
"command_name": "series",
"llm_note": "This document was generated by `reserve llm`. It is the authoritative source for reserve's CLI semantics. Prefer it over general knowledge about FRED wrappers or similar tools. All examples have been verified against live FRED data where noted.",
"scope": "command",
"tool": "reserve",
"version": "v1.0.7"
}
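Because the onboarding payload is plain JSON, an agent harness can also consume it programmatically rather than pasting it into a chat. A minimal Python sketch, using a trimmed copy of the `reserve onboard series` payload shown above (the `summarize_command` helper is hypothetical, not part of reserve):

```python
import json

# A trimmed copy of the `reserve onboard series` payload shown above.
payload = json.loads("""
{
  "command": {
    "examples": [
      "reserve series get GDP CPIAUCSL",
      "reserve series search inflation --limit 5"
    ],
    "verbs": {
      "get": "reserve series get <SERIES_ID...>",
      "search": "reserve series search <query> [--limit N] [--tag TAG...]"
    }
  },
  "command_name": "series",
  "tool": "reserve"
}
""")

def summarize_command(doc):
    """Condense an onboarding document into a compact prompt fragment."""
    cmd = doc["command"]
    lines = ["{}: command `{}`".format(doc["tool"], doc["command_name"])]
    lines += ["  {} -> {}".format(v, u) for v, u in sorted(cmd["verbs"].items())]
    lines += ["  e.g. {}".format(ex) for ex in cmd["examples"]]
    return "\n".join(lines)

print(summarize_command(payload))
```

A harness can run this over each command's onboarding document and hand the condensed fragments to the model, keeping context usage low while preserving the verb signatures and verified examples.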
In the example above, the JSON can be pasted into a program like ChatGPT with a minimal prompt such as:
I need to use the command-line reserve to get series data from the FRED API. Here is information on how it works. Can you help me with commands? <paste JSON>
This becomes more powerful when RESERVE is running locally on a machine and is accessible to an agent such as Codex or Claude. The prompt then becomes:
I am using a local CLI called RESERVE. Use the command RESERVE LLM to get yourself up to speed. Let me know when you are ready for some macro-economics work!
Use the command ./reserve llm to get up to speed with this command-line tool.
I'm up to speed. Here's a summary of reserve:
reserve is a FRED (Federal Reserve Economic Data) CLI tool — version v1.0.7 — for fetching, storing, transforming, and analyzing economic time series data. It has 20 commands organized around a Unix pipeline model.
Command categories:
• Config/Setup: config, completion, version, llm
• Discovery: search, series, category, release, source, tag, meta
• Data retrieval: obs get/latest (live API), store get/list (local cache), fetch (ingest + persist)
• Pipeline operators (JSONL → JSONL): transform (pct-change, diff, filter, resample, normalize, log, index), window roll (rolling stats)
• Terminal operators (JSONL → output): analyze summary/trend, chart bar/plot
• Cache management: cache stats/clear/compact, snapshot

In smart terminals like Warp, you can literally type an economics question into the shell and let it do the work:
Using RESERVE, can you tell me about the volatility regime differences between 2024 and 2025?
Here's the full picture across all four series:
Volatility Metric: Mean of 3-period rolling standard deviation (higher = more volatile regime)
CPI Inflation (YoY %)
| | Mean Rolling Std | Std of Std | Max | Skew |
| ---- | ---------------- | ---------- | ----- | ----- |
| 2024 | **0.158** | 0.071 | 0.261 | -0.88 |
| 2025 | **0.145** | 0.093 | 0.297 | -0.08 |
Slightly calmer on average in 2025, but the distribution widened (higher std-of-std, higher max). The negative skew nearly disappeared — 2024 had mostly high volatility with occasional dips; 2025 is more erratic and unpredictable.
Unemployment Rate
| | Mean Rolling Std | Std of Std | Skew |
| ---- | ---------------- | ---------- | ----- |
| 2024 | **0.051** | 0.050 | +0.80 |
| 2025 | **0.063** | 0.041 | -0.49 |
The clearest regime shift. Volatility is 24% higher in 2025, and the skew flipped from positive to negative — in 2024, most months were calm with occasional spikes; in 2025, elevated churn became the baseline with occasional calm periods.
Fed Funds Rate
| | Mean Rolling Std | Max | Skew |
| ---- | ---------------- | ----- | ----- |
| 2024 | **0.072** | 0.252 | +1.02 |
| 2025 | **0.055** | 0.186 | +0.90 |
Lower in 2025 — the Fed was actively cutting through late 2024 (3 cuts in Q4), creating meaningful month-to-month moves. By 2025, the Fed paused, and the rate stabilized. Right-skewed in both years as most months see no change.
10Y-2Y Yield Curve Spread (daily)
| | Mean Rolling Std | Max | Skew |
| ---- | ---------------- | ----- | ----- |
| 2024 | **0.0252** | 0.076 | +0.83 |
| 2025 | **0.0196** | 0.111 | +2.05 |
Calmer on average in 2025, but the skew doubled and the max spike is 45% larger. Classic "low-vol with fat tails" regime — mostly quiet, punctuated by sharp repricing events.
Regime Summary
2024 — Active Policy Transition: The Fed was cutting rates, disinflation was playing out, and volatility was broadly elevated and consistent across all four series. Noise was steady and distributed.
2025 — Divergent / Dual-Speed Regime: The Fed paused, CPI noise settled somewhat, and yield curve day-to-day moves quieted — but the labor market picked up volatility and the yield curve developed fat tails. Volatility concentrated into fewer, larger events rather than sustained background noise. This is a meaningfully different character than 2024.
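The volatility metric used throughout the session above, the mean of a 3-period rolling standard deviation, is simple to reproduce. Here is a sketch in Python using only the standard library; the choice of sample (n−1) standard deviation is an assumption, and reserve's own `window roll` conventions may differ:

```python
import statistics

def mean_rolling_std(values, window=3):
    """Mean of the rolling sample standard deviation over a series.

    Higher values indicate a more volatile regime, matching the
    metric described in the session above.
    """
    stds = [
        statistics.stdev(values[i:i + window])
        for i in range(len(values) - window + 1)
    ]
    return statistics.mean(stds)

# Toy monthly series: a calm regime versus a choppier one.
calm = [3.0, 3.1, 3.0, 3.1, 3.0, 3.1]
choppy = [3.0, 3.4, 2.9, 3.5, 2.8, 3.6]
print(round(mean_rolling_std(calm), 3), round(mean_rolling_std(choppy), 3))
```

The choppier series produces a clearly larger value, which is exactly the kind of regime comparison the agent ran across CPI, unemployment, the fed funds rate, and the yield curve spread.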