fp-wraptr¶
Python toolkit to modernize the Fair-Parke macroeconomic model workflow.
fp-wraptr wraps Ray Fair's US Macroeconometric Model, making it easier to run scenarios, inspect results, compare forecasts, and build on decades of economic modeling work — all from Python.
It reads the standard Fair Model files (fminput.txt, fmdata.txt, fmexog.txt, fmout.txt) directly, so you can use your existing model data as-is. On top of that, fp-wraptr adds YAML scenario configs, a compact DSL, and an MCP server for LLM-assisted authoring.
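To make the YAML scenario idea concrete, here is a minimal sketch of what a config might look like. The field names (`name`, `horizon`, `overrides`, and so on) are illustrative assumptions, not the authoritative schema — see the scenario configuration reference for the real keys:

```yaml
# Hypothetical scenario sketch -- field names are illustrative only
name: fed-tightening
description: 100bp funds-rate shock vs. baseline
horizon:
  start: 2024Q1
  end: 2026Q4
overrides:
  RS: +1.0        # shift the short rate by one percentage point
```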
New to the Fair-Parke model?
The FP model is a large-scale macroeconometric model of the US economy maintained by Ray Fair at Yale University. It contains 130+ equations covering output, employment, prices, interest rates, and government accounts. fp-wraptr lets you drive this model from modern tooling instead of hand-editing FORTRAN-era input files.
What can you do?¶
- Run forecasts — Define scenarios in YAML, execute with `fp run`, get structured output in pandas DataFrames
- Compare scenarios — Diff two runs side-by-side, identify top-moving variables, export deltas to CSV
- Update data from FRED — Pull the latest economic data from FRED, BEA, and BLS directly into the model
- Explore equations — Build dependency graphs, trace how variables flow through 130+ equations
- Validate with parity — Run the original FORTRAN engine and a pure-Python solver head-to-head to verify results
- Use AI agents — An MCP server with 44 tools lets LLMs author scenarios, run models, and interpret results
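The scenario-comparison step above can be sketched with plain pandas. This assumes each run's output is a DataFrame indexed by period with model variables as columns, as fp-wraptr's parsed outputs would be; the helper name `top_movers` is hypothetical, not fp-wraptr's actual API:

```python
import pandas as pd

def top_movers(base: pd.DataFrame, alt: pd.DataFrame, n: int = 3) -> pd.Series:
    """Rank variables by mean absolute difference between two runs.

    Assumes both frames share the same index (periods) and the same
    columns (model variables).
    """
    delta = (alt - base).abs().mean()
    return delta.sort_values(ascending=False).head(n)

# Toy data standing in for two parsed runs
idx = ["2024Q1", "2024Q2"]
base = pd.DataFrame({"GDPR": [100.0, 101.0], "RS": [5.0, 5.0]}, index=idx)
alt = pd.DataFrame({"GDPR": [100.0, 100.5], "RS": [6.0, 6.0]}, index=idx)

print(top_movers(base, alt))  # RS moves most (mean |delta| = 1.0)
```

The same frame-vs-frame delta can then be written out with `DataFrame.to_csv` for the CSV export mentioned above.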
Getting started¶
Follow the Quickstart guide to configure your model files and run your first scenario.
Meet the mascots¶
| Name | Role |
|---|---|
| Rex (Velociraptor) | `fp.exe` — the original FORTRAN engine |
| Archie (Archaeopteryx) | `fppy` — the pure-Python solver |
| Raptr (Eagle) | Agentic features — MCP server, packs, and workspace authoring |
Architecture at a glance¶
```mermaid
graph LR
    A[YAML Scenario] --> B[ScenarioConfig]
    B --> C[Runner]
    C --> D[fp.exe / fppy]
    D --> E[Parser]
    E --> F[DataFrames]
    F --> G[Reports & Charts]
    F --> H[Dashboard]
    F --> I[Parity Check]
```
Features¶
- Scenario configs — Define runs in YAML with Pydantic validation
- IO parsing — Read FP outputs into pandas DataFrames with canonical keys
- Batch runner — Execute multiple scenarios and compare against golden baselines
- Dependency graph — Trace upstream/downstream variable dependencies with networkx
- Report generation — Markdown run reports and comparison summaries
- Visualization — Matplotlib charts and a 12-page Streamlit dashboard with Plotly
- MCP server — 44 tools for LLM-assisted exploration and scenario authoring
- Managed workspaces — reusable scenario packs and templates for LLM-driven or manual authoring
- Dual engines — Run the FORTRAN binary and pure-Python solver side-by-side for parity validation
- Data pipelines — FRED, BEA, and BLS data integration with safe-lane update workflows
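The dependency-graph feature can be illustrated with plain networkx. The variable names below echo Fair-model conventions (RS, GDPR, UR), but the edges and the `upstream`/`downstream` helpers are illustrative assumptions, not the model's actual structure or fp-wraptr's API:

```python
import networkx as nx

# Toy dependency graph: an edge u -> v means "u feeds into v".
# These edges are illustrative, not the Fair model's real equation graph.
G = nx.DiGraph()
G.add_edges_from([
    ("RS", "I"),      # interest rate -> investment
    ("I", "GDPR"),    # investment -> real GDP
    ("GDPR", "UR"),   # output -> unemployment rate
    ("UR", "WF"),     # unemployment -> wages
])

def upstream(g: nx.DiGraph, var: str) -> set:
    """All variables that directly or indirectly feed into `var`."""
    return nx.ancestors(g, var)

def downstream(g: nx.DiGraph, var: str) -> set:
    """All variables that `var` directly or indirectly affects."""
    return nx.descendants(g, var)

print(sorted(upstream(G, "GDPR")))   # ['I', 'RS']
print(sorted(downstream(G, "RS")))   # ['GDPR', 'I', 'UR', 'WF']
```

Tracing ancestors and descendants like this is what makes "how does a rate shock propagate?" answerable without reading all 130+ equations by hand.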
Documentation¶
- Set up your environment and run your first scenario
- Module layout, data flow, and design decisions
- YAML configuration reference with examples
- Complete command reference for 70+ CLI commands
- 12-page Streamlit dashboard guide
- LLM-assisted scenario design with managed workspaces
- Operator playbook for dual-engine validation
- FRED/BEA/BLS data refresh workflows
- Browse exported forecasts and share results with your team


