How it works
Integrate the router middleware.
Three lines of code; works alongside whatever harness you're building or extending.
Route real tasks.
Each task gets a model choice based on the shared learned policy; outcomes (test pass/fail, cost, latency) feed back automatically.
Earn a stake.
Performance improvements attributable to your data mint tokens to you. Hold or convert to USDC.
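The loop the three steps describe — pick a model per task from a shared learned policy, feed outcomes back — can be sketched as a toy per-category epsilon-greedy bandit. Everything here is illustrative (class and method names are not the actual SDK; a simple bandit stands in for the learned policy):

```python
import random
from collections import defaultdict

class RouterPolicy:
    """Toy stand-in for the shared learned policy (illustrative, not the
    actual SDK): per-category epsilon-greedy choice over candidate models,
    updated from real task outcomes."""

    def __init__(self, models, epsilon=0.1):
        self.models = list(models)
        self.epsilon = epsilon
        # Running outcome stats per (task category, model).
        self.stats = defaultdict(lambda: {"n": 0, "wins": 0.0})

    def choose(self, category):
        if random.random() < self.epsilon:      # explore occasionally
            return random.choice(self.models)
        def estimate(m):                        # otherwise exploit best estimate
            s = self.stats[(category, m)]
            return s["wins"] / s["n"] if s["n"] else 0.5
        return max(self.models, key=estimate)

    def record_outcome(self, category, model, passed):
        # Outcome signal (e.g. test pass/fail) feeds straight back in.
        s = self.stats[(category, model)]
        s["n"] += 1
        s["wins"] += 1.0 if passed else 0.0

random.seed(0)  # reproducible demo
policy = RouterPolicy(["model-a", "model-b"])
for _ in range(200):
    m = policy.choose("refactor")
    passed = (m == "model-b")  # pretend model-b always passes the tests
    policy.record_outcome("refactor", m, passed)
print({m: policy.stats[("refactor", m)] for m in policy.models})
```

After a couple hundred routed tasks the policy's estimates separate the models on this category; the real router learns across all contributors' outcomes rather than one harness's.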
What a heavy integrator earns.
Projected example: a harness routing ~10,000 coding tasks per week contributes outcome data that improves the router's cost-adjusted task success rate by 3 DeltaOne over a quarter, and earns approximately 18,000 tokens. At the projected bonding-curve valuation, that's ~$9,000. Tokens can be held as a position in the router or redeemed for USDC at any time.
| Metric | Projected value |
|---|---|
| Tasks / week | 10,000 |
| Router lift | +3 DeltaOne (3 percentage points of cost-adjusted task success on the shared coding benchmark) |
| Estimated tokens | 18,000 |
| Estimated USDC | ~$9,000 |
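The arithmetic behind the projected figures is simple to make explicit; note that the per-token price is implied by the stated numbers ($9,000 / 18,000 tokens), not quoted anywhere in the example:

```python
# All figures restate the projected example above; the per-token price is
# derived from them, not a published rate.
tasks_per_quarter = 10_000 * 13          # ~10,000 tasks/week over a 13-week quarter
tokens_minted = 18_000                   # projected attribution for the +3 DeltaOne lift
implied_token_price = 9_000 / tokens_minted
usdc_value = tokens_minted * implied_token_price
print(f"{tokens_minted / tasks_per_quarter:.3f} tokens/task, "
      f"${implied_token_price:.2f}/token, ~${usdc_value:,.0f} total")
```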
Pass through to users. OSS harness maintainers can pass 100% of token flow to the engineers whose tasks generate the data, turning ownership into a user-acquisition feature.
Keep as revenue. Commercial harnesses can retain some or all of the token flow as a new revenue line that doesn't require a paywall.
Split. Mix the two. Configurable at integration; changeable later.
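The three options above collapse to a single knob: what fraction of minted tokens flows through to end users. A minimal sketch of what such a config could look like (field and class names are illustrative, not the actual SDK config):

```python
from dataclasses import dataclass

@dataclass
class TokenFlowConfig:
    """Illustrative config: the fraction of minted tokens passed through to
    the end users whose tasks generated the data; the remainder stays with
    the harness. 1.0 = full pass-through (OSS), 0.0 = keep as revenue."""
    pass_through_fraction: float

    def allocate(self, minted_tokens: float) -> dict:
        to_users = minted_tokens * self.pass_through_fraction
        return {"users": to_users, "harness": minted_tokens - to_users}

oss = TokenFlowConfig(pass_through_fraction=1.0)         # pass through to users
commercial = TokenFlowConfig(pass_through_fraction=0.0)  # keep as revenue
split = TokenFlowConfig(pass_through_fraction=0.5)       # mix the two
print(split.allocate(18_000))
```

Because the fraction is just data, "changeable later" is a config update, not a re-integration.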
What the choice layer does vs. what gateways do
Gateways move calls. The choice layer picks which call to make. They solve different problems at different points in the stack — and they compose cleanly.
your harness → Hokusai (picks which call to make) → Gateway (delivers it: failover, retry, rate limits) → model
| | AI Gateways | Hokusai Choice Layer |
|---|---|---|
| Job | Move the call to a model | Pick which call to make |
| Optimizes for | Latency, cost-per-call, failover, uptime | Task outcome and cost-adjusted success |
| How decisions are made | Rules and fallback chains | Learned from real task outcomes |
| Scope of the decision | Provider, region, retry path | Model, prompt, or harness per task |
| How it improves | You update the rules | Automatically, as outcomes accrue |
| What you get | Observability and a unified API | A stake in the model that learns from your data |
Already using a gateway? Keep it. The choice layer sits above your gateway and tells it which model to call.
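Composed, the two layers stay decoupled: the choice layer returns a model id, and the gateway owns delivery. A minimal sketch of that composition — every function name here is illustrative, and a fixed mapping plus a fake provider stand in for the learned policy and the network:

```python
def choose_model(task_category: str) -> str:
    """Choice layer: pick WHICH call to make (a fixed mapping stands in
    for the learned policy in this sketch)."""
    return {"refactor": "model-b", "codegen": "model-a"}.get(task_category, "model-a")

def fake_provider(model: str, prompt: str) -> str:
    # Stand-in for a real provider API; only 'model-b' responds here.
    if model == "model-b":
        return f"{model} handled: {prompt}"
    raise TimeoutError

def gateway_call(model: str, prompt: str, fallbacks=("model-a",), retries=2) -> str:
    """Gateway: deliver the call; owns failover, retry, rate limits."""
    for candidate in (model, *fallbacks):
        for _attempt in range(retries):
            try:
                return fake_provider(candidate, prompt)  # a network call in real life
            except TimeoutError:
                continue  # retry, then fail over to the next candidate
    raise RuntimeError("all providers exhausted")

model = choose_model("refactor")          # choice layer decides
print(gateway_call(model, "rename foo"))  # gateway delivers
```

Swapping the gateway out changes nothing about the decision; swapping the decision logic changes nothing about delivery.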
What we see, what we don't, how you can verify.
What we see
Routing decisions (which model was chosen), outcome signals (test pass/fail, cost, latency), task category embedding, anonymized error class.
What we don't see
Raw source code, proprietary content, customer data, secrets.
How to verify
Open-source SDK; on-chain attribution of contributions and token mint events; auditable outcome log scoped to your account.
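What leaves the harness can be pinned down as a record type carrying only the fields listed above and nothing payload-shaped. This is an illustrative shape, not the SDK's actual schema (which the open-source SDK defines):

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class OutcomeRecord:
    """Illustrative shape of what the router sees per task. Note what is
    absent: no prompt text, no source code, no customer data, no secrets."""
    chosen_model: str          # routing decision
    task_pass: bool            # outcome signal: test pass/fail
    cost_usd: float            # outcome signal: spend
    latency_ms: int            # outcome signal: latency
    category_embedding: tuple  # task category embedding, not raw content
    error_class: Optional[str] # anonymized error class, if any

rec = OutcomeRecord("model-b", True, 0.012, 2400, (0.1, -0.3), None)
print(sorted(asdict(rec).keys()))
```

Auditing reduces to checking that records of this shape are the only thing the SDK emits.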
Where does your routing data go today?
| | Lab-owned auto-routing | Hokusai |
|---|---|---|
| Who captures the optimization signal | The lab | You and the contributors |
| Who keeps the inference cost savings | The lab keeps margin | You |
| What you build over time | Nothing transferable | A token position in the router |
| Portability across harnesses | Locked in | Take your position with you |
| Auditability | Opaque | On-chain attribution |
Integration
Build on the router now, then go deeper into the protocol mechanics when you need the full economic and attribution model.