The choice layer for AI coding.

Learned models that pick the right model, prompt, or harness per task.
Earn a stake every time your data improves them.

How it works

Integrate the router middleware.

Three lines of code; works alongside whatever harness you're building or extending.
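A minimal sketch of what those three lines could look like. The `Router` client and its `route`/`report` methods are illustrative assumptions, not the published SDK; a tiny in-memory stub stands in for the real client so the snippet runs as-is.

```python
class Router:
    """Stub for a hypothetical Hokusai router client."""

    def route(self, task_category: str) -> str:
        # The real client would consult the shared learned policy.
        return "model-a"

    def report(self, task_id: str, passed: bool,
               cost_usd: float, latency_s: float) -> None:
        # The real client would feed the outcome back to the policy.
        pass

# The advertised three lines, inside whatever harness loop you already run:
router = Router()
model = router.route(task_category="bugfix")
router.report(task_id="t-001", passed=True, cost_usd=0.02, latency_s=4.1)
```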

Route real tasks.

Each task gets a model choice based on the shared learned policy; outcomes (test pass/fail, cost, latency) feed back automatically.
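The feedback loop can be sketched with a toy stand-in for the learned policy: track cost-adjusted success per (task category, model) pair, pick the best-scoring model with occasional exploration, and update from every reported outcome. This is an illustrative epsilon-greedy sketch under those assumptions, not the actual policy.

```python
import random
from collections import defaultdict

class LearnedPolicy:
    """Toy stand-in: cost-adjusted success tracking per (category, model)."""

    def __init__(self, models, epsilon=0.1):
        self.models = models
        self.epsilon = epsilon
        self.stats = defaultdict(lambda: {"wins": 0, "trials": 0, "cost": 0.0})

    def choose(self, category):
        if random.random() < self.epsilon:      # explore occasionally
            return random.choice(self.models)
        def score(m):
            s = self.stats[(category, m)]
            if s["trials"] == 0:
                return float("inf")             # try untested models first
            avg_cost = s["cost"] / s["trials"]
            return (s["wins"] / s["trials"]) / (1.0 + avg_cost)
        return max(self.models, key=score)      # exploit best-scoring model

    def observe(self, category, model, passed, cost_usd):
        s = self.stats[(category, model)]
        s["trials"] += 1
        s["wins"] += int(passed)
        s["cost"] += cost_usd

policy = LearnedPolicy(["model-a", "model-b"])
random.seed(0)
for i in range(200):
    m = policy.choose("bugfix")
    # Simulated outcomes: model-a passes tests 80% of the time, model-b 50%.
    passed = random.random() < (0.8 if m == "model-a" else 0.5)
    policy.observe("bugfix", m, passed, cost_usd=0.02)

final = policy.choose("bugfix")
```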

Earn a stake.

Performance improvements attributable to your data mint tokens to you. Hold or convert to USDC.

What a heavy integrator earns.

Projected example: a harness routing ~10,000 coding tasks per week, contributing outcome data that improves the router's cost-adjusted task success rate by 3 percentage points over a quarter, earns approximately 18,000 tokens. At today's projected bonding-curve valuation, that's ~$9,000. Tokens are held as a position in the router or redeemed for USDC at any time.

Tasks / week: 10,000 (projected)

Router lift: +3 percentage points (cost-adjusted task success on the shared coding benchmark)

Estimated tokens: 18,000 (projected)

Estimated USDC: ~$9,000 (projected)
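The stated figures imply the rest of the arithmetic. A quick check of the projected example, where the per-token price and per-task yield are derived from the numbers above, not quoted anywhere:

```python
# Back out the implied numbers in the projected example.
tasks_per_week = 10_000
weeks_per_quarter = 13          # assumption: a 13-week quarter
tokens_earned = 18_000
usdc_value = 9_000

tasks_per_quarter = tasks_per_week * weeks_per_quarter   # 130,000 tasks
implied_token_price = usdc_value / tokens_earned         # $0.50 per token
tokens_per_task = tokens_earned / tasks_per_quarter      # ~0.14 tokens/task
```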

Pass through to users. OSS harness maintainers can pass 100% of token flow to the engineers whose tasks generate the data. Turns ownership into a user-acquisition feature.

Keep as revenue. Commercial harnesses can retain some or all of the token flow as a new revenue line that doesn't require a paywall.

Split. Mix the two. Configurable at integration; changeable later.
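One way the split could be expressed at integration time. The `TokenFlowConfig` shape and its field names are hypothetical, not a documented schema:

```python
from dataclasses import dataclass

@dataclass
class TokenFlowConfig:
    """Hypothetical integration-time config for routing the token flow."""
    pass_through_pct: int   # share minted directly to end users
    retained_pct: int       # share kept by the harness as revenue

    def __post_init__(self):
        if self.pass_through_pct + self.retained_pct != 100:
            raise ValueError("shares must sum to 100")

oss_default = TokenFlowConfig(pass_through_pct=100, retained_pct=0)
commercial = TokenFlowConfig(pass_through_pct=0, retained_pct=100)
split = TokenFlowConfig(pass_through_pct=60, retained_pct=40)
```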

What the choice layer does vs. what gateways do

Gateways move calls. The choice layer picks which call to make. They solve different problems at different points in the stack — and they compose cleanly.

your harness → Hokusai → Gateway → model

AI Gateways vs. Hokusai Choice Layer

Job: gateways move the call to a model; the choice layer picks which call to make.
Optimizes for: gateways optimize latency, cost-per-call, failover, and uptime; the choice layer optimizes task outcome and cost-adjusted success.
How decisions are made: gateways use rules and fallback chains; the choice layer learns from real task outcomes.
Scope of the decision: gateways decide provider, region, and retry path; the choice layer decides model, prompt, or harness per task.
How it improves: you update a gateway's rules; the choice layer improves automatically as outcomes accrue.
What you get: gateways give observability and a unified API; the choice layer gives a stake in the model that learns from your data.

Already using a gateway? Keep it. The choice layer sits above your gateway and tells it which model to call.
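The composition can be sketched as two stubs: the choice layer makes the decision, the gateway moves the call. Both classes below are illustrative assumptions, not real SDKs.

```python
class ChoiceLayer:
    """Stub: decides which call to make."""
    def pick_model(self, task_category: str) -> str:
        return "model-a"    # real version: shared learned policy

class Gateway:
    """Stub: moves the call (provider routing, retries, failover)."""
    def call(self, model: str, prompt: str) -> str:
        return f"[{model}] response"

choice = ChoiceLayer()
gateway = Gateway()

model = choice.pick_model("refactor")              # decision layer
result = gateway.call(model, "Rename foo to bar")  # transport layer
```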

What we see, what we don't, how you can verify.

What we see

Routing decisions (which model was chosen), outcome signals (test pass/fail, cost, latency), task category embedding, anonymized error class.

What we don't see

Raw source code, proprietary content, customer data, secrets.

How to verify

Open-source SDK; on-chain attribution of contributions and token mint events; auditable outcome log scoped to your account.

Where does your routing data go today?

Lab-owned auto-routing vs. Hokusai

Who captures the optimization signal: the lab, vs. you and the contributors.
Who keeps the inference cost savings: the lab keeps the margin, vs. you keep it.
What you build over time: nothing transferable, vs. a token position in the router.
Portability across harnesses: locked in, vs. take your position with you.
Auditability: opaque, vs. on-chain attribution.

Integration

Build on the router now, then go deeper into the protocol mechanics when you need the full economic and attribution model.