Submit a Model for Contract Clause Extractor

Extracts and classifies legal clauses from enterprise contracts

Category: Legal
Benchmark Requirements

Performance Target

Metric: F1 Score
Target Value: 0.85 (higher is better)

Harmonic mean of precision and recall
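
For intuition, here is a minimal Python sketch of the F1 computation (the example numbers are illustrative):

def f1_score(precision: float, recall: float) -> float:
    # Harmonic mean of precision and recall; defined as 0.0 when both are zero.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: precision 0.90 and recall 0.80 give F1 ≈ 0.847, just under the 0.85 target.
print(f1_score(0.90, 0.80))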

Token Reward

1,000 CLAUS per verified DeltaOne

Tokens minted per unit improvement in F1 Score

How to Submit Your Model
1. Create Model Entry

Register your model in the Hokusai marketplace. We've pre-filled the benchmark requirements from this proposal. You'll configure your model details, token economics, and performance tracking.

2. Register Your Model

After creating your model entry, register your trained model with the Hokusai registry using the SDK.

Install SDK

pip install git+https://github.com/Hokusai-protocol/hokusai-data-pipeline.git#subdirectory=hokusai-ml-platform

Configure API Key

export HOKUSAI_API_KEY="your-hokusai-api-key-here"

Register Model

The following script registers Contract Clause Extractor with the CLAUS token:

import os
import mlflow
from hokusai.core import ModelRegistry

# Set up MLflow tracking URI
mlflow.set_tracking_uri("https://registry.hokus.ai/api/mlflow")

# IMPORTANT: Use your Hokusai API key, NOT an MLflow token
# The Hokusai API key authenticates both the registry and MLflow
os.environ["MLFLOW_TRACKING_TOKEN"] = os.getenv("HOKUSAI_API_KEY")

# Initialize registry
registry = ModelRegistry()

# Register your model
with mlflow.start_run() as run:
    # Log your model (replace with your actual model)
    mlflow.sklearn.log_model(
        your_trained_model,
        "model",
        registered_model_name="Contract Clause Extractor"
    )

    # Register with Hokusai
    model_uri = f"runs:/{run.info.run_id}/model"
    registered_model = registry.register_tokenized_model(
        model_uri=model_uri,
        model_name="Contract Clause Extractor",
        token_id="CLAUS",
        metric_name="f1_score",   # must match the benchmark metric (F1 Score)
        baseline_value=0.85,      # baseline aligned to this proposal's target
        additional_tags={"author": "your-name", "version": "1.0"}
    )

    print(f"✅ Model registered successfully: {registered_model.name}")
3. Trigger Evaluation

Once your model is registered, trigger an evaluation run against the benchmark dataset. The system automatically measures your model's performance against the target criteria; a hypothetical SDK sketch follows the list below.

Evaluation process:

  • Your model runs on the specified evaluation dataset
  • Performance is measured using F1 Score
  • Results are verified and recorded on-chain
  • Token rewards are calculated based on performance improvement
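
The exact trigger call depends on the Hokusai SDK version. The sketch below is hypothetical: evaluate_model and its parameters are illustrative placeholders, not confirmed API; consult the SDK documentation for the actual method.

from hokusai.core import ModelRegistry

registry = ModelRegistry()

# HYPOTHETICAL: method and parameter names are illustrative only.
evaluation = registry.evaluate_model(
    model_name="Contract Clause Extractor",
    metric_name="f1_score",
)
print(evaluation.status)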
4. Claim Your Rewards

If your model meets or exceeds the benchmark target, you'll earn token rewards. Tokens are automatically minted and can be claimed from your dashboard.

What to Expect

Evaluation Timeline

Evaluation runs typically complete within 5-30 minutes, depending on model complexity and dataset size. You'll receive a notification when the evaluation completes.

Performance Verification

All evaluation results are verified and recorded on-chain to ensure transparency and prevent manipulation. Your model's performance will be publicly visible.

Token Distribution

Tokens are minted automatically when your model achieves a verified performance improvement. The amount equals the delta between your model's score and the baseline, multiplied by the tokens-per-DeltaOne rate (1,000 CLAUS per full unit of F1 improvement).
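
As a worked example (the achieved score below is hypothetical; the formula follows the description above):

# Illustrative reward arithmetic: delta over baseline times the per-DeltaOne rate.
TOKENS_PER_DELTA_ONE = 1_000  # CLAUS per full unit of F1 improvement

baseline_f1 = 0.85   # baseline from this proposal
achieved_f1 = 0.88   # hypothetical verified result

delta = achieved_f1 - baseline_f1        # ≈ 0.03
reward = delta * TOKENS_PER_DELTA_ONE    # ≈ 30 CLAUS
print(f"Reward: {reward:.2f} CLAUS")     # Reward: 30.00 CLAUS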

Support & Documentation

Need help? Visit our model submission guide for detailed instructions, code examples, and troubleshooting tips.