# netclaw model

Assign LLM models to roles: Main, Fallback, and Compaction. Run `netclaw model` for an interactive TUI, or use subcommands to script it.

You need at least one provider configured first. If you haven’t added one, see `netclaw provider`.

```sh
netclaw model                         # launch TUI
netclaw model <subcommand> [options]  # CLI mode
```

Three roles, only Main is required:

| Role | Purpose | When unset |
| --- | --- | --- |
| Main | Primary model for all interactions | Required — cannot be cleared |
| Fallback | Automatic failover when Main is unavailable (rate limits, network errors, provider outages) | Falls back to Main |
| Compaction | Context summarization with a cheaper/faster model | Falls back to Main |

Role names are case-insensitive.
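Put together, the role rules above can be pictured as a small lookup; `resolve_model` below is a hypothetical illustration of the fallback behavior, not netclaw's actual code.

```python
def resolve_model(models: dict, role: str) -> dict:
    """Illustrative lookup: role names are case-insensitive, and an
    unset Fallback or Compaction assignment falls back to Main.
    (Hypothetical sketch, not netclaw's implementation.)"""
    by_role = {name.lower(): cfg for name, cfg in models.items()}
    key = role.lower()
    if key not in ("main", "fallback", "compaction"):
        raise ValueError(f"unknown role: {role}")
    # Main is required, so this lookup always has a final answer.
    return by_role.get(key) or by_role["main"]

models = {"Main": {"Provider": "remote-gpu", "ModelId": "qwen3:30b"}}
print(resolve_model(models, "Compaction")["ModelId"])  # qwen3:30b
```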

Compaction runs automatically when a session’s context approaches the model’s token limit.
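The trigger amounts to a threshold check on the session's token count; the 80% cutoff and the function below are illustrative assumptions, not netclaw's documented behavior.

```python
def should_compact(used_tokens: int, context_window: int,
                   threshold: float = 0.8) -> bool:
    """Hypothetical trigger: summarize once the session's token
    count crosses a fraction of the model's context window. The
    0.8 threshold is an assumption for illustration only."""
    return used_tokens >= int(context_window * threshold)

# With a 32,768-token window this would fire at 26,214 tokens:
print(should_compact(20_000, 32_768))  # False
print(should_compact(27_000, 32_768))  # True
```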

```sh
netclaw model
```

*Model Manager TUI showing role assignments with Main configured and Fallback/Compaction unset*

Select a role to reassign it, or use the hotkeys:

| Key | Action |
| --- | --- |
| ↑ / ↓ | Navigate roles |
| Enter | Assign model to selected role |
| D | Discover available models from a provider |
| C | Clear optional role (Fallback or Compaction) |
| Esc | Back / Quit (from role overview) |
| Ctrl+Q | Quit from any screen |

Enter opens the assignment flow: pick a provider, discover its models, confirm. Discovery times out after 20 seconds and shows up to 30 results. If yours isn’t listed, pick “Enter model ID manually…” and type it in (e.g., `qwen3:30b`, `llama3.2:latest`).

```sh
netclaw model list
```

```
Role        Provider    Model ID    Context Window
Main        remote-gpu  qwen3:30b   32,768 tokens
Fallback    remote-gpu  qwen3:8b    (default)
Compaction  (not set)
```

Context window shows `(default)` unless you’ve set an explicit `--context-window` value.

This reads from config, not from the running daemon. With no models configured, it prompts you to run `netclaw model set` or open the TUI.
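Because `list` reads the config file directly, a script can do the same; a minimal sketch, assuming the config layout documented later on this page (`Models` key in `~/.netclaw/config/netclaw.json`):

```python
import json
from pathlib import Path

CONFIG = Path.home() / ".netclaw" / "config" / "netclaw.json"

def read_assignments(path: Path = CONFIG) -> dict:
    """Return role assignments straight from netclaw's config file,
    or an empty dict when nothing is configured yet."""
    if not path.exists():
        return {}
    return json.loads(path.read_text()).get("Models", {})

for role, cfg in read_assignments().items():
    print(role, cfg.get("Provider"), cfg.get("ModelId"))
```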

```sh
netclaw model set <role> <provider> <model-id> [--context-window <tokens>]
```

| Flag | Description | Default |
| --- | --- | --- |
| `--context-window <tokens>` | Override context window size (positive integer) | Provider-detected |

Use this when a local model doesn’t report its context window (discovery shows -), or to cap usage below the model’s actual maximum.

The provider must already exist in your config. If it doesn’t, the error lists your configured providers:

```
Error: Provider 'my-cloud' not found in configuration.
Configured providers: remote-gpu, my-anthropic
```

Shrinking Main’s context window prints a warning because existing sessions with longer histories may fail until compacted.

Restart the daemon for changes to take effect.

```sh
netclaw model discover <provider>
```

The provider must be reachable. This queries its API live:

```
Model ID         Context Window   Cost (in/out per 1M)
claude-opus-4-1  200,000          $15.00 / $75.00
claude-sonnet-4  200,000          $3.00 / $15.00
gpt-4-turbo      128,000          $10.00 / $30.00
gpt-4o           128,000          $5.00 / $15.00

4 model(s) found.
```

Cost and context window columns show - when the provider doesn’t report them (common with Ollama and OpenAI-compatible endpoints).

```sh
netclaw model clear <role>
```

Clears Fallback or Compaction. Cannot clear Main:

```
Error: Cannot clear the main model role. Use `netclaw model set main` to change it instead.
```

Cleared roles are removed from the config file entirely.
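That removal can be sketched as a plain key deletion; `clear_role` below is a hypothetical mirror of the rules above, not netclaw's code.

```python
def clear_role(models: dict, role: str) -> dict:
    """Illustrative `model clear` semantics: a cleared Fallback or
    Compaction key is deleted outright (not set to null), and Main
    is protected. Hypothetical sketch only."""
    key = role.strip().lower().capitalize()  # role names are case-insensitive
    if key == "Main":
        raise ValueError("Cannot clear the main model role.")
    if key not in ("Fallback", "Compaction"):
        raise ValueError(f"unknown role: {role}")
    models.pop(key, None)
    return models

models = {"Main": {"ModelId": "qwen3:30b"}, "Fallback": {"ModelId": "qwen3:8b"}}
clear_role(models, "FALLBACK")
print(sorted(models))  # ['Main']
```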

Terminal window
# Set main model on a remote Ollama server
netclaw model set main remote-gpu qwen3:30b --context-window 32768
# Add a smaller fallback model on the same provider
netclaw model set fallback remote-gpu qwen3:8b
# Use a cloud model for compaction
netclaw model set compaction my-anthropic claude-sonnet-4
# See what models an Ollama server has available
netclaw model discover my-ollama
# Remove the fallback assignment
netclaw model clear fallback

Assignments live in `~/.netclaw/config/netclaw.json` under the `Models` key:

```json
{
  "Models": {
    "Main": {
      "Provider": "remote-gpu",
      "ModelId": "qwen3:30b",
      "ContextWindow": 32768
    },
    "Fallback": {
      "Provider": "remote-gpu",
      "ModelId": "qwen3:8b",
      "ContextWindow": 32768
    }
  }
}
```

Restart the daemon afterward.

Exit code: 0 on success; 1 on invalid arguments, an unknown provider, or a validation failure.

After setting your models, run `netclaw status` to confirm the daemon picked them up.