X-User-Id header. The standard setup flow is
tm auth login followed by tm init --sync, which derives and stores the
gateway user id plus the rest of the managed gateway state under [managed] in
the active config.toml state root. When TM_CONFIG_HOME is unset, that file
is ~/.config/tensormesh/config.toml. Use tm auth whoami when you need a live
Control Plane token check.
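The setup flow above, as a command sketch (requires the tm CLI on your PATH; only commands named in this guide are used):

```shell
# Authenticate against the Control Plane, then derive and store
# the gateway user id and managed gateway state under [managed].
tm auth login
tm init --sync

# Optional: live Control Plane token check.
tm auth whoami
```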
If tm is not already on your PATH, install the CLI first (see
Installation). The commands below assume tm is on
your PATH. If you are running from this repo checkout without activating a
shell that already exposes tm, use ./.venv/bin/tm.
If you want the shortest serverless request instead, check
tm billing pricing serverless list before using that shortcut.
If you are not sure what is already configured locally, run tm init first.
Use tm infer doctor when you specifically want to check whether the On-Demand path is ready for a direct request or for --model @latest.
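The two preflight checks above can be run back to back (a sketch; both commands appear in this guide, and their output shapes are not shown here):

```shell
# Inspect what is already configured locally.
tm init

# Verify the On-Demand path is ready for a direct request
# or for --model @latest.
tm infer doctor
```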
Set Gateway Credentials
tm init --sync syncs the available gateway settings from the Control Plane.
When the command runs with --controlplane-base, it also persists that
controlplane_base into the active config.toml so later @latest requests
stay on the same environment. When a served deployment
already exists, the sync includes the served gateway model name. Use that
served gateway model name here, not the Control Plane
modelId UUID. If no served model exists yet, tm init --sync can still sync
the API key and user id, but you will need to deploy a model or pass an
explicit served model name later before On-Demand chat can succeed.
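For the raw HTTP equivalent of an On-Demand chat request (see the API Quickstart), the request carries the X-User-Id header and the served gateway model name, not the Control Plane modelId UUID. A minimal sketch, assuming an OpenAI-compatible /v1/chat/completions path and hypothetical gateway base, API key, and user id values that would normally come from [managed] in config.toml:

```python
import json
import urllib.request

# Hypothetical placeholders; in practice these are synced into
# [managed] in config.toml by tm init --sync.
GATEWAY_BASE = "https://gateway.example.com"  # assumed gateway base URL
API_KEY = "tm-example-key"                    # assumed synced API key
USER_ID = "user-1234"                         # assumed derived gateway user id
SERVED_MODEL = "my-served-model"              # served gateway model name, NOT the modelId UUID


def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an On-Demand chat request carrying the required X-User-Id header."""
    payload = {
        "model": SERVED_MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{GATEWAY_BASE}/v1/chat/completions",  # assumed OpenAI-compatible path
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "X-User-Id": USER_ID,  # required on every gateway request
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("Hello")
```

Sending the request (for example via urllib.request.urlopen) is omitted; the point is the header and model-name shape of the payload.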
Send A Chat Request
Using @latest
@latest asks the CLI to resolve the served gateway model from Control Plane inventory before sending the gateway request.
The gateway request still requires the X-User-Id header even when @latest is used, so keep the normal tm init --sync step in place before this shortcut.
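Putting the shortcut together (a sketch; it assumes tm infer chat accepts the --model flag shown earlier, and that tm init --sync has already run):

```shell
# Resolve the served gateway model from Control Plane inventory,
# then send the chat request through the gateway.
tm infer chat --model @latest
```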
Related Reference
- tm infer
- tm infer doctor
- tm infer chat
- tm init
- tm billing pricing serverless list
- tm models list
- API Quickstart for raw HTTP equivalents of these flows

