If tm is not already on your PATH, install the CLI first by following
Installation. The commands below assume tm is on
your PATH. If you are running from this repo checkout without activating a
shell that already exposes tm, use ./.venv/bin/tm instead.
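For example, from the repo checkout you can invoke the CLI directly (assuming the conventional --help flag):

```shell
# Call the repo-local binary without activating the virtualenv.
./.venv/bin/tm --help
```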
Path 1: Fastest Serverless Request
Choose this when you already have both:
- an inference API key
- a valid serverless model name for your target host
To discover valid serverless model names, run tm billing pricing serverless list. That discovery command requires tm auth login for the same Tensormesh environment.
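Put together, a minimal discovery flow looks like this (output omitted):

```shell
# Authenticate against the target Tensormesh environment first.
tm auth login

# Then list serverless models and pricing to pick a valid model name.
tm billing pricing serverless list
```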
Path 2: Standard On-Demand Setup
Choose this when you want the most guided CLI flow and are okay using the synced On-Demand configuration.

tm init --sync syncs the available managed gateway settings into the active
config.toml state root. When TM_CONFIG_HOME is unset, that file is
~/.config/tensormesh/config.toml. If a served On-Demand deployment already
exists, that includes the served gateway model name used for the default
request flow. If no served model exists yet, gateway_model_id stays unset
while the API key and user id can still sync.
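As a rough sketch, the synced section of config.toml might look like the following; apart from gateway_model_id, the key names here are assumptions for illustration only:

```toml
# Illustrative only: values synced by `tm init --sync`.
[managed]
# gateway_model_id stays unset until an On-Demand deployment is served.
gateway_model_id = "example-gateway-model"
# The API key and user id can sync even without a served model
# (these key names are hypothetical).
inference_api_key = "..."
user_id = "..."
```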
When you are targeting a different Control Plane host, pass the explicit
--controlplane-base flag to tm init --sync once so the active config.toml
persists that host for later @latest resolution and other Control Plane-assisted flows.
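For example (the host URL is a placeholder):

```shell
# Persist a non-default Control Plane host in the active config.toml.
tm init --sync --controlplane-base https://controlplane.example.com
```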
For automation or CI, add --exit-status to readiness checks so they fail the shell when required prerequisites are still missing:
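A minimal CI guard might look like the sketch below; which readiness commands accept --exit-status is an assumption here:

```shell
# Each check exits nonzero when a required prerequisite is missing,
# which fails the CI step.
set -e
tm auth status --exit-status
tm infer doctor --exit-status
```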
Understand The Readiness Commands
Use the readiness commands like this:
- tm auth status: local Control Plane and gateway credential presence
- tm infer doctor: local readiness for the next inference request
- tm doctor: broader local config and credential diagnosis across CLI setup
- tm auth whoami: live Control Plane auth check
tm auth status, tm infer doctor, and tm doctor are local checks. They do not prove that a live API call will succeed.
After the local checks look good, use tm auth whoami as the explicit live validation step for your current Control Plane login.
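A typical verification sequence, ordered from local checks to the live check:

```shell
# Local checks: config, credentials, and inference readiness on this machine.
tm auth status
tm infer doctor
tm doctor

# Live check: confirms the current Control Plane login actually works.
tm auth whoami
```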
First Working On-Demand Request
After tm init shows that On-Demand inference credentials are ready, send a request like this:
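The exact request subcommand is not shown in this guide; as a purely hypothetical sketch, a first request might take the shape below, where the subcommand name and prompt argument are assumptions and only --model @latest comes from this guide:

```shell
# Hypothetical shape only; substitute the real tm inference subcommand.
tm infer chat --model @latest "Hello from Tensormesh"
```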
The default request flow resolves the model via --model @latest, so keep the normal tm init --sync step in the flow.
What tm init Helps You Check
- whether the current config file exists
- whether Control Plane login is ready
- whether [managed] in the active config.toml has synced gateway settings
- whether inference API key, user id, and model name are present
- whether --model @latest can be resolved yet
- the exact next commands to finish setup
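Re-running plain tm init at any point replays this checklist:

```shell
# Reports which of the items above are still missing,
# plus the next commands to finish setup.
tm init
```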

