The tm CLI covers two surfaces:
- Control Plane workflows such as authentication, models, users, billing, support, logs, and metrics.
- Inference API workflows for shared OpenAI-compatible Serverless and On-Demand endpoints, with explicit X-User-Id routing on On-Demand.
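At the HTTP level, the On-Demand distinction can be sketched as an extra request header. Everything below except the X-User-Id header name is an assumption: the key and user id are placeholders, not documented values.

```shell
# Sketch: the wire-level difference for an On-Demand request is the
# explicit routing header; key and user id are placeholders.
API_KEY="${API_KEY:-sk-example}"
COMMON_HEADERS=(-H "Authorization: Bearer $API_KEY" -H "Content-Type: application/json")
ONDEMAND_HEADERS=("${COMMON_HEADERS[@]}" -H "X-User-Id: user-123")

# Inspect the header set that would accompany an On-Demand call:
printf '%s\n' "${ONDEMAND_HEADERS[@]}"
```

Serverless calls would send only the common headers; the gateway uses X-User-Id to attribute and route the On-Demand request.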
The examples below assume tm is available on your PATH. If you are working from this repo checkout without activating a shell that already exposes tm, use ./.venv/bin/tm.
Choose A Starting Path
Use the serverless path when you already know the model name you want: run tm auth login, then tm billing pricing serverless list, and copy the returned pricing[].model value.
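Assuming the pricing list can be rendered as JSON with the pricing[].model shape named above (the payload here is fabricated for illustration), extracting the model names might look like:

```shell
# Fabricated payload mirroring the pricing[].model path from the text.
pricing='{"pricing":[{"model":"example/model-a"},{"model":"example/model-b"}]}'

# List the model names; python3 is used so jq is not required.
echo "$pricing" | python3 -c '
import json, sys
for entry in json.load(sys.stdin)["pricing"]:
    print(entry["model"])
'
# prints:
#   example/model-a
#   example/model-b
```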
For the other verified serverless endpoints, use the same --api-key and --model flow:
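That flow might look like the following. The tm infer chat subcommand is taken from the reference list below and the flags come from the sentence above, but the positional prompt argument is an assumption; check tm infer chat --help for the exact usage.

```shell
# Build the invocation as an array so quoting survives; the key and
# model name are placeholders.
cmd=(tm infer chat --api-key "${TM_API_KEY:-sk-example}" --model "example/model-a" "Say hello")

# Show the command; run it for real with: "${cmd[@]}"
printf '%s ' "${cmd[@]}"; echo
```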
Otherwise, run tm billing pricing serverless list first, or use the On-Demand flow above.
For a first gateway request after setup:
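A minimal sketch, assuming an OpenAI-compatible chat completions path; the gateway URL, model name, and key below are placeholders for whatever your setup produced, not documented values.

```shell
# Fabricated OpenAI-compatible chat payload.
payload='{
  "model": "example/model-a",
  "messages": [{"role": "user", "content": "Hello"}]
}'

# Validate the payload locally before sending.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# Then send it against your real gateway (placeholder URL shown):
# curl -sS "https://gateway.example.com/v1/chat/completions" \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```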
Guides
- Installation
- Getting Started
- Authentication
- First Inference Request
- Control Plane Workflows
- Config And Environment
- Production Scripting
- Admin Workflows
- Troubleshooting
Reference
- Root Command
- Version Command
- Init Command
- Auth Login
- Auth Status
- Auth Whoami
- Config Show
- Infer Doctor
- Infer Chat
- Infer Models
- Infer Responses
- Serverless Pricing List
- Auth Commands
- Config Commands
- Inference Commands
- Models Commands
- Billing Commands
- Activities Commands
- Metrics Commands
- Users Commands
- Products Commands
- Tickets Commands
- Reserved Deployments Commands
- Admin Commands
- Doctor
- Logs

