Serverless quickstarts need a model name such as MiniMaxAI/MiniMax-M2.5.

How To Discover A Serverless Model

If you have Control Plane access for the same Tensormesh environment, log in first and then list the serverless pricing records:
tm auth login
tm billing pricing serverless list
For machine-friendly output:
tm --output json billing pricing serverless list --active-only
Use the returned pricing[].model value in SDK, CLI, or raw API requests.
  • Control Plane pricing records can help you discover published serverless models for the current Tensormesh environment.
  • Serverless requests use a model name string, not a Control Plane modelId UUID.
  • The pricing[].model field is the value to pass as model.
  • The id field is the pricing record id, not the inference model name.
  • The name field is display-oriented; use model when sending requests.
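The field distinctions above can be sketched in code. This is a minimal example of picking the right value out of the JSON output; the record values shown are hypothetical, and real pricing records carry more fields than the three discussed here:

```python
import json

# Hypothetical sample of `tm --output json billing pricing serverless list
# --active-only` output, reduced to the fields discussed above.
raw = """
{
  "pricing": [
    {
      "id": "3f1c9a2e-1111-2222-3333-444455556666",
      "name": "MiniMax M2.5 (Serverless)",
      "model": "MiniMaxAI/MiniMax-M2.5"
    }
  ]
}
"""

records = json.loads(raw)["pricing"]

# Pass `model` in requests -- not `id` (a pricing-record identifier)
# and not `name` (a display label).
model_names = [r["model"] for r in records]
print(model_names[0])  # MiniMaxAI/MiniMax-M2.5
```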

What To Do In Practice

  • If your team already has a known serverless target, use that exact model value in the SDK, CLI, or raw API request.
  • If you have Control Plane access for the same environment, run tm billing pricing serverless list and pick the model from the returned records.
  • If you only have inference credentials, or you are targeting a different serverless host or a base URL override, ask your operator or admin for the exact serverless model string for that host before sending the request.
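Whichever path you take, the request body carries the model name as a plain string. A minimal sketch of building such a payload (the `messages` field shape follows common chat-completion conventions and is an assumption here, not a documented Tensormesh contract):

```python
import json

# Use the pricing[].model string, never a Control Plane modelId UUID.
model = "MiniMaxAI/MiniMax-M2.5"

# Hypothetical request body; field names follow common chat-completion
# conventions and are not taken from Tensormesh documentation.
payload = {
    "model": model,
    "messages": [{"role": "user", "content": "Hello"}],
}

body = json.dumps(payload)
print(body)
```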

Alternative Path

If you would rather use the standard On-Demand setup flow instead of serverless discovery, run tm auth login followed by tm init --sync, and use the synced served gateway model name for that deployment.