Tokenize Text
POST /tokenize
Example request:
curl --request POST \
  --url https://external.nebius.tensormesh.ai/tokenize \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --header 'X-User-Id: <x-user-id>' \
  --data '
{
  "model": "openai-gpt-oss-120b-gpu-type-h200x1_8nic16",
  "prompt": "Hello!"
}
'
Example response (200):
{
  "count": 123,
  "max_model_len": 123,
  "tokens": [
    123
  ],
  "token_strs": [
    "<string>"
  ]
}
Use this page when you need token ids for a specific routed On-Demand model.
  • Auth: Authorization: Bearer <API_KEY>
  • Routing: the X-User-Id: <uuid> header is required
  • Host: use the external Tensormesh host for your provider
  • Model: pass a served On-Demand model name in the JSON request body
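The request described above can be sketched in Python using only the standard library. The URL, header names, and body fields come from this reference; the API key, user id, and model name are placeholders you must supply.

```python
# Minimal sketch of a POST /tokenize call using only the Python standard
# library. API_KEY and USER_ID values are placeholders; the URL, header
# names, and body fields come from the reference on this page.
import json
import urllib.request

TOKENIZE_URL = "https://external.nebius.tensormesh.ai/tokenize"


def build_tokenize_request(api_key: str, user_id: str, model: str, prompt: str):
    """Assemble the headers and JSON body for a POST /tokenize call."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        "X-User-Id": user_id,
    }
    body = {"model": model, "prompt": prompt}
    return headers, body


def tokenize(api_key: str, user_id: str, model: str, prompt: str) -> dict:
    """POST the request and return the parsed JSON response."""
    headers, body = build_tokenize_request(api_key, user_id, model, prompt)
    req = urllib.request.Request(
        TOKENIZE_URL,
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Splitting request assembly from transport keeps the header and body logic testable without hitting the network.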

Authorizations

Authorization (string, header, required)
Bearer authentication using your On-Demand API key. Format: Bearer <API_KEY>

Headers

X-User-Id (string<uuid>, required)
Tensormesh user id used for attribution and routing.

Body

application/json

model (string, required)
On-Demand served model name to use.
Example: "openai-gpt-oss-120b-gpu-type-h200x1_8nic16"

prompt (string, required)
Example: "Hello!"

messages (object[])
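The schema above does not document the shape of messages. Assuming it follows the common chat-message convention of role/content objects, an alternative request body might look like the sketch below; verify the exact shape with your provider before relying on it.

```python
# Hypothetical request body using "messages" instead of "prompt".
# The role/content shape is an assumption (the schema above only says
# object[]); confirm it against your provider's served model.
body = {
    "model": "openai-gpt-oss-120b-gpu-type-h200x1_8nic16",
    "messages": [
        {"role": "user", "content": "Hello!"},
    ],
}
```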

Response

Successful Response

count (integer)
Number of tokens produced for the input.
max_model_len (integer)
Maximum context length of the model, in tokens.
tokens (integer[])
Token ids for the input.
token_strs (string[])
String representation of each token, aligned with tokens.
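A common use of this response is checking whether a prompt fits the model's context window before sending a generation request. A sketch using the fields above (the sample dict is illustrative data, not real API output):

```python
def fits_in_context(resp: dict, reserve_for_output: int = 0) -> bool:
    """True if the tokenized prompt, plus a reserved output budget,
    fits within the model's maximum context length.

    resp is a /tokenize response with "count" and "max_model_len" fields.
    """
    return resp["count"] + reserve_for_output <= resp["max_model_len"]


# Illustrative sample shaped like the response schema above.
sample = {
    "count": 5,
    "max_model_len": 8192,
    "tokens": [1, 2, 3, 4, 5],
    "token_strs": ["H", "ello", "!", "", ""],
}
```

Reserving an output budget matters because generation consumes context beyond the prompt's own token count.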