Documentation Index

Fetch the complete documentation index at: https://docs.tensormesh.ai/llms.txt

Use this file to discover all available pages before exploring further.

Default Provider

The default provider instance targets Tensormesh serverless inference:
import { tensormesh } from "@tensormesh/ai-sdk-provider";

const model = tensormesh("mistralai/Devstral-2-123B-Instruct-2512");
Serverless defaults to https://serverless.tensormesh.ai/v1.
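
Because this is an AI SDK provider, the returned model is meant to drop into standard AI SDK calls. A minimal sketch, assuming generateText from the ai package accepts this model like any other AI SDK provider model:
import { generateText } from "ai";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

// Assumption: the provider follows the usual AI SDK provider contract,
// so the model handle can be passed straight to generateText.
const { text } = await generateText({
  model: tensormesh("mistralai/Devstral-2-123B-Instruct-2512"),
  prompt: "Explain serverless inference in one sentence.",
});

console.log(text);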

Explicit Serverless Provider

Use createTensormesh for serverless when you want to pass settings explicitly, such as a custom fetch, custom headers, or an explicit API key.
import { createTensormesh } from "@tensormesh/ai-sdk-provider";

const tensormesh = createTensormesh({
  apiKey: process.env.TENSORMESH_INFERENCE_API_KEY,
});
Supported settings (several are combined in the sketch after this list):
  • apiKey: inference API key sent as the bearer token. Defaults to the TENSORMESH_INFERENCE_API_KEY environment variable.
  • baseURL: inference API base URL. Defaults to https://serverless.tensormesh.ai/v1.
  • userId: Tensormesh user ID forwarded as the X-User-Id header for on-demand inference. Defaults to the TENSORMESH_USER_ID environment variable.
  • headers: additional headers sent with each request.
  • fetch: custom fetch implementation.
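
For example, a sketch that combines an explicit API key, an extra header, and a logging fetch wrapper. Only apiKey, headers, and fetch come from the settings above; the header name and the logging are illustrative, not part of the provider's API:
import { createTensormesh } from "@tensormesh/ai-sdk-provider";

const tensormesh = createTensormesh({
  apiKey: process.env.TENSORMESH_INFERENCE_API_KEY,
  // Hypothetical extra header, shown only to demonstrate the setting.
  headers: { "X-Request-Source": "docs-example" },
  // Custom fetch that logs each outgoing request before delegating
  // to the global fetch.
  fetch: async (url, init) => {
    console.log("Tensormesh request:", url);
    return fetch(url, init);
  },
});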

On-Demand Inference

For on-demand inference, pass the routed base URL and user ID.
import { createTensormesh } from "@tensormesh/ai-sdk-provider";

const tensormesh = createTensormesh({
  baseURL: "https://YOUR_ON_DEMAND_BASE_URL/v1",
  userId: process.env.TENSORMESH_USER_ID,
});

const model = tensormesh("your-served-model-name");
Pass a /v1 base URL and use the model name exposed by that on-demand endpoint.
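
Once created, the on-demand instance behaves like the serverless one. A short sketch, assuming the model also works with the AI SDK's streamText; the prompt is illustrative and the model name is the placeholder from the snippet above:
import { streamText } from "ai";

// Reuses the on-demand `tensormesh` instance created above.
// Assumption: the provider model streams through the standard
// AI SDK streamText interface.
const result = streamText({
  model: tensormesh("your-served-model-name"),
  prompt: "Stream a short status update.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}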