@tensormesh/ai-sdk-provider adds Tensormesh language models to the Vercel AI SDK. JavaScript and TypeScript applications can keep using AI SDK functions such as generateText and streamText while requests are routed to Tensormesh text inference endpoints. Calling tensormesh(modelId) creates a chat model backed by /v1/chat/completions; calling tensormesh.completionModel(modelId) creates a completion model backed by /v1/completions. Structured output and tool calling are supported through chat model calls. Raw helpers expose additional inference endpoints such as /v1/responses.

Install

npm install ai @tensormesh/ai-sdk-provider
The package requires Node.js 20 or newer.
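As a quick check of the install, a minimal generateText call might look like the sketch below. The model ID is illustrative; substitute any model available to your account.

```typescript
import { generateText } from "ai";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

// The default provider reads TENSORMESH_INFERENCE_API_KEY from the environment.
const { text } = await generateText({
  model: tensormesh("mistralai/Devstral-2-123B-Instruct-2512"),
  prompt: "Summarize what a provider package does in one sentence.",
});

console.log(text);
```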

What It Supports

  • Serverless inference by default at https://serverless.tensormesh.ai/v1
  • On-demand inference by passing a routed baseURL and userId
  • Text generation and streaming with chat models over /v1/chat/completions
  • Text completions over /v1/completions
  • Structured output through AI SDK chat model calls
  • Tool calling through AI SDK chat model calls
  • Direct helper methods for /v1/models, /v1/responses, /tokenize, /detokenize, /health, and /version
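For example, streaming over /v1/chat/completions uses the AI SDK's streamText as usual; only the model construction is Tensormesh-specific. A sketch, again with an illustrative model ID:

```typescript
import { streamText } from "ai";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

const result = streamText({
  model: tensormesh("mistralai/Devstral-2-123B-Instruct-2512"),
  prompt: "Write a haiku about inference latency.",
});

// Tokens arrive incrementally as the model generates them.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```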

Provider API

import { createTensormesh, tensormesh } from "@tensormesh/ai-sdk-provider";

// Default serverless provider.
const model = tensormesh("mistralai/Devstral-2-123B-Instruct-2512");

// Explicit serverless provider configuration.
const serverless = createTensormesh({
  apiKey: process.env.TENSORMESH_INFERENCE_API_KEY,
});

// On-demand inference adds a routed base URL and user id.
const onDemand = createTensormesh({
  baseURL: "https://YOUR_ON_DEMAND_BASE_URL/v1",
  userId: process.env.TENSORMESH_USER_ID,
});
The default provider instance, tensormesh, is equivalent to createTensormesh() with serverless defaults. Both serverless and on-demand inference read TENSORMESH_INFERENCE_API_KEY unless you pass apiKey explicitly. On-demand inference also uses TENSORMESH_USER_ID unless you pass userId explicitly.
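Because structured output goes through chat model calls, the standard AI SDK pattern applies unchanged. A sketch using generateObject with a zod schema (the schema, prompt, and model ID are illustrative, not part of this package's API):

```typescript
import { generateObject } from "ai";
import { z } from "zod";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

// The provider returns a chat model, so generateObject can request
// JSON conforming to the schema below.
const { object } = await generateObject({
  model: tensormesh("mistralai/Devstral-2-123B-Instruct-2512"),
  schema: z.object({
    name: z.string(),
    maxTokens: z.number(),
  }),
  prompt: "Suggest a config name and a max token budget for a chat app.",
});

console.log(object.name, object.maxTokens);
```

Tool calling follows the same shape: pass the AI SDK's tools option to generateText or streamText with a chat model created by tensormesh(modelId).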

Common Guides