@tensormesh/ai-sdk-provider adds Tensormesh language models to the Vercel AI SDK. JavaScript and TypeScript applications can use AI SDK functions such as generateText and streamText, with requests routed to Tensormesh text inference endpoints.
Calling tensormesh(modelId) creates a chat model backed by /v1/chat/completions. Calling tensormesh.completionModel(modelId) creates a completion model backed by /v1/completions. Structured output and tool calling are requested through chat model calls. Raw helpers expose additional inference endpoints such as /v1/responses.
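For example, the two model constructors are used like this (the model IDs below are placeholders; use IDs your deployment actually serves):

```ts
import { tensormesh } from "@tensormesh/ai-sdk-provider";

// Chat model backed by /v1/chat/completions.
const chatModel = tensormesh("example-chat-model");

// Completion model backed by /v1/completions.
const completionModel = tensormesh.completionModel("example-completion-model");
```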
Install
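Assuming the package is published under the name above, install it alongside the Vercel AI SDK:

```bash
npm install @tensormesh/ai-sdk-provider ai
```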
What It Supports
- Serverless inference by default at https://serverless.tensormesh.ai/v1
- On-demand inference by passing a routed baseURL and userId
- Text generation and streaming with chat models over /v1/chat/completions (see the sketch after this list)
- Text completions over /v1/completions
- Structured output through AI SDK chat model calls
- Tool calling through AI SDK chat model calls
- Direct helper methods for /v1/models, /v1/responses, /tokenize, /detokenize, /health, and /version
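As a sketch of the streaming path, assuming serverless defaults and a placeholder model ID:

```ts
import { streamText } from "ai";
import { tensormesh } from "@tensormesh/ai-sdk-provider";

// Streams tokens from /v1/chat/completions as they arrive.
const result = streamText({
  model: tensormesh("example-chat-model"),
  prompt: "Write a haiku about inference.",
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```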
Provider API
The default export tensormesh is equivalent to createTensormesh() with serverless defaults. Both serverless and on-demand inference read TENSORMESH_INFERENCE_API_KEY unless you pass apiKey explicitly. On-demand inference also reads TENSORMESH_USER_ID unless you pass userId explicitly.
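Putting that together, an on-demand configuration might look like the following sketch; the baseURL is a placeholder for whatever routed endpoint you are given:

```ts
import { generateText } from "ai";
import { createTensormesh } from "@tensormesh/ai-sdk-provider";

// apiKey and userId fall back to TENSORMESH_INFERENCE_API_KEY and
// TENSORMESH_USER_ID when not passed explicitly; they are shown here
// for illustration.
const provider = createTensormesh({
  baseURL: "https://example-routed-endpoint.tensormesh.ai/v1", // placeholder
  apiKey: process.env.TENSORMESH_INFERENCE_API_KEY,
  userId: process.env.TENSORMESH_USER_ID,
});

const { text } = await generateText({
  model: provider("example-chat-model"),
  prompt: "Hello from on-demand inference.",
});
console.log(text);
```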

